Varignon's theorem

In Euclidean geometry, Varignon's theorem holds that the midpoints of the sides of an arbitrary quadrilateral form a parallelogram, called the Varignon parallelogram. It is named after Pierre Varignon, whose proof was published posthumously in 1731.[1]

Theorem

The midpoints of the sides of an arbitrary quadrilateral form a parallelogram. If the quadrilateral is convex or concave (not complex), then the area of the parallelogram is half the area of the quadrilateral. If one introduces the concept of oriented areas for n-gons, then this area equality also holds for complex quadrilaterals.[2]

The Varignon parallelogram exists even for a skew quadrilateral, and is planar whether the quadrilateral is planar or not. The theorem can be generalized to the midpoint polygon of an arbitrary polygon.

Proof

With E, F, G, H denoting the midpoints of the sides AB, BC, CD, DA of a quadrilateral ABCD (as in the diagram of the original article), triangles ADC and HDG are similar by the side-angle-side criterion, so angles DAC and DHG are equal, making HG parallel to AC. In the same way EF is parallel to AC, so HG and EF are parallel to each other; the same holds for HE and GF.

Varignon's theorem can also be proved as a theorem of affine geometry organized as linear algebra with the linear combinations restricted to coefficients summing to 1, also called affine or barycentric coordinates. The proof applies even to skew quadrilaterals in spaces of any dimension. Any three points E, F, G can be completed to a parallelogram (lying in the plane containing E, F, and G) by taking its fourth vertex to be E − F + G. In the construction of the Varignon parallelogram this is the point (A + B)/2 − (B + C)/2 + (C + D)/2 = (A + D)/2. But this is the point H in the figure, whence EFGH forms a parallelogram. In short, the centroid of the four points A, B, C, D is the midpoint of each of the two diagonals EG and FH of EFGH, showing that the two midpoints coincide.

From the first proof, one can see that the sum of the diagonals of the quadrilateral equals the perimeter of the parallelogram formed. One can also use vectors half the length of each side to determine first the area of the quadrilateral, and then the areas of the four triangles cut off from it by the sides of the inner parallelogram.

(Figures in the original article show the Varignon parallelogram for a convex quadrilateral, a concave quadrilateral, and a crossed quadrilateral.)

Properties

A planar Varignon parallelogram also has the following properties:
• Each pair of opposite sides of the Varignon parallelogram is parallel to a diagonal of the original quadrilateral.
• A side of the Varignon parallelogram is half as long as the diagonal of the original quadrilateral it is parallel to.
• The area of the Varignon parallelogram equals half the area of the original quadrilateral. This is true for convex, concave and crossed quadrilaterals, provided the area of the latter is defined to be the difference of the areas of the two triangles it is composed of.[2]
• The perimeter of the Varignon parallelogram equals the sum of the diagonals of the original quadrilateral.
• The diagonals of the Varignon parallelogram are the bimedians of the original quadrilateral.
• The two bimedians in a quadrilateral and the line segment joining the midpoints of the diagonals in that quadrilateral are concurrent and are all bisected by their point of intersection.[3]: p.125
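The defining property and the area and perimeter properties above lend themselves to a quick numerical check. The following Python sketch (an added illustration, not part of the original article; the vertices are arbitrary) verifies them for a random quadrilateral ABCD, using the oriented (shoelace) area so that the half-area statement also covers the crossed case:

import random

def cross(p, q):
    # 2D cross product; the shoelace formula sums these around the polygon.
    return p[0] * q[1] - p[1] * q[0]

def signed_area(pts):
    # Oriented (shoelace) area, valid for crossed quadrilaterals as well.
    return 0.5 * sum(cross(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts)))

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

random.seed(1)
A, B, C, D = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(4)]
E, F, G, H = midpoint(A, B), midpoint(B, C), midpoint(C, D), midpoint(D, A)

# EFGH is a parallelogram: the side vectors EF and HG coincide.
assert all(abs((F[i] - E[i]) - (G[i] - H[i])) < 1e-12 for i in range(2))
# Its oriented area is half that of ABCD.
assert abs(signed_area([E, F, G, H]) - 0.5 * signed_area([A, B, C, D])) < 1e-12
# Its perimeter equals the sum of the diagonals AC and BD.
perimeter = dist(E, F) + dist(F, G) + dist(G, H) + dist(H, E)
assert abs(perimeter - (dist(A, C) + dist(B, D))) < 1e-12

The first assertion reflects the vector identity EF = HG = (C − A)/2, which is exactly the affine argument given in the proof above, so the checks pass for every choice of vertices, not just convex ones.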
In a convex quadrilateral with sides a, b, c and d, the length of the bimedian that connects the midpoints of the sides a and c is $m={\tfrac {1}{2}}{\sqrt {-a^{2}+b^{2}-c^{2}+d^{2}+p^{2}+q^{2}}}$ where p and q are the lengths of the diagonals.[4] The length of the bimedian that connects the midpoints of the sides b and d is $n={\tfrac {1}{2}}{\sqrt {a^{2}-b^{2}+c^{2}-d^{2}+p^{2}+q^{2}}}.$ Hence[3]: p.126  $\displaystyle p^{2}+q^{2}=2(m^{2}+n^{2}).$ This is also a corollary of the parallelogram law applied to the Varignon parallelogram.

The lengths of the bimedians can also be expressed in terms of two opposite sides and the distance x between the midpoints of the diagonals. This is possible using Euler's quadrilateral theorem in the above formulas, whence[5] $m={\tfrac {1}{2}}{\sqrt {2(b^{2}+d^{2})-4x^{2}}}$ and $n={\tfrac {1}{2}}{\sqrt {2(a^{2}+c^{2})-4x^{2}}}.$ Note that the two opposite sides in these formulas are not the two that the bimedian connects.

In a convex quadrilateral, there is the following dual connection between the bimedians and the diagonals:[6]
• The two bimedians have equal length if and only if the two diagonals are perpendicular.
• The two bimedians are perpendicular if and only if the two diagonals have equal length.

Special cases

The Varignon parallelogram is a rhombus if and only if the two diagonals of the quadrilateral have equal length, that is, if the quadrilateral is an equidiagonal quadrilateral.[7]

The Varignon parallelogram is a rectangle if and only if the diagonals of the quadrilateral are perpendicular, that is, if the quadrilateral is an orthodiagonal quadrilateral.[6]: p. 14 [7]: p. 169

For a self-crossing quadrilateral, the Varignon parallelogram can degenerate to four collinear points, forming a line segment traversed twice. This happens whenever the polygon is formed by replacing two parallel sides of a trapezoid by the two diagonals of the trapezoid, such as in the antiparallelogram.[8]

See also
• Perpendicular bisector construction of a quadrilateral, a different way of forming another quadrilateral from a given quadrilateral
• Morley's trisector theorem, a related theorem on triangles

Notes
1. Peter N. Oliver: Pierre Varignon and the Parallelogram Theorem. Mathematics Teacher, Vol. 94, No. 4, April 2001, pp. 316–319
2. Coxeter, H. S. M. and Greitzer, S. L. "Quadrangle; Varignon's theorem" §3.1 in Geometry Revisited. Washington, DC: Math. Assoc. Amer., pp. 52–54, 1967.
3. Altshiller-Court, Nathan, College Geometry, Dover Publ., 2007.
4. Mateescu Constantin, Answer to Inequality Of Diagonal
5. Josefsson, Martin (2011), "The Area of a Bicentric Quadrilateral" (PDF), Forum Geometricorum, 11: 155–164.
6. Josefsson, Martin (2012), "Characterizations of Orthodiagonal Quadrilaterals" (PDF), Forum Geometricorum, 12: 13–25.
7. de Villiers, Michael (2009), Some Adventures in Euclidean Geometry, Dynamic Mathematics Learning, p. 58, ISBN 9780557102952.
8. Muirhead, R. F. (February 1901), "Geometry of the isosceles trapezium and the contra-parallelogram, with applications to the geometry of the ellipse", Proceedings of the Edinburgh Mathematical Society, 20: 70–72, doi:10.1017/s0013091500032892

References and further reading
• H. S. M. Coxeter, S. L. Greitzer: Geometry Revisited. MAA, Washington 1967, pp. 52–54
• Peter N. Oliver: Consequences of Varignon Parallelogram Theorem. Mathematics Teacher, Vol. 94, No. 5, May 2001, pp. 406–408

External links

Wikimedia Commons has media related to Varignon's theorem.
• Weisstein, Eric W. "Varignon's theorem". MathWorld.
• Varignon Parallelogram in Compendium Geometry
• A generalization of Varignon's theorem to 2n-gons and to 3D at Dynamic Geometry Sketches, interactive dynamic geometry sketches.
• Varignon parallelogram at cut-the-knot.org
Vashishtha Narayan Singh

Vashishtha Narayan Singh (2 April 1946 – 14 November 2019) was an Indian academic. He was a child prodigy and completed his PhD in 1969. He taught mathematics at various institutes in the 1960s and 1970s. Singh was diagnosed with schizophrenia in the early 1970s and was admitted to a psychiatric hospital. He went missing during a train journey and was found years later. He was again admitted to hospital and later returned to academia in 2014. He was awarded the Padma Shri, the fourth highest civilian award of India, posthumously in 2020.

Born: 2 April 1946, Basantpur, Bhojpur District, British India. Died: 14 November 2019 (aged 73), Patna, Bihar, India. Occupation: Academic. Awards: Padma Shri (2020). Alma mater: Netarhat Residential School; Patna Science College; University of California, Berkeley. Doctoral advisor: John L. Kelley. Institutions: University of Washington; IIT Kanpur; TIFR, Mumbai; I.S.I. Kolkata.

Early life and career

Singh was born on 2 April 1946 to Lal Bahadur Singh, a police constable, and Lahaso Devi in the Basantpur village of the Bhojpur district in Bihar, India.[1][2][3] Singh was a child prodigy.[1] He received his primary and secondary education from Netarhat Residential School, and his college education from Patna Science College.[4][5] He received recognition as a student when Patna University allowed him to appear for the examination in the first year of its three-year BSc (Hons.) Mathematics course, and for the MSc examination the next year.[6][7]

Singh joined the University of California, Berkeley in 1965 and received a PhD for his thesis Reproducing Kernels and Operators with a Cyclic Vector (cycle vector space theory) in 1969 under doctoral advisor John L. Kelley.[8][9][2][1] After receiving his PhD, Singh joined the University of Washington at Seattle as an assistant professor, and then returned to India in 1974 to teach at the Indian Institute of Technology Kanpur.[10] After eight months, he joined the Tata Institute of Fundamental Research (TIFR), Bombay, where he worked in a short-term position. Later he was appointed to the faculty of the Indian Statistical Institute, Kolkata.[11][2][1]

Later life

Singh married Vandana Rani Singh in 1973; they divorced in 1976. He was later diagnosed with schizophrenia.[10][2] With his condition worsening in the late 1970s, he was admitted to the Central Institute of Psychiatry in Kanke (now in Jharkhand) and remained there until 1985.[1] In 1987, Singh returned to his village of Basantpur. He disappeared during a train journey to Pune in 1989 and was found four years later, in 1993, in Doriganj near Chhapra in Saran district.[10][8] He was then admitted to the National Institute of Mental Health and Neurosciences (NIMHANS), Bangalore. In 2002, he was treated at the Institute of Human Behaviour and Allied Sciences (IHBAS), Delhi.[1] In 2014, Singh was appointed a visiting professor at Bhupendra Narayan Mandal University (BNMU) in Madhepura.[12][7][13]

Singh died on 14 November 2019 at Patna Medical College and Hospital in Patna after a prolonged illness.[2][14]

Awards

Singh was awarded the Padma Shri, the fourth highest civilian award of India, posthumously in 2020.[15][16][17]

In popular culture

Filmmaker Prakash Jha announced a biographical film on Singh's life in 2018.[10][18] Singh's brother Ayodhya Prasad Singh, citing pending legal guardianship issues, said that no film rights had been granted.[1][19]

Publication
• Singh, Vashishtha N.
(1974). "Reproducing kernels and operators with a cyclic vector. I." Pacific Journal of Mathematics. 52 (2): 567–584. doi:10.2140/pjm.1974.52.567. ISSN 0030-8730. References 1. "India's unknown beautiful mind". The Economic Times. 16 November 2019. Retrieved 16 November 2019. 2. Jha, Sujeet (14 November 2019). "Mathematician, who challenged Einstein's theory, dies; family made to wait for ambulance". India Today. Archived from the original on 14 November 2019. Retrieved 14 November 2019. 3. Mishra, B. K. (15 November 2019). "Vashishtha Narayan Singh dies: A mathematician who ignited minds". The Times of India. Retrieved 16 November 2019. 4. "India's own beautiful mind?". Business Standard. 5 July 2013. Archived from the original on 9 April 2014. Retrieved 8 April 2014. 5. "Achievements of Netarhat Vidyalay". Netarhat Vidyalay. Archived from the original on 5 February 2014. Retrieved 6 April 2014. 6. "Nation fails its sick maths wizard". The Times of India. Patna. 3 April 2004. Archived from the original on 8 January 2015. Retrieved 7 April 2014. 7. "Maths wizard Vashistha Narayan Singh dies at 78 in Patna hospital". Hindustan Times. 15 November 2019. Retrieved 15 November 2019. 8. "Noted mathematician Vashishtha Singh no more". The Hindu. 15 November 2019. ISSN 0971-751X. Retrieved 15 November 2019. 9. "Vashishtha Narayan Singh". University of California, Berkeley. Archived from the original on 15 February 2014. Retrieved 4 April 2014. 10. "चांद पर पहली बार गया था इंसान, ऐसे की थी वशिष्ठ नारायण ने NASA की मदद". aajtak.intoday.in (in Hindi). 15 November 2019. Retrieved 15 November 2019. 11. "Disturbed Genius in Penury : Former IIT Prof. Vasistha Singh". The PanIIT Alumni Association. Archived from the original on 8 April 2014. Retrieved 6 April 2014. 12. Prasad, Bhuvneshwar (19 April 2013). "Forgotten mathematics legend Vashishtha Narayan Singh back in academia". The Times of India. Patna. Archived from the original on 19 June 2016. Retrieved 7 April 2014. 13. "Noted mathematician Vashishtha Singh dies; hospital denies ambulance to carry his body". The Week. Retrieved 15 November 2019. 14. "Mathematician Vashishtha Narayan Singh Dies In Patna". NDTV.com. Archived from the original on 14 November 2019. Retrieved 15 November 2019. 15. "Padma awards for George, Vashishtha & six others from state". The Times of India. 26 January 2020. Retrieved 26 January 2020. 16. "Arun Jaitley, Sushma Swaraj, George Fernandes given Padma Vibhushan posthumously. Here's full list of Padma award recipients". The Economic Times. 26 January 2020. Retrieved 26 January 2020. 17. "MINISTRY OF HOME AFFAIRS" (PDF). padmaawards.gov.in. Retrieved 25 January 2020. 18. "Prakash Jha to Direct Biopic on Mathematician Vashishtha Narayan Singh". News18. 28 July 2018. Retrieved 15 November 2019. 19. "No authority to make biopic on Vashishtha Narayan Singh: Mathematician's brother Ayodhya Prasad Singh". Free Press Journal. 10 August 2018. Retrieved 16 November 2019. Recipients of Padma Shri in Science & Engineering 1950s • Kshitish Ranjan Chakravorty (1954) • Habib Rahman (1955) • Laxman Mahadeo Chitale (1957) • Ram Prakash Gehlote (1957) • Krishnaswami Ramiah (1957) • Bal Raj Nijhawan (1958) • Benjamin Peary Pal (1958) • Navalpakkam Parthasarthy (1958) • Surendranath Kar (1959) • Om Prakash Mathur (1959) • Homi Sethna (1959) 1960s • Anil Kumar Das (1960) • A. S. Rao (1960) • M. G. K. 
Vasily Vladimirov

Vasily Sergeyevich Vladimirov (Russian: Васи́лий Серге́евич Влади́миров; 9 January 1923 – 3 November 2012) was a Soviet and Russian mathematician working in the fields of number theory, mathematical physics, quantum field theory, numerical analysis, generalized functions, several complex variables, p-adic analysis, and multidimensional Tauberian theorems.

Born: 9 January 1923, Dyaglevo, Novoladozhsky Uyezd, Saint Petersburg Governorate, RSFSR, Soviet Union. Died: 3 November 2012 (aged 89), Odintsovsky District, Moscow Oblast, Russia. Nationality: Russian, Soviet. Alma mater: Leningrad State University (now Saint Petersburg State University), 1959. Known for: number theory, mathematical physics, quantum field theory, numerical analysis, generalized functions, several complex variables, p-adic analysis, multidimensional Tauberian theorems. Awards: Stalin Prize 1953, Lyapunov Gold Medal of the Russian Academy of Sciences 1971, USSR State Prize 1987. Fields: Mathematics and mathematical physics. Institutions: Leningrad State University (now Saint Petersburg State University), Steklov Institute of Mathematics. Doctoral advisor: Nikolay Bogolyubov. Other academic advisors: Boris Venkov, Leonid Kantorovich. (Photograph: Vladimirov in Nice, 1970.)

Life

Vladimirov was born in 1923 in Petrograd, into a peasant family with five children. Amid food shortages and poverty, he began schooling in 1930. He then went to a 7-year school in 1934, but transferred to the Leningrad Technical School of Hydrology and Meteorology in 1937. In 1939, at the age of sixteen, he enrolled in a night preparatory school for workers, and finally progressed to Leningrad University to study physics.

During the Second World War, Vladimirov took part in the defence of Leningrad against the German invasion, building defences, working as a tractor driver and, after training, as a meteorologist in the Air Force. He served in several different units, mainly as part of the air-defence system of Leningrad. After the war he was given the rank of sergeant major in the reserves and permitted to return to his studies.

When he returned to university, Vladimirov shifted his focus of interest from physics to number theory. Under the advice of Boris Alekseevich Venkov (1900–1962), an expert on quadratic forms, he started undertaking research in number theory and attained a master's degree in 1948. In the first thesis of his master's study in Leningrad, he confirmed the existence of a non-extreme perfect quadratic form in six variables, answering a question of Georgy Fedoseevich Voronoy. In his second thesis, he approached packing problems for convex bodies, a subject initiated by Hermann Minkowski. Upon graduation, he was appointed as a junior researcher at the Leningrad Branch of the Steklov Mathematical Institute of the USSR Academy of Sciences.

As the Soviet atomic bomb programme proceeded, Vladimirov was assigned to assist with the development of the bomb, working alongside many top scientists and industrialists. He worked with Leonid Vitalyevich Kantorovich on calculating critical parameters of certain simple nuclear systems. In 1950, when he was sent to Arzamas-16, he worked under the direction of Nikolai Nikolaevich Bogolyubov, who later became a long-term collaborator of Vladimirov. In Arzamas-16, Vladimirov worked on finding mathematical solutions for problems raised by physicists.
He developed new techniques for the numerical solution of boundary value problems, especially for solving the kinetic equation of neutron transfer in nuclear reactors in 1952; his approach is now known as the Vladimirov method. After the success of the bomb project, Vladimirov was awarded the Stalin Prize in 1953 for his contribution. He continued working on mathematics for the atomic bomb at the Central Scientific Research Institute for Artillery Armaments, where he served as a senior researcher in 1955. Vladimirov moved to the Steklov Mathematical Institute, Moscow, in 1956, under the supervision of Nikolay Nikolaevich Bogolyubov.[1] There he started working on new mathematical branches for solving problems in quantum field theory. He defended his doctoral thesis in 1958, which contains the renowned 'Vladimirov variational principle'.[2]

Honours and awards
• Hero of Socialist Labour
• Two Orders of Lenin
• Order of the Patriotic War 2nd class
• Two Orders of the Red Banner of Labour
• Medal of Zhukov
• Medal "For the Defence of Leningrad"
• Medal "For the Victory over Germany in the Great Patriotic War 1941–1945"
• Jubilee Medal "Twenty Years of Victory in the Great Patriotic War 1941–1945"
• Jubilee Medal "Thirty Years of Victory in the Great Patriotic War 1941–1945"
• Jubilee Medal "Forty Years of Victory in the Great Patriotic War 1941–1945"
• Jubilee Medal "50 Years of Victory in the Great Patriotic War 1941–1945"
• Jubilee Medal "60 Years of Victory in the Great Patriotic War 1941–1945"
• Jubilee Medal "50 Years of the Armed Forces of the USSR"
• Jubilee Medal "60 Years of the Armed Forces of the USSR"
• Jubilee Medal "70 Years of the Armed Forces of the USSR"
• Medal "In Commemoration of the 250th Anniversary of Leningrad"
• Medal "Veteran of Labour"
• Medal "In Commemoration of the 850th Anniversary of Moscow"
• Medal "In Commemoration of the 300th Anniversary of Saint Petersburg"
• Stalin Prize
• USSR State Prize

Selected publications
• Vladimirov, V. S. (1966), Ehrenpreis, L. (ed.), Methods of the theory of functions of several complex variables. With a foreword of N.N. Bogolyubov, Cambridge-London: The M.I.T. Press, pp. XII+353, MR 0201669, Zbl 0125.31904 (Zentralblatt review of the original Russian edition). One of the first modern monographs on the theory of several complex variables, differing from other contemporary treatments in its extensive use of generalized functions.
• Vladimirov, V. S. (1979), Generalized functions in mathematical physics, Moscow: Mir Publishers, p. 362, ISBN 978-0-8285-0001-2, MR 0564116, Zbl 0515.46034. A textbook on the theory of generalized functions and their applications to mathematical physics and several complex variables.
• Vladimirov, V.S. (1983), Equations of mathematical physics (2nd ed.), Moscow: Mir Publishers, p. 464, MR 0764399, Zbl 0207.09101 (Zentralblatt review of the first English edition).
• Vladimirov, V.S.; Drozzinov, Yu.N.; Zavialov, B.I. (1988), Tauberian theorems for generalized functions, Mathematics and Its Applications (Soviet Series), vol. 10, Dordrecht-Boston-London: Kluwer Academic Publishers, pp. XV+293, ISBN 978-90-277-2383-3, MR 0947960, Zbl 0636.40003.
• Vladimirov, V.S. (2002), Methods of the theory of generalized functions, Analytical Methods and Special Functions, vol. 6, London-New York City: Taylor & Francis, pp. XII+353, ISBN 978-0-415-27356-5, MR 2012831, Zbl 1078.46029.
A monograph on the theory of generalized functions written with an eye towards their applications to several complex variables and mathematical physics, as is customary for the author; it is a substantial revision of the textbook (Vladimirov 1979).

See also
• Nikolay Bogolyubov
• Generalized function
• Edge-of-the-wedge theorem
• Riemann–Hilbert problem

References
1. "Vasilii Vladimirov - The Mathematics Genealogy Project". www.mathgenealogy.org. Retrieved 2022-06-23.
2. "Vasilii Sergeevich Vladimirov - Biography". Maths History. Retrieved 2022-06-23.

Biographical and general references
• Bolibrukh, Andrey Andreevich; Volovich, Igor Vasil'evich; Faddeev, Lyudvig Dmitrievich; Gonchar, Andrei Aleksandrovich; Kadyshevskii, Vladimir Georgievich; Logunov, Anatoly Alekseevich; Marchuk, Guri Ivanovich; Mishchenko, Evgenii Frolovich; Nikol'skii, Sergei Mikhailovich; Novikov, Sergei Petrovich (2003), "Vasilii Sergeevich Vladimirov (on his 80th birthday)", Uspekhi Matematicheskikh Nauk (in Russian), 58 (1(349)): 199–207, Bibcode:2003RuMaS..58..199B, doi:10.1070/RM2003v058n01ABEH000608, MR 1992146, S2CID 250833289, Zbl 1050.01516.
• Bogolyubov, Nikolai Nikolaevich; Logunov, Anatoly Alekseevich; Marchuk, Guri Ivanovich (1983), "Vasilii Sergeevich Vladimirov (on his sixtieth birthday)", Uspekhi Matematicheskikh Nauk (in Russian), 38 (1(229)): 207–216, Bibcode:1983RuMaS..38..231B, doi:10.1070/RM1983v038n01ABEH003420, MR 0693751, S2CID 250881492, Zbl 0512.01021.
• Gonchar, Andrei Aleksandrovich; Marchuk, Guri Ivanovich; Novikov, Sergei Petrovich (1993), "Vasilii Sergeevich Vladimirov (on his seventieth birthday)", Uspekhi Matematicheskikh Nauk (in Russian), 48 (1(289)): 195–204, Bibcode:1993RuMaS..48..201G, doi:10.1070/RM1993v048n01ABEH001007, MR 1227969, S2CID 250909442, Zbl 0797.01012.

External links
• Vladimirov's academic web page at the Russian Academy of Science.
• Vasily Vladimirov author page at Math-Net.Ru.
• Chuyanov, V.A. (2001) [1994], "Vladimirov method", Encyclopedia of Mathematics, EMS Press
• Drozhzhinov, Yu.N. (2001) [1994], "Vladimirov variational principle", Encyclopedia of Mathematics, EMS Press
• Vasily Vladimirov's obituary (in Russian)
Finite type invariant

In the mathematical theory of knots, a finite type invariant, or Vassiliev invariant (so named after Victor Anatolyevich Vassiliev), is a knot invariant that can be extended (in a precise manner to be described) to an invariant of certain singular knots, in such a way that the extension vanishes on singular knots with m + 1 singularities and does not vanish on some singular knot with m singularities. The invariant is then said to be of type or order m.

We give the combinatorial definition of finite type invariants due to Goussarov and, independently, Joan Birman and Xiao-Song Lin. Let V be a knot invariant. Define its extension $V^{1}$ to knots with one transverse singularity as follows. Consider a knot K to be a smooth embedding of a circle into $\mathbb {R} ^{3}$. Let K' be a smooth immersion of a circle into $\mathbb {R} ^{3}$ with one transverse double point. Then $V^{1}(K')=V(K_{+})-V(K_{-})$, where $K_{+}$ is obtained from K' by resolving the double point by pushing one strand above the other, and $K_{-}$ is obtained similarly by pushing the opposite strand above the other. We can extend V to maps with two transverse double points, three transverse double points, etc., by applying the above relation repeatedly. For V to be of finite type means precisely that there is a positive integer m such that V vanishes on maps with $m+1$ transverse double points. Furthermore, note that there is a notion of equivalence of knots with singularities (the singularities being transverse double points), and V should respect this equivalence. There is also a notion of finite type invariant for 3-manifolds.

Examples

The simplest nontrivial Vassiliev invariant of knots is given by the coefficient of the quadratic term of the Alexander–Conway polynomial. It is an invariant of order two. Modulo two, it is equal to the Arf invariant. Any coefficient of the Kontsevich invariant is a finite type invariant. The Milnor invariants are finite type invariants of string links.[1]
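To see why the quadratic Conway coefficient has order two, one can chase the skein relation through the definition above; the following standard argument is sketched here for convenience and is not part of the original article. The Conway polynomial satisfies $C(K_{+})-C(K_{-})=zC(K_{0})$, where $K_{0}$ is the oriented smoothing of the crossing. Extending $C$ to singular knots by the same rule that defines $V^{1}$ and iterating, a knot $K^{(m)}$ with $m$ double points satisfies $C^{m}(K^{(m)})=z^{m}C(L)$, where $L$ is the link obtained by smoothing all $m$ double points. If $V$ denotes the coefficient of $z^{2}$ in $C$, then $V^{m}(K^{(m)})$ is the coefficient of $z^{2}$ in $z^{m}C(L)$, that is, the coefficient of $z^{2-m}$ in $C(L)$. For $m=3$ this coefficient is zero, because $C$ has no negative powers of $z$, so $V$ vanishes on every knot with three double points; since it is nonzero on some knot with two double points (a singular trefoil), it is of order exactly two.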
Invariants representation

Michael Polyak and Oleg Viro gave a description of the first nontrivial invariants of orders 2 and 3 by means of Gauss diagram representations. Mikhail N. Goussarov proved that all Vassiliev invariants can be represented that way.

The universal Vassiliev invariant

In 1993, Maxim Kontsevich proved the following important theorem about Vassiliev invariants: For every knot one can compute an integral, now called the Kontsevich integral, which is a universal Vassiliev invariant, meaning that every Vassiliev invariant can be obtained from it by an appropriate evaluation. It is not known at present whether the Kontsevich integral, or the totality of Vassiliev invariants, is a complete knot invariant. Computation of the Kontsevich integral, which has values in an algebra of chord diagrams, turns out to be rather difficult and has been done only for a few classes of knots up to now. There is no finite-type invariant of degree less than 11 which distinguishes mutant knots.[2]

See also
• Willerton's fish

References
1. Habegger, Nathan; Masbaum, Gregor (2000), "The Kontsevich integral and Milnor's invariants", Topology, 39 (6): 1253–1289, doi:10.1016/S0040-9383(99)00041-5, preprint.
2. Murakami, Jun. "Finite-type invariants detecting the mutant knots" (PDF).

Further reading
• Victor A. Vassiliev, Cohomology of knot spaces. Theory of singularities and its applications, 23–69, Adv. Soviet Math., 1, American Mathematical Society, Providence, RI, 1990.
• Joan Birman and Xiao-Song Lin, Knot polynomials and Vassiliev's invariants. Inventiones Mathematicae, 111, 225–270 (1993)
• Bar-Natan, Dror (1995). "On the Vassiliev knot invariants". Topology. 34 (2): 423–472. doi:10.1016/0040-9383(95)93237-2.

External links
• Weisstein, Eric W. "Vassiliev Invariant". MathWorld.
• "Finite Type (Vassiliev) Invariants", The Knot Atlas.
Phase contrast magnetic resonance imaging

Phase contrast magnetic resonance imaging (PC-MRI) is a specific type of magnetic resonance imaging used primarily to determine flow velocities. PC-MRI can be considered a method of magnetic resonance velocimetry. It also provides a method of magnetic resonance angiography. Since modern PC-MRI is typically time-resolved, it provides a means of 4D imaging (three spatial dimensions plus time).[2]

(Figure: Vastly undersampled Isotropic Projection Reconstruction (VIPR) of a phase contrast (PC) MRI sequence of a 56-year-old male with dissections of the celiac artery (upper) and the superior mesenteric artery (lower). Laminar flow is present in the true lumen (closed arrow) and helical flow is present in the false lumen (open arrow).[1]) Purpose: method of magnetic resonance angiography.

How it Works

Atomic nuclei with an odd number of protons or neutrons have a spin angular momentum that is randomly oriented in the absence of an external field. When placed in a strong magnetic field, some of these spins align with the axis of the external field, which causes a net 'longitudinal' magnetization. These spins precess about the axis of the external field at a frequency proportional to the strength of that field. Then, energy is added to the system through a radio-frequency (RF) pulse to 'excite' the spins, changing the axis that the spins precess about. These spins can then be observed by receiver coils (radio-frequency coils) using Faraday's law of induction. Different tissues respond to the added energy in different ways, and imaging parameters can be adjusted to highlight desired tissues.

Each spin acquires a phase that depends on the atom's velocity. The phase shift $\phi $ of a spin is a function of the gradient field $\mathbf {G} (t)$:

$\phi =\gamma \int _{0}^{t}\left(B_{0}+\mathbf {G} (\tau )\cdot \mathbf {r} (\tau )\right)d\tau $

where $\gamma $ is the gyromagnetic ratio and $\mathbf {r} $ is defined as

$\mathbf {r} (\tau )=\mathbf {r} _{0}+\mathbf {v} _{r}\tau +{\frac {1}{2}}\mathbf {a} _{r}\tau ^{2}+\ldots $ ,

where $\mathbf {r} _{0}$ is the initial position of the spin, $\mathbf {v} _{r}$ is the spin velocity, and $\mathbf {a} _{r}$ is the spin acceleration. If we only consider static spins and spins in the x-direction, and drop the constant term $\gamma B_{0}t$ common to all spins, we can rewrite the equation for the phase shift as

$\phi =\gamma x_{0}\int _{0}^{t}G_{x}(\tau )d\tau +\gamma v_{x}\int _{0}^{t}G_{x}(\tau )\tau d\tau +\gamma {\frac {a_{x}}{2}}\int _{0}^{t}G_{x}(\tau )\tau ^{2}d\tau +\ldots $

We then assume that acceleration and higher-order terms are negligible, simplifying the expression for the phase to

$\phi =\gamma (x_{0}M_{0}+v_{x}M_{1})$

where $M_{0}$ is the zeroth moment of the x-gradient and $M_{1}$ is its first moment. If we take two acquisitions whose encoding gradients are bipolar waveforms of opposite polarity, each waveform has zero net area ($M_{0}=0$), so the position term drops out; subtracting the two measured phases then yields a phase difference that depends only on velocity:

$\Delta \phi =v(\gamma \Delta M_{1})$

where $\Delta M_{1}=2M_{1}$.[3][4] The phase shift is measured and converted to a velocity according to the following equation:

$v={\frac {v_{enc}}{\pi }}\Delta \phi $

where $v_{enc}$ is the maximum velocity that can be recorded and $\Delta \phi $ is the recorded phase shift. The choice of $v_{enc}$ defines the range of velocities visible, known as the 'dynamic range'. A choice of $v_{enc}$ below the maximum velocity in the slice will induce aliasing in the image, whereby a velocity just greater than $v_{enc}$ is incorrectly calculated as moving in the opposite direction. However, there is a direct trade-off between the maximum velocity that can be encoded and the signal-to-noise ratio of the velocity measurements. This can be described by

$SNR_{v}={\frac {\pi }{\sqrt {2}}}{\frac {v}{v_{enc}}}SNR$

where $SNR$ is the signal-to-noise ratio of the image (which depends on the magnetic field of the scanner, the voxel volume, and the acquisition time of the scan). For example, setting a 'low' $v_{enc}$ (below the maximum velocity expected in the scan) will allow for better visualization of slower velocities (better SNR), but any higher velocities will alias to an incorrect value. Setting a 'high' $v_{enc}$ (above the maximum velocity expected in the scan) will allow for proper velocity quantification, but the larger dynamic range will obscure the smaller velocity features as well as decrease SNR. Therefore, the setting of $v_{enc}$ is application-dependent and care must be exercised in the selection.
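As a concrete toy illustration of this aliasing behaviour (added here; the $v_{enc}$ and velocity values are arbitrary and not from the source), the encode/decode relation above can be simulated in a few lines of Python:

import math

def wrap(phi):
    # Wrap a phase into (-pi, pi], as the measured phase always is.
    return math.atan2(math.sin(phi), math.cos(phi))

def reconstructed_velocity(v_true, venc):
    # Encoding: a spin at velocity v_true acquires phase pi * v_true / venc;
    # decoding applies v = (venc / pi) * delta_phi to the wrapped phase.
    return venc / math.pi * wrap(math.pi * v_true / venc)

venc = 100.0  # cm/s, arbitrary
for v in (40.0, 99.0, 120.0):
    print(f"true {v:6.1f} cm/s  ->  measured {reconstructed_velocity(v, venc):6.1f} cm/s")
# true   40.0 cm/s  ->  measured   40.0 cm/s
# true   99.0 cm/s  ->  measured   99.0 cm/s
# true  120.0 cm/s  ->  measured  -80.0 cm/s   (aliased: appears reversed)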
In order to further allow for proper velocity quantification, especially in clinical applications where the velocity dynamic range of flow is high (e.g. blood flow velocities in vessels across the thoracoabdominal cavity), a dual-echo PC-MRI (DEPC) method with dual velocity encoding in the same repetition time has been developed.[5] The DEPC method not only allows for proper velocity quantification, but also reduces the total acquisition time (especially when applied to 4D flow imaging) compared to a single-echo single-$v_{enc}$ PC-MRI acquisition carried out at two separate $v_{enc}$ values. To allow for more flexibility in selecting $v_{enc}$, instantaneous phase (phase unwrapping) can be used to increase both dynamic range and SNR.[6]

Encoding Methods

When each dimension of velocity is calculated from a pair of acquisitions with oppositely applied gradients, six acquisitions are needed in total; this is known as the six-point method. However, more efficient methods are also used. Two are described here.

Simple Four-point Method

Four sets of encoding gradients are used. The first is a reference and applies a negative moment in $x$, $y$, and $z$. The next applies a positive moment in $x$, and a negative moment in $y$ and $z$. The third applies a positive moment in $y$, and a negative moment in $x$ and $z$. And the last applies a positive moment in $z$, and a negative moment in $x$ and $y$.[7] Then, the velocities can be solved based on the phase information from the corresponding phase encodes as follows:

${\hat {v}}_{x}={\frac {\phi _{x}-\phi _{0}}{\gamma \Delta M_{1}}}$
${\hat {v}}_{y}={\frac {\phi _{y}-\phi _{0}}{\gamma \Delta M_{1}}}$
${\hat {v}}_{z}={\frac {\phi _{z}-\phi _{0}}{\gamma \Delta M_{1}}}$
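A minimal numerical sketch of this inversion (an added illustration: the value standing in for $\gamma M_{1}$, the background phase, and the test velocities are made up, and phase wrapping is ignored):

# Simple four-point reconstruction: forward-model the four phases, then invert.
gM1 = 0.01                   # stands in for gamma * M1 (rad per cm/s); arbitrary
phi_b = 0.3                  # background phase common to all acquisitions
v = (10.0, -5.0, 2.0)        # true (vx, vy, vz) in cm/s

def phase(sx, sy, sz):
    # Phase acquired under first moments of sign (sx, sy, sz) on x, y, z.
    return phi_b + gM1 * (sx * v[0] + sy * v[1] + sz * v[2])

phi_0 = phase(-1, -1, -1)    # reference: negative moment on every axis
phi_x = phase(+1, -1, -1)    # positive moment on x only
phi_y = phase(-1, +1, -1)    # positive moment on y only
phi_z = phase(-1, -1, +1)    # positive moment on z only

gdM1 = 2 * gM1               # gamma * delta-M1, since delta-M1 = 2 * M1
print(round((phi_x - phi_0) / gdM1, 6),   # -> 10.0
      round((phi_y - phi_0) / gdM1, 6),   # -> -5.0
      round((phi_z - phi_0) / gdM1, 6))   # -> 2.0

The background phase and the two unencoded velocity components cancel in each difference, which is the point of the shared reference acquisition.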
Balanced Four-Point Method

The balanced four-point method also includes four sets of encoding gradients. The first is the same as in the simple four-point method, with negative gradients applied in all directions. The second has a negative moment in $x$, and a positive moment in $y$ and $z$. The third has a negative moment in $y$, and a positive moment in $x$ and $z$. The last has a negative moment in $z$ and a positive moment in $x$ and $y$.[8] Writing $\phi _{x}=\gamma \Delta M_{1}v_{x}$ (and similarly for $y$ and $z$), this gives the following system of equations:

$\phi _{2}-\phi _{1}=\phi _{y}+\phi _{z}$
$\phi _{3}-\phi _{1}=\phi _{x}+\phi _{z}$
$\phi _{4}-\phi _{1}=\phi _{x}+\phi _{y}$

Then, the velocities can be calculated:

${\hat {v}}_{x}={\frac {-\phi _{1}-\phi _{2}+\phi _{3}+\phi _{4}}{2\gamma \Delta M_{1}}}$
${\hat {v}}_{y}={\frac {-\phi _{1}+\phi _{2}-\phi _{3}+\phi _{4}}{2\gamma \Delta M_{1}}}$
${\hat {v}}_{z}={\frac {-\phi _{1}+\phi _{2}+\phi _{3}-\phi _{4}}{2\gamma \Delta M_{1}}}$
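A matching sketch for the balanced scheme (self-contained, with the same made-up forward model as above), confirming that each component is recovered from all four acquisitions:

# Balanced four-point: Hadamard-like +/- moment pattern on the three axes.
gM1, phi_b = 0.01, 0.3
v = (10.0, -5.0, 2.0)                      # true (vx, vy, vz) in cm/s
phase = lambda s: phi_b + gM1 * (s[0] * v[0] + s[1] * v[1] + s[2] * v[2])
signs = [(-1, -1, -1), (-1, +1, +1), (+1, -1, +1), (+1, +1, -1)]
p1, p2, p3, p4 = (phase(s) for s in signs)

den = 2 * (2 * gM1)                        # 2 * gamma * delta-M1
print(round((-p1 - p2 + p3 + p4) / den, 6),   # vx -> 10.0
      round((-p1 + p2 - p3 + p4) / den, 6),   # vy -> -5.0
      round((-p1 + p2 + p3 - p4) / den, 6))   # vz -> 2.0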
Retrospective Cardiac and Respiratory Gating

For medical imaging, in order to get highly resolved scans in 3D space and time without motion artifacts from the heart or lungs, retrospective cardiac gating and respiratory compensation are employed. Beginning with cardiac gating, the patient's ECG signal is recorded throughout the imaging process. Similarly, the patient's respiratory patterns can be tracked throughout the scan. After the scan, the continuously collected data in k-space (temporary image space) can be assigned accordingly to match up with the timing of the heart beat and lung motion of the patient. This means that these scans are cardiac-averaged, so the measured blood velocities are an average over multiple cardiac cycles.[9]

Applications

Phase contrast MRI is one of the main techniques for magnetic resonance angiography (MRA). This is used to generate images of arteries (and less commonly veins) in order to evaluate them for stenosis (abnormal narrowing), occlusions, aneurysms (vessel wall dilatations, at risk of rupture) or other abnormalities. MRA is often used to evaluate the arteries of the neck and brain, the thoracic and abdominal aorta, the renal arteries, and the legs (the latter exam is often referred to as a "run-off").

Limitations

In particular, a few limitations of PC-MRI are of importance for the measured velocities:
• Partial volume effects (when a voxel contains the boundary between static and moving materials) can overestimate phase, leading to inaccurate velocities at the interface between materials or tissues.
• Intravoxel phase dispersion (when velocities within a pixel are heterogeneous or in areas of turbulent flow) can produce a resultant phase that does not resolve the flow features accurately.
• The assumption that acceleration and higher orders of motion are negligible can be inaccurate depending on the flow field.
• Displacement artifacts (also known as misregistration and oblique flow artifacts) occur when there is a time difference between the phase and frequency encoding. These artifacts are highest when the flow direction is within the slice plane (most prominent in the heart and aorta for biological flows).[10]

Vastly undersampled Isotropic Projection Reconstruction (VIPR)

A Vastly undersampled Isotropic Projection Reconstruction (VIPR) is a radially acquired MRI sequence which results in high-resolution MRA with significantly reduced scan times and without the need for breath-holding.[11]

References
1. Hartung, Michael P; Grist, Thomas M; François, Christopher J (2011). "Magnetic resonance angiography: current status and future directions". Journal of Cardiovascular Magnetic Resonance. 13 (1): 19. doi:10.1186/1532-429X-13-19. ISSN 1532-429X. PMC 3060856. PMID 21388544. (CC-BY-2.0)
2. Stankovic, Zoran; Allen, Bradley D.; Garcia, Julio; Jarvis, Kelly B.; Markl, Michael (2014). "4D flow imaging with MRI". Cardiovascular Diagnosis and Therapy. 4 (2): 173–192. doi:10.3978/j.issn.2223-3652.2014.01.02. PMC 3996243. PMID 24834414.
3. Elkins, C.; Alley, M.T. (2007). "Magnetic resonance velocimetry: applications of magnetic resonance imaging in the measurement of fluid motion". Experiments in Fluids. 43 (6): 823. Bibcode:2007ExFl...43..823E. doi:10.1007/s00348-007-0383-2. S2CID 121958168.
4. Taylor, Charles A.; Draney, Mary T. (2004). "Experimental and computational methods in cardiovascular fluid mechanics". Annual Review of Fluid Mechanics. 36: 197–231. Bibcode:2004AnRFM..36..197T. doi:10.1146/annurev.fluid.36.050802.121944.
5. Ajala, Afis; Zhang, Jiming; Pednekar, Amol; Buko, Erick; Wang, Luning; Cheong, Benjamin; Hor, Pei-Herng; Muthupillai, Raja (2020). "Mitral Valve Flow and Myocardial Motion Assessed with Dual-Echo Dual-Velocity Cardiac MRI". Radiology: Cardiothoracic Imaging. 3 (2): e190126. doi:10.1148/ryct.2020190126. PMC 7977974. PMID 33778578.
6. Salfity, M.F.; Huntley, J.M.; Graves, M.J.; Marklund, O.; Cusack, R.; Beauregard, D.A. (2006). "Extending the dynamic range of phase contrast magnetic resonance velocity imaging using advanced higher-dimensional phase unwrapping algorithms". Journal of the Royal Society Interface. 3 (8): 415–427. doi:10.1098/rsif.2005.0096. PMC 1578755. PMID 16849270.
7. Pelc, Norbert J.; Bernstein, Matt A.; Shimakawa, Ann; Glover, Gary H. (1991). "Encoding strategies for three-direction phase-contrast MR imaging of flow". Journal of Magnetic Resonance Imaging. 1 (4): 405–413. doi:10.1002/jmri.1880010404. PMID 1790362. S2CID 3000911.
8. Pelc, Norbert J.; Bernstein, Matt A.; Shimakawa, Ann; Glover, Gary H. (1991). "Encoding strategies for three-direction phase-contrast MR imaging of flow". Journal of Magnetic Resonance Imaging. 1 (4): 405–413. doi:10.1002/jmri.1880010404. PMID 1790362. S2CID 3000911.
9. Lotz, Joachim; Meier, Christian; Leppert, Andreas; Galanski, Michael (2002). "Cardiovascular Flow Measurement with Phase-Contrast MR Imaging: Basic Facts and Implementation". Radiographics. 22 (3): 651–671. doi:10.1148/radiographics.22.3.g02ma11651. PMID 12006694.
10. Petersson, Sven; Dyverfeldt, Petter; Gårdhagen, Roland; Karlsson, Matts; Ebbers, Tino (2010). "Simulation of phase contrast MRI of turbulent flow". Magnetic Resonance in Medicine. 64 (4): 1039–1046. doi:10.1002/mrm.22494. PMID 20574963.
11. Page 602 in: Hersh Chandarana (2015). Advanced MR Imaging in Clinical Practice, An Issue of Radiologic Clinics of North America. Vol. 53. Elsevier Health Sciences. ISBN 9780323376181.
Vaughan's identity

In mathematics and analytic number theory, Vaughan's identity is an identity found by R. C. Vaughan (1977) that can be used to simplify Vinogradov's work on trigonometric sums. It is used to estimate summatory functions of the form $\sum _{n\leq N}f(n)\Lambda (n)$ where f is some arithmetic function of the natural numbers n, whose values in applications are often roots of unity, and Λ is the von Mangoldt function.

Procedure for applying the method

The motivation for Vaughan's construction of his identity is briefly discussed at the beginning of Chapter 24 in Davenport. For now, we will skip over most of the technical details motivating the identity and its usage in applications, and instead focus on the setup of its construction by parts. Following the reference, we construct four distinct sums based on the expansion of the logarithmic derivative of the Riemann zeta function in terms of functions which are partial Dirichlet series truncated at the upper bounds $U$ and $V$, respectively. More precisely, we define $F(s)=\sum _{m\leq U}\Lambda (m)m^{-s}$ and $G(s)=\sum _{d\leq V}\mu (d)d^{-s}$, which leads us to the exact identity

$-{\frac {\zeta ^{\prime }(s)}{\zeta (s)}}=F(s)-\zeta (s)F(s)G(s)-\zeta ^{\prime }(s)G(s)+\left(-{\frac {\zeta ^{\prime }(s)}{\zeta (s)}}-F(s)\right)(1-\zeta (s)G(s)).$

This last expansion implies that we can write

$\Lambda (n)=a_{1}(n)+a_{2}(n)+a_{3}(n)+a_{4}(n),$

where the component functions are defined to be

${\begin{aligned}a_{1}(n)&:={\begin{cases}\Lambda (n),&{\text{ if }}n\leq U;\\0,&{\text{ if }}n>U\end{cases}}\\a_{2}(n)&:=-\sum _{\stackrel {mdr=n}{\stackrel {m\leq U}{d\leq V}}}\Lambda (m)\mu (d)\\a_{3}(n)&:=\sum _{\stackrel {hd=n}{d\leq V}}\mu (d)\log(h)\\a_{4}(n)&:=-\sum _{\stackrel {mk=n}{\stackrel {m>U}{k>1}}}\Lambda (m)\left(\sum _{\stackrel {d|k}{d\leq V}}\mu (d)\right).\end{aligned}}$

We then define the corresponding summatory functions for $1\leq i\leq 4$ to be

$S_{i}(N):=\sum _{n\leq N}f(n)a_{i}(n),$

so that we can write

$\sum _{n\leq N}f(n)\Lambda (n)=S_{1}(N)+S_{2}(N)+S_{3}(N)+S_{4}(N).$

Finally, at the conclusion of a multi-page argument of technical and at times delicate estimations of these sums,[1] we obtain the following form of Vaughan's identity when we assume that $|f(n)|\leq 1,\ \forall n$, $U,V\geq 2$, and $UV\leq N$:

$\sum _{n\leq N}f(n)\Lambda (n)\ll U+(\log N)\times \sum _{t\leq UV}\left(\max _{w}\left|\sum _{w\leq r\leq {\frac {N}{t}}}f(rt)\right|\right)+{\sqrt {N}}(\log N)^{3}\times \max _{U\leq M\leq N/V}\max _{V\leq j\leq N/M}\left(\sum _{V<k\leq N/M}\left|\sum _{\stackrel {M<m\leq 2M}{\stackrel {m\leq N/k}{m\leq N/j}}}f(mj){\bar {f(mk)}}\right|\right)^{1/2}.$ (V1)

It is remarked that in some instances sharper estimates can be obtained from Vaughan's identity by treating the component sum $S_{2}$ more carefully, expanding it in the form

$S_{2}=\sum _{t\leq UV}\longmapsto \sum _{t\leq U}+\sum _{U<t\leq UV}=:S_{2}^{\prime }+S_{2}^{\prime \prime }.$

The optimality of the upper bound obtained by applying Vaughan's identity appears to be application-dependent with respect to the best functions $U=f_{U}(N)$ and $V=f_{V}(N)$ we can choose to input into equation (V1). See the applications cited in the next section for specific examples that arise in the different contexts respectively considered by multiple authors.
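Since the decomposition $\Lambda (n)=a_{1}(n)+a_{2}(n)+a_{3}(n)+a_{4}(n)$ is an exact identity of Dirichlet series coefficients, it can be verified term by term by brute force. The following Python sketch is an added illustration, not part of the original exposition; the small parameters $U=V=4$ and $N=60$ are arbitrary, and mangoldt and mobius are naive helper implementations:

from math import log

def mangoldt(n):
    # von Mangoldt Lambda(n): log p if n = p^k for a prime p, else 0.
    if n < 2:
        return 0.0
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
        p += 1
    return log(n)  # n itself is prime

def mobius(n):
    # Moebius mu(n) by trial division.
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

U = V = 4
N = 60

def a_total(n):
    a1 = mangoldt(n) if n <= U else 0.0
    a2 = -sum(mangoldt(m) * mobius(d)
              for m in range(1, U + 1) for d in range(1, V + 1)
              if n % (m * d) == 0)          # r = n/(m*d) runs over divisors
    a3 = sum(mobius(d) * log(n // d) for d in range(1, V + 1) if n % d == 0)
    a4 = -sum(mangoldt(m) * sum(mobius(d) for d in range(1, V + 1)
                                if (n // m) % d == 0)
              for m in range(U + 1, n) if n % m == 0)   # forces k = n/m > 1
    return a1 + a2 + a3 + a4

assert all(abs(a_total(n) - mangoldt(n)) < 1e-9 for n in range(1, N + 1))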
Applications
• Vaughan's identity has been used to simplify the proof of the Bombieri–Vinogradov theorem and to study Kummer sums (see the references and external links below).
• In Chapter 25 of Davenport, one application of Vaughan's identity is to estimate an important prime-related exponential sum of Vinogradov defined by $S(\alpha ):=\sum _{n\leq N}\Lambda (n)e\left(n\alpha \right).$ In particular, we obtain an asymptotic upper bound for these sums (typically evaluated at irrational $\alpha \in \mathbb {R} \setminus \mathbb {Q} $) whose rational approximations satisfy $\left|\alpha -{\frac {a}{q}}\right|\leq {\frac {1}{q^{2}}},(a,q)=1,$ of the form $S(\alpha )\ll \left({\frac {N}{\sqrt {q}}}+N^{4/5}+{\sqrt {Nq}}\right)(\log N)^{4}.$ This estimate follows from Vaughan's identity via a somewhat intricate argument showing that $S(\alpha )\ll \left(UV+q+{\frac {N}{\sqrt {U}}}+{\frac {N}{\sqrt {V}}}+{\frac {N}{\sqrt {q}}}+{\sqrt {Nq}}\right)(\log(qN))^{4},$ and then deducing the first formula above in the non-trivial cases when $q\leq N$ and with $U=V=N^{2/5}$.
• Another application of Vaughan's identity is found in Chapter 26 of Davenport, where the method is employed to derive estimates used in the problem of representing numbers as sums of three primes.
• Further examples of Vaughan's identity in practice are given in the following references.[2][3][4][5]

Generalizations

Vaughan's identity was generalized by Heath-Brown (1982).

Notes
1. N.b.: as Davenport's presentation makes clear, the complete details of these estimations are lengthy and delicate.
2. Tao, T. (2012). "Every integer greater than 1 is the sum of at most five primes". arXiv:1201.6656 [math.NT].
3. Conrey, J. B. (1989). "More than two fifths of the zeros of the Riemann zeta function are on the critical line". J. Reine Angew. Math. 399: 1–26.
4. H. L. Montgomery and R. C. Vaughan (1981). "On the distribution of square-free numbers". Recent Progress in Analytic Number Theory, H. Halberstam (Ed.), C. Hooley (Ed.). 1: 247–256.
5. D. R. Heath-Brown and S. J. Patterson (1979). "The distribution of Kummer sums at prime arguments". J. Reine Angew. Math. 310: 110–130.

References
• Davenport, Harold (31 October 2000). Multiplicative Number Theory (Third ed.). New York: Springer Graduate Texts in Mathematics. ISBN 0-387-95097-4.
• Graham, S.W. (2001) [1994], "Vaughan's identity", Encyclopedia of Mathematics, EMS Press
• Heath-Brown, D. R. (1982), "Prime numbers in short intervals and a generalized Vaughan identity", Can. J. Math., 34 (6): 1365–1377, doi:10.4153/CJM-1982-095-9, MR 0678676
• Vaughan, R.C. (1977), "Sommes trigonométriques sur les nombres premiers", Comptes Rendus de l'Académie des Sciences, Série A, 285: 981–983, MR 0498434

External links
• Proof Wiki on Vaughan's Identity
• Joni's Math Notes (very detailed exposition)
• Encyclopedia of Mathematics
• Terry Tao's blog on the large sieve and the Bombieri-Vinogradov theorem
Vaughan Jones

Sir Vaughan Frederick Randal Jones KNZM FRS FRSNZ FAA (31 December 1952 – 6 September 2020) was a New Zealand mathematician known for his work on von Neumann algebras and knot polynomials. He was awarded a Fields Medal in 1990.

Born: 31 December 1952, Gisborne, New Zealand. Died: 6 September 2020 (aged 67). Alma mater: University of Geneva; University of Auckland. Known for: Jones polynomial; Aharonov–Jones–Landau algorithm. Spouse: Martha Myers. Awards: Fields Medal (1990). Fields: von Neumann algebras, knot polynomials, conformal field theory. Institutions: University of California, Berkeley; Vanderbilt University; University of California, Los Angeles; University of Pennsylvania. Doctoral advisor: André Haefliger. (Photograph: Jones in 2007.)

Early life

Jones was born in Gisborne, New Zealand, on 31 December 1952.[1] He was brought up in Cambridge, New Zealand, where he attended St Peter's School. He subsequently transferred to Auckland Grammar School after winning the Gillies Scholarship,[2] and graduated from there in 1969.[3] He went on to complete his undergraduate studies at the University of Auckland, obtaining a BSc in 1972 and an MSc in 1973. For his graduate studies, he went to Switzerland, where he completed his PhD at the University of Geneva in 1979. His thesis, titled Actions of finite groups on the hyperfinite II1 factor, was written under the supervision of André Haefliger, and won him the Vacheron Constantin Prize.[2]

Career

Jones moved to the United States in 1980. There, he taught at the University of California, Los Angeles (1980–1981), and the University of Pennsylvania (1981–1985), before being appointed as professor of mathematics at the University of California, Berkeley.[4][5]

His work on knot polynomials, with the discovery of what is now called the Jones polynomial, came from an unexpected direction with origins in the theory of von Neumann algebras,[2] an area of analysis already much developed by Alain Connes. It led to the solution of a number of classical problems of knot theory, to increased interest in low-dimensional topology,[6] and to the development of quantum topology.
Jones taught at Vanderbilt University as Stevenson Distinguished Professor of mathematics from 2011 until his death.[7] He remained Professor Emeritus at the University of California, Berkeley, where he had been on the faculty from 1985 to 2011,[8] and was a Distinguished Alumni Professor at the University of Auckland.[9]

Jones was made an honorary vice-president for life of the International Guild of Knot Tyers in 1992.[3] The Jones Medal, created by the Royal Society of New Zealand in 2010, is named after him.[10]

Personal life

Jones met his wife, Martha Myers, during a ski camp for foreign students while they were studying in Switzerland.[11] She was there as a Fulbright scholar,[11] and subsequently became an associate professor of medicine, health and society.[3] Together, they have three children.[2][3] Jones died on 6 September 2020 at age 67 from health complications resulting from a severe ear infection.[12][2] Jones was a certified barista.[13]

Honours and awards
• 1990 – awarded the Fields Medal[2]
• 1990 – elected Fellow of the Royal Society[14][15]
• 1991 – awarded the Rutherford Medal by the Royal Society of New Zealand[3][10]
• 1991 – awarded the degree of Doctor of Science by the University of Auckland[16]
• 1992 – elected to the Australian Academy of Science as a Corresponding Fellow[17]
• 1992 – awarded a Miller Professorship at the University of California Berkeley[18]
• 2002 – appointed Distinguished Companion of the New Zealand Order of Merit (DCNZM) in the 2002 Queen's Birthday and Golden Jubilee Honours, for services to mathematics[19]
• 2009 – his DCNZM redesignated to a Knight Companion of the New Zealand Order of Merit in the 2009 Special Honours[20]
• 2012 – elected a Fellow of the American Mathematical Society[21]

Publications
• Jones, Vaughan F. R. (1980). "Actions of finite groups on the hyperfinite type II1 factor". Memoirs of the American Mathematical Society. doi:10.1090/memo/0237.
• Jones, Vaughan F. R. (1983). "Index for subfactors". Inventiones Mathematicae. 72 (1): 1–25. Bibcode:1983InMat..72....1J. doi:10.1007/BF01389127. MR 0696688. S2CID 121577421.
• Jones, Vaughan F. R. (1985). "A polynomial invariant for knots via von Neumann algebras". Bulletin of the American Mathematical Society. (N.S.). 12: 103–111. doi:10.1090/s0273-0979-1985-15304-2. MR 0766964.
• Jones, Vaughan F. R. (1987). "Hecke algebra representations of braid groups and link polynomials". Annals of Mathematics. (2). 126 (2): 335–388. doi:10.2307/1971403. JSTOR 1971403. MR 0908150.
• Goodman, Frederick M.; de la Harpe, Pierre; Jones, Vaughan F. R. (1989). Coxeter graphs and towers of algebras. Mathematical Sciences Research Institute Publications. Vol. 14. Springer-Verlag. doi:10.1007/978-1-4613-9641-3. ISBN 978-1-4613-9643-7. MR 0999799.[22]
• Jones, Vaughan F. R. (1991). Subfactors and knots. CBMS Regional Conference Series in Mathematics. Vol. 80. Providence, RI: American Mathematical Society. doi:10.1090/cbms/080. ISBN 9780821807293. MR 1134131.[23]
• Jones, Vaughan F. R.; Sunder, Viakalathur Shankar (1997). Introduction to subfactors. London Mathematical Society Lecture Note Series. Vol. 234. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511566219. ISBN 0-521-58420-5. MR 1473221.

See also
• Aharonov–Jones–Landau algorithm
• Planar algebra
• Subfactor

References
1. "Vaughan Jones (New Zealand mathematician)". Encyclopedia Britannica. Encyclopedia Britannica, Inc. 27 December 2019. Retrieved 8 September 2020. 2. "Celebrated NZ mathematician Sir Vaughan Jones dies".
The New Zealand Herald. 9 September 2020. Retrieved 9 September 2020. 3. "Obituary: Sir Vaughan Jones". Auckland Grammar School. 8 September 2020. Retrieved 8 September 2020. 4. Lambert, Max (1991). Who's Who in New Zealand, 1991 (12th ed.). Auckland: Octopus. p. 331. ISBN 9780790001302. Retrieved 29 July 2015. 5. "Vaughan Jones - University of St. Andrews". Retrieved 9 September 2020. 6. "Fields Medalist Vaughan Jones Joins the Department". Department of Mathematics. Vanderbilt University. 25 October 2011. Archived from the original on 9 September 2020. Retrieved 8 September 2020. 7. Personal web page at Vanderbilt University 8. Personal web page at Berkeley 9. Personal web page at Auckland 10. "About the Jones Medal". Royal Society Te Apārangi. Archived from the original on 9 September 2020. Retrieved 8 September 2020. 11. Salisbury, David (3 October 2011). "Vaughan Jones – Fields medalist brings informal style to Vanderbilt". Vanderbilt University. Archived from the original on 14 July 2020. Retrieved 8 September 2020. 12. Release of Vanderbilt University, 8 September 2020. 13. Vaughan Jones: “God May or May Not Play Dice but She Sure Loves a von Neumann Algebra”, retrieved 14 March 2023 14. "Fellows". Royal Society. Retrieved 5 November 2010. 15. Evans, David E. (2022). "Sir Vaughan Jones. 31 December 1952—6 September 2020". Biographical Memoirs of Fellows of the Royal Society. 73: 333–356. doi:10.1098/rsbm.2021.0051. S2CID 249648564. 16. Department of Statistics (1992). The New Zealand Official Year-book. Vol. 95. Government Printer (New Zealand). 17. "Faculty Awards". Department of Mathematics. Vanderbilt University. Retrieved 8 September 2020. 18. "Celebrating Science – Miller Reminiscences". Miller Institute. University of California, Berkeley. Retrieved 8 September 2020. 19. "Queen's Birthday and Golden Jubilee honours list 2002". Department of the Prime Minister and Cabinet. 3 June 2002. Retrieved 25 June 2020. 20. "Special honours list 1 August 2009". Department of the Prime Minister and Cabinet. 5 April 2011. Retrieved 25 June 2020. 21. List of Fellows of the American Mathematical Society, retrieved 26 January 2013. 22. Birman, Joan S. (1991). "Review: Coxeter graphs and towers of algebras, by F. M. Goodman, P. de la Harpe, and V. F. R. Jones". Bulletin of the American Mathematical Society. (N.S.). 25 (1): 195–199. doi:10.1090/s0273-0979-1991-16063-5. 23. Kauffman, Louis H. (1994). "Review: Subfactors and knots, by V. F. R. Jones". Bulletin of the American Mathematical Society. (N.S.). 31 (1): 147–154. doi:10.1090/s0273-0979-1994-00509-9. External links Wikimedia Commons has media related to Vaughan Jones. • O'Connor, John J.; Robertson, Edmund F., "Vaughan Jones", MacTutor History of Mathematics Archive, University of St Andrews • Vaughan Jones at the Mathematics Genealogy Project • Jones' home page • Career profile page at the University of Auckland • Joan S. Birman: The Work of Vaughan F. R. 
Vaught conjecture

The Vaught conjecture is a conjecture in the mathematical field of model theory, originally proposed by Robert Lawson Vaught in 1961. It states that the number of countable models of a first-order complete theory in a countable language is finite, $\aleph _{0}$, or $2^{\aleph _{0}}$. Morley showed that the number of countable models is finite, $\aleph _{0}$, $\aleph _{1}$, or $2^{\aleph _{0}}$, which settles the conjecture except for the case of $\aleph _{1}$ models when the continuum hypothesis fails. For this remaining case, Robin Knight (2002, 2007) has announced a counterexample to the Vaught conjecture and the topological Vaught conjecture. As of 2021, the counterexample has not been verified.

Statement of the conjecture

Let $T$ be a first-order, countable, complete theory with infinite models. Let $I(T,\alpha )$ denote the number of models of $T$ of cardinality $\alpha $ up to isomorphism; this function is the spectrum of the theory $T$. Morley proved that if $I(T,\aleph _{0})$ is infinite then it must be $\aleph _{0}$, $\aleph _{1}$, or the cardinality of the continuum. The Vaught conjecture is the statement that it is not possible for $\aleph _{0}<I(T,\aleph _{0})<2^{\aleph _{0}}$. The conjecture is a trivial consequence of the continuum hypothesis, so this axiom is often excluded in work on the conjecture. Alternatively, there is a sharper form of the conjecture stating that any countable complete theory T with uncountably many countable models has a perfect set of uncountable models (as pointed out by John Steel, in "On Vaught's conjecture", Cabal Seminar 76–77 (Proc. Caltech-UCLA Logic Sem., 1976–77), pp. 193–208, Lecture Notes in Math., 689, Springer, Berlin, 1978, this form of the Vaught conjecture is equiprovable with the original).

Original formulation

The original formulation by Vaught was not stated as a conjecture, but as a problem: Can it be proved, without the use of the continuum hypothesis, that there exists a complete theory having exactly $\aleph _{1}$ non-isomorphic denumerable models? By Morley's result mentioned at the beginning, a positive solution to the conjecture essentially corresponds to a negative answer to Vaught's problem as originally stated.

Vaught's theorem

Vaught proved that the number of countable models of a complete theory cannot be 2. It can be any finite number other than 2, for example:

• Any complete theory with a finite model has no countable models.
• The theories with just one countable model are the ω-categorical theories. There are many examples of these, such as the theory of an infinite set, or the theory of a dense unbounded total order.
• Ehrenfeucht gave the following example of a theory with 3 countable models: the language has an order relation ≤ and countably many constants c0, c1, ... with axioms stating that ≤ is a dense unbounded total order and that c0 < c1 < c2 < ... The three models differ according to whether this sequence is unbounded, converges, or is bounded but does not converge.
• Ehrenfeucht's example can be modified to give a theory with any finite number n ≥ 3 of models by adding n − 2 unary relations Pi to the language, with axioms stating that for every x exactly one of the Pi is true, the values of y for which Pi(y) is true are dense, and P1 is true for all ci. Then the models in which the sequence of elements ci converges to a limit c split into n − 2 cases depending on for which i the relation Pi(c) is true.

The idea of the proof of Vaught's theorem is as follows.
If there are at most countably many countable models, then there is a smallest one, the atomic model, and a largest one, the saturated model, and these are different if there is more than one model. If they are different, the saturated model must realize some n-type omitted by the atomic model. Then one can show that an atomic model of the theory of structures realizing this n-type (in a language expanded by finitely many constants) is a third model, not isomorphic to either the atomic or the saturated model. In the example above with 3 models, the atomic model is the one where the sequence is unbounded, the saturated model is the one where the sequence converges, and an example of a type not realized by the atomic model is an element greater than all elements of the sequence.

Topological Vaught conjecture

The topological Vaught conjecture is the statement that whenever a Polish group acts continuously on a Polish space, there are either countably many orbits or continuum many orbits. The topological Vaught conjecture is more general than the original Vaught conjecture: Given a countable language, we can form the space of all structures on the natural numbers for that language. If we equip this with the topology generated by first-order formulas, then it is known from A. Grzegorczyk, A. Mostowski, C. Ryll-Nardzewski, "Definability of sets of models of axiomatic theories" (Bulletin of the Polish Academy of Sciences (series Mathematics, Astronomy, Physics), vol. 9 (1961), pp. 163–167) that the resulting space is Polish. There is a continuous action of the infinite symmetric group (the collection of all permutations of the natural numbers, with the topology of pointwise convergence) that gives rise to the equivalence relation of isomorphism. Given a complete first-order theory T, the set of structures satisfying T is a minimal, closed invariant set, and hence Polish in its own right.

See also

• Spectrum of a theory
• Morley's categoricity theorem

References

• Knight, R. W. (2002), The Vaught Conjecture: A Counterexample, manuscript
• Knight, R. W. (2007), "Categories of topological spaces and scattered theories", Notre Dame Journal of Formal Logic, 48 (1): 53–77, doi:10.1305/ndjfl/1172787545, ISSN 0029-4527, MR 2289897
• Vaught, R. (1961), "Denumerable models of complete theories", Infinitistic Methods (Proc. Symp. Foundations Math., Warsaw, 1959), Warsaw/Pergamon Press, pp. 303–321
• Harrington, Leo; Makkai, Michael; Shelah, Saharon (1984), "A proof of Vaught's conjecture for ω-stable theories", Israel Journal of Mathematics, 49: 259–280, doi:10.1007/BF02760651
• Marker, David (2002), Model theory: An introduction, Graduate Texts in Mathematics, vol. 217, New York, NY: Springer-Verlag, ISBN 0-387-98760-6, Zbl 1003.03034
Refinement monoid

In mathematics, a refinement monoid is a commutative monoid M such that whenever a0 + a1 = b0 + b1 for elements a0, a1, b0, b1 of M, there are elements c00, c01, c10, c11 of M such that a0 = c00 + c01, a1 = c10 + c11, b0 = c00 + c10, and b1 = c01 + c11.

A commutative monoid M is said to be conical if x + y = 0 implies that x = y = 0, for any elements x, y of M.

Basic examples

A join-semilattice with zero is a refinement monoid if and only if it is distributive.

Any abelian group is a refinement monoid.

The positive cone G+ of a partially ordered abelian group G is a refinement monoid if and only if G is an interpolation group, the latter meaning that for any elements a0, a1, b0, b1 of G such that ai ≤ bj for all i, j < 2, there exists an element x of G such that ai ≤ x ≤ bj for all i, j < 2. This holds, for example, in case G is lattice-ordered.

The isomorphism type of a Boolean algebra B is the class of all Boolean algebras isomorphic to B. (If we want this to be a set, restrict to Boolean algebras of set-theoretical rank below that of B.) The class of isomorphism types of Boolean algebras, endowed with the addition defined by $[X]+[Y]=[X\times Y]$ (for any Boolean algebras X and Y, where $[X]$ denotes the isomorphism type of X), is a conical refinement monoid.

Vaught measures on Boolean algebras

For a Boolean algebra A and a commutative monoid M, a map μ : A → M is a measure if μ(a) = 0 if and only if a = 0, and μ(a ∨ b) = μ(a) + μ(b) whenever a and b are disjoint (that is, a ∧ b = 0), for any a, b in A. We say in addition that μ is a Vaught measure (after Robert Lawson Vaught), or V-measure, if for all c in A and all x, y in M such that μ(c) = x + y, there are disjoint a, b in A such that c = a ∨ b, μ(a) = x, and μ(b) = y.

An element e in a commutative monoid M is measurable (with respect to M) if there are a Boolean algebra A and a V-measure μ : A → M such that μ(1) = e; in that case, we say that μ measures e. We say that M is measurable if every element of M is measurable (with respect to M). Of course, every measurable monoid is a conical refinement monoid.

Hans Dobbertin proved in 1983 that any conical refinement monoid with at most ℵ1 elements is measurable.[1] He also proved that any element in an at most countable conical refinement monoid is measured by a unique (up to isomorphism) V-measure on a unique at most countable Boolean algebra. He raised there the problem whether any conical refinement monoid is measurable. This was answered in the negative by Friedrich Wehrung in 1998.[2] The counterexamples can have any cardinality greater than or equal to ℵ2.

Nonstable K-theory of von Neumann regular rings

For a ring (with unit) R, denote by FP(R) the class of finitely generated projective right R-modules. Equivalently, the objects of FP(R) are the direct summands of the modules Rn, with n a positive integer, viewed as right modules over R. Denote by $[X]$ the isomorphism type of an object X in FP(R). Then the set V(R) of all isomorphism types of members of FP(R), endowed with the addition defined by $[X]+[Y]=[X\oplus Y]$, is a conical commutative monoid. In addition, if R is von Neumann regular, then V(R) is a refinement monoid. It has the order-unit $[R]$. We say that V(R) encodes the nonstable K-theory of R. For example, if R is a division ring, then the members of FP(R) are exactly the finite-dimensional right vector spaces over R, and two vector spaces are isomorphic if and only if they have the same dimension.
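The prototypical example of a refinement monoid is (ℕ, +), the natural numbers under addition, which, as the next sentence makes precise, is exactly V(R) for a division ring R. The following minimal Python sketch, an illustration written for concreteness rather than taken from the references, produces an explicit refinement matrix in that monoid:

```python
def refine(a0, a1, b0, b1):
    """Refinement matrix in the monoid (N, +): given a0 + a1 == b0 + b1,
    return ((c00, c01), (c10, c11)) with row sums a0, a1 and column
    sums b0, b1, witnessing the refinement property."""
    assert a0 + a1 == b0 + b1, "refinement needs equal sums"
    c00 = min(a0, b0)    # send as much of a0 into b0 as possible
    c01 = a0 - c00       # the rest of a0 contributes to b1
    c10 = b0 - c00       # the rest of b0 must come from a1
    c11 = a1 - c10       # equals b1 - c01
    return ((c00, c01), (c10, c11))

print(refine(5, 3, 6, 2))    # ((5, 0), (1, 2))
```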
Hence V(R) is isomorphic to the monoid $\mathbb {Z} ^{+}=\{0,1,2,\dots \}$ of all natural numbers, endowed with its usual addition.

A slightly more complicated example can be obtained as follows. A matricial algebra over a field F is a finite product of rings of the form $M_{n}(F)$, the ring of all square matrices with n rows and entries in F, for variable positive integers n. A direct limit of matricial algebras over F is a locally matricial algebra over F. Every locally matricial algebra is von Neumann regular. For any locally matricial algebra R, V(R) is the positive cone of a so-called dimension group. By definition, a dimension group is a partially ordered abelian group G whose underlying order is directed, whose positive cone is a refinement monoid, and which is unperforated, the latter meaning that mx ≥ 0 implies that x ≥ 0, for any element x of G and any positive integer m. Any simplicial group, that is, a partially ordered abelian group of the form $\mathbb {Z} ^{n}$, is a dimension group. Effros, Handelman, and Shen proved in 1980 that dimension groups are exactly the direct limits of simplicial groups, where the transition maps are positive homomorphisms.[3] This result had already been proved in 1976, in a slightly different form, by P. A. Grillet.[4] Elliott proved in 1976 that the positive cone of any countable direct limit of simplicial groups is isomorphic to V(R), for some locally matricial ring R.[5] Finally, Goodearl and Handelman proved in 1986 that the positive cone of any dimension group with at most ℵ1 elements is isomorphic to V(R), for some locally matricial ring R (over any given field).[6]

Wehrung proved in 1998 that there are dimension groups with order-unit whose positive cone cannot be represented as V(R), for a von Neumann regular ring R.[2] The given examples can have any cardinality greater than or equal to ℵ2. Whether any conical refinement monoid with at most ℵ1 (or even ℵ0) elements can be represented as V(R) for R von Neumann regular is an open problem.

References

1. Dobbertin, Hans (1983), "Refinement monoids, Vaught monoids, and Boolean algebras", Mathematische Annalen, 265 (4): 473–487, doi:10.1007/BF01455948
2. Wehrung, Friedrich (1998), "Non-measurability properties of interpolation vector spaces", Israel Journal of Mathematics, 103: 177–206, doi:10.1007/BF02762273
3. Effros, Edward G.; Handelman, David E.; Shen, Chao-Liang (1980), "Dimension groups and their affine representations", American Journal of Mathematics, 102 (2): 385–407, doi:10.2307/2374244
4. Grillet, Pierre Antoine (1976), "Directed colimits of free commutative semigroups", Journal of Pure and Applied Algebra, 9 (1): 73–87, doi:10.1016/0022-4049(76)90007-4
5. Elliott, George A. (1976), "On the classification of inductive limits of sequences of semisimple finite-dimensional algebras", Journal of Algebra, 38 (1): 29–44, doi:10.1016/0021-8693(76)90242-8
6. Goodearl, K. R.; Handelman, D. E. (June 1986), "Tensor products of dimension groups and $K_{0}$ of unit-regular rings", Canadian Journal of Mathematics, 38 (3): 633–658, doi:10.4153/CJM-1986-032-0

Further reading

• Dobbertin, Hans (1986), "Vaught measures and their applications in lattice theory", Journal of Pure and Applied Algebra, 43 (1): 27–51, doi:10.1016/0022-4049(86)90003-4
• Goodearl, K. R. (1995), "von Neumann regular rings and direct sum decomposition problems", Abelian groups and modules (Padova, 1994), Mathematics and its Applications, vol. 343, Springer, Dordrecht, pp. 249–255, doi:10.1007/978-94-011-0443-2_20
• Goodearl, K. R. (1986), Partially Ordered Abelian Groups with Interpolation, Mathematical Surveys and Monographs, vol. 20, American Mathematical Society, Providence, RI, ISBN 0-8218-1520-2
• Goodearl, K. R. (1991), Von Neumann Regular Rings (Second ed.), Robert E. Krieger Publishing Co., Inc., Malabar, FL, ISBN 0-89464-632-X
• Tarski, Alfred (1949), Cardinal Algebras. With an Appendix: Cardinal Products of Isomorphism Types, by Bjarni Jónsson and Alfred Tarski, Oxford University Press, New York
Small Veblen ordinal

In mathematics, the small Veblen ordinal is a certain large countable ordinal, named after Oswald Veblen. It is occasionally called the Ackermann ordinal, though the Ackermann ordinal described by Ackermann (1951) is somewhat smaller than the small Veblen ordinal.

There is no standard notation for ordinals beyond the Feferman–Schütte ordinal $\Gamma _{0}$. Most systems of notation use symbols such as $\psi (\alpha )$, $\theta (\alpha )$, $\psi _{\alpha }(\beta )$, some of which are modifications of the Veblen functions to produce countable ordinals even for uncountable arguments, and some of which are "collapsing functions".

The small Veblen ordinal $\theta _{\Omega ^{\omega }}(0)$ or $\psi (\Omega ^{\Omega ^{\omega }})$ is the limit of ordinals that can be described using a version of the Veblen functions with finitely many arguments. It is the ordinal that measures the strength of Kruskal's theorem. It is also the ordinal type of a certain ordering of rooted trees (Jervell 2005).

References

• Ackermann, Wilhelm (1951), "Konstruktiver Aufbau eines Abschnitts der zweiten Cantorschen Zahlenklasse", Math. Z., 53 (5): 403–413, doi:10.1007/BF01175640, MR 0039669, S2CID 119687180
• Jervell, Herman Ruge (2005), "Finite Trees as Ordinals" (PDF), New Computational Paradigms, Lecture Notes in Computer Science, vol. 3526, Berlin / Heidelberg: Springer, pp. 211–220, doi:10.1007/11494645_26, ISBN 978-3-540-26179-7
• Rathjen, Michael; Weiermann, Andreas (1993), "Proof-theoretic investigations on Kruskal's theorem", Ann. Pure Appl. Logic, 60 (1): 49–88, doi:10.1016/0168-0072(93)90192-G, MR 1212407
• Veblen, Oswald (1908), "Continuous Increasing Functions of Finite and Transfinite Ordinals", Transactions of the American Mathematical Society, 9 (3): 280–292, doi:10.2307/1988605, JSTOR 1988605
• Weaver, Nik (2005), "Predicativity beyond Gamma_0", arXiv:math/0509244
Veblen's theorem

In mathematics, Veblen's theorem, introduced by Oswald Veblen (1912), states that the set of edges of a finite graph can be written as a union of disjoint simple cycles if and only if every vertex has even degree. Thus, it is closely related to the theorem of Euler (1736) that a finite graph has an Euler tour (a single non-simple cycle that covers the edges of the graph) if and only if it is connected and every vertex has even degree. Indeed, a representation of a graph as a union of simple cycles may be obtained from an Euler tour by repeatedly splitting the tour into smaller cycles whenever there is a repeated vertex. However, Veblen's theorem applies also to disconnected graphs, and can be generalized to infinite graphs in which every vertex has finite degree (Sabidussi 1964).

If a countably infinite graph G has no vertices of odd degree, then it may be written as a union of disjoint (finite) simple cycles if and only if every finite subgraph of G can be extended (by including more edges and vertices from G) to a finite Eulerian graph. In particular, every countably infinite graph with only one end and with no vertices of odd degree can be written as a union of disjoint cycles (Sabidussi 1964).

See also

• Cycle basis
• Cycle double cover conjecture
• Eulerian matroid

References

• Euler, L. (1736), "Solutio problematis ad geometriam situs pertinentis" (PDF), Commentarii Academiae Scientiarum Imperialis Petropolitanae, 8: 128–140. Reprinted and translated in Biggs, N. L.; Lloyd, E. K.; Wilson, R. J. (1976), Graph Theory 1736–1936, Oxford University Press.
• Sabidussi, Gert (1964), "Infinite Euler graphs", Canadian Journal of Mathematics, 16: 821–838, doi:10.4153/CJM-1964-078-x, MR 0169236.
• Veblen, Oswald (1912), "An Application of Modular Equations in Analysis Situs", Annals of Mathematics, Second Series, 14 (1): 86–94, doi:10.2307/1967604, JSTOR 1967604
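The splitting step in the argument above (walk along unused edges until some vertex repeats, then cut out the simple cycle between the two visits of that vertex) can be carried out greedily. The following Python sketch is an illustration written against that description, not code from the cited sources; it assumes a finite undirected graph without self-loops, given as a list of edges:

```python
from collections import defaultdict

def cycle_decomposition(edges):
    """Split the edges of a graph in which every vertex has even degree
    into simple cycles: walk along unused edges until a vertex repeats,
    cut out the cycle found, and release the edges on the stem."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    used = [False] * len(edges)
    cycles = []
    for start in list(adj):
        while any(not used[i] for _, i in adj[start]):
            verts, eids, pos, v = [start], [], {start: 0}, start
            while True:
                w, i = next((w, i) for w, i in adj[v] if not used[i])
                used[i] = True
                eids.append(i)
                if w in pos:                 # first repeated vertex:
                    k = pos[w]               # verts[k:] + [w] is a cycle
                    for j in eids[:k]:       # free the stem leading to it
                        used[j] = False
                    cycles.append(verts[k:] + [w])
                    break
                pos[w] = len(verts)
                verts.append(w)
                v = w
    return cycles

# Two triangles sharing the vertex 0 (every degree is even):
print(cycle_decomposition([(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]))
```

Since a complete walk permanently removes exactly the edges of one closed cycle, the subgraph of unused edges keeps all degrees even, so the walk can never get stuck; this mirrors the parity argument behind both Euler's and Veblen's theorems.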
Veblen–Young theorem

In mathematics, the Veblen–Young theorem, proved by Oswald Veblen and John Wesley Young (1908, 1910, 1917), states that a projective space of dimension at least 3 can be constructed as the projective space associated to a vector space over a division ring.

Non-Desarguesian planes give examples of 2-dimensional projective spaces that do not arise from vector spaces over division rings, showing that the restriction to dimension at least 3 is necessary.

Jacques Tits generalized the Veblen–Young theorem to Tits buildings, showing that those of rank at least 3 arise from algebraic groups.

John von Neumann (1998) generalized the Veblen–Young theorem to continuous geometry, showing that a complemented modular lattice of order at least 4 is isomorphic to the principal right ideals of a von Neumann regular ring.

Statement

A projective space S can be defined abstractly as a set P (the set of points), together with a set L of subsets of P (the set of lines), satisfying these axioms:

• Any two distinct points p and q are in exactly one line.
• Veblen's axiom: if a, b, c, d are distinct points and the lines through ab and cd meet, then so do the lines through ac and bd.
• Any line has at least 3 points on it.

The Veblen–Young theorem states that if the dimension of a projective space is at least 3 (meaning that there are two non-intersecting lines) then the projective space is isomorphic to the projective space of lines in a vector space over some division ring K.

References

• Cameron, Peter J. (1992), Projective and polar spaces, QMW Maths Notes, vol. 13, London: Queen Mary and Westfield College School of Mathematical Sciences, ISBN 978-0-902480-12-4, MR 1153019
• Veblen, Oswald; Young, John Wesley (1908), "A Set of Assumptions for Projective Geometry", American Journal of Mathematics, 30 (4): 347–380, doi:10.2307/2369956, ISSN 0002-9327, JSTOR 2369956, MR 1506049
• Veblen, Oswald; Young, John Wesley (1910), Projective geometry Volume I, Ginn and Co., Boston, ISBN 978-1-4181-8285-4, MR 0179666
• Veblen, Oswald; Young, John Wesley (1917), Projective geometry Volume II, Ginn and Co., Boston, ISBN 978-1-60386-062-8, MR 0179667
• von Neumann, John (1998) [1960], Continuous geometry, Princeton Landmarks in Mathematics, Princeton University Press, ISBN 978-0-691-05893-1, MR 0120174
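The axioms can be checked mechanically in the smallest three-dimensional example, PG(3,2), built from the vector space $\mathbb {F} _{2}^{4}$: points are the 1-dimensional subspaces (over $\mathbb {F} _{2}$, each contains a single nonzero vector) and lines are the 2-dimensional subspaces. A minimal Python sketch, in which the random spot-check of Veblen's axiom is illustrative rather than a proof:

```python
from itertools import product, combinations
import random

# Points of PG(3,2): the nonzero vectors of F_2^4.
pts = [v for v in product((0, 1), repeat=4) if any(v)]

def add(u, v):                    # addition in F_2^4
    return tuple((a + b) % 2 for a, b in zip(u, v))

def line(p, q):                   # the line through distinct points p, q
    return frozenset((p, q, add(p, q)))

lines = {line(p, q) for p, q in combinations(pts, 2)}

# Any two distinct points are in exactly one line.
assert all(sum(p in L and q in L for L in lines) == 1
           for p, q in combinations(pts, 2))

# Any line has at least 3 points (here exactly 3).
assert all(len(L) == 3 for L in lines)

# Veblen's axiom, spot-checked on random quadruples of distinct points:
# if the lines ab and cd meet, then the lines ac and bd meet.
for _ in range(1000):
    a, b, c, d = random.sample(pts, 4)
    if line(a, b) & line(c, d):
        assert line(a, c) & line(b, d)

print(len(pts), "points,", len(lines), "lines")   # 15 points, 35 lines
```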
Veblen function

In mathematics, the Veblen functions are a hierarchy of normal functions (continuous strictly increasing functions from ordinals to ordinals), introduced by Oswald Veblen in Veblen (1908). If φ0 is any normal function, then for any non-zero ordinal α, φα is the function enumerating the common fixed points of φβ for β < α. These functions are all normal.

The Veblen hierarchy

In the special case when φ0(α) = ωα, this family of functions is known as the Veblen hierarchy. The function φ1 is the same as the ε function: φ1(α) = εα.[1] If $\alpha <\beta \,,$ then $\varphi _{\alpha }(\varphi _{\beta }(\gamma ))=\varphi _{\beta }(\gamma )$.[2] From this and the fact that φβ is strictly increasing we get the ordering: $\varphi _{\alpha }(\beta )<\varphi _{\gamma }(\delta )$ if and only if either ($\alpha =\gamma $ and $\beta <\delta $) or ($\alpha <\gamma $ and $\beta <\varphi _{\gamma }(\delta )$) or ($\alpha >\gamma $ and $\varphi _{\alpha }(\beta )<\delta $).[2]

Fundamental sequences for the Veblen hierarchy

The fundamental sequence for an ordinal with cofinality ω is a distinguished strictly increasing ω-sequence which has the ordinal as its limit. If one has fundamental sequences for α and all smaller limit ordinals, then one can create an explicit constructive bijection between ω and α (i.e., one not using the axiom of choice). Here we will describe fundamental sequences for the Veblen hierarchy of ordinals. The image of n under the fundamental sequence for α will be indicated by α[n].

A variation of Cantor normal form used in connection with the Veblen hierarchy is the following: every nonzero ordinal number α can be uniquely written as $\alpha =\varphi _{\beta _{1}}(\gamma _{1})+\varphi _{\beta _{2}}(\gamma _{2})+\cdots +\varphi _{\beta _{k}}(\gamma _{k})$, where k > 0 is a natural number, each term after the first is less than or equal to the previous term, $\varphi _{\beta _{m}}(\gamma _{m})\geq \varphi _{\beta _{m+1}}(\gamma _{m+1})\,,$ and each $\gamma _{m}<\varphi _{\beta _{m}}(\gamma _{m})\,.$

If a fundamental sequence can be provided for the last term, then that term can be replaced by such a sequence to get $\alpha [n]=\varphi _{\beta _{1}}(\gamma _{1})+\cdots +\varphi _{\beta _{k-1}}(\gamma _{k-1})+(\varphi _{\beta _{k}}(\gamma _{k})[n])\,.$

For any β, if γ is a limit with $\gamma <\varphi _{\beta }(\gamma )\,,$ then let $\varphi _{\beta }(\gamma )[n]=\varphi _{\beta }(\gamma [n])\,.$

No such sequence can be provided for $\varphi _{0}(0)$ = ω0 = 1 because it does not have cofinality ω.

For $\varphi _{0}(\gamma +1)=\omega ^{\gamma +1}=\omega ^{\gamma }\cdot \omega \,,$ we choose $\varphi _{0}(\gamma +1)[n]=\varphi _{0}(\gamma )\cdot n=\omega ^{\gamma }\cdot n\,.$

For $\varphi _{\beta +1}(0)\,,$ we use $\varphi _{\beta +1}(0)[0]=0$ and $\varphi _{\beta +1}(0)[n+1]=\varphi _{\beta }(\varphi _{\beta +1}(0)[n])\,,$ i.e. 0, $\varphi _{\beta }(0)$, $\varphi _{\beta }(\varphi _{\beta }(0))$, etc.

For $\varphi _{\beta +1}(\gamma +1)$, we use $\varphi _{\beta +1}(\gamma +1)[0]=\varphi _{\beta +1}(\gamma )+1$ and $\varphi _{\beta +1}(\gamma +1)[n+1]=\varphi _{\beta }(\varphi _{\beta +1}(\gamma +1)[n])\,.$

Now suppose that β is a limit:

If $\beta <\varphi _{\beta }(0)$, then let $\varphi _{\beta }(0)[n]=\varphi _{\beta [n]}(0)\,.$

For $\varphi _{\beta }(\gamma +1)$, use $\varphi _{\beta }(\gamma +1)[n]=\varphi _{\beta [n]}(\varphi _{\beta }(\gamma )+1)\,.$

Otherwise, the ordinal cannot be described in terms of smaller ordinals using $\varphi $, and this scheme does not apply to it.
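The rules above translate directly into a computation on the ordinals expressible in the variation of Cantor normal form just described (the ordinals below the Feferman–Schütte ordinal Γ0). The following Python sketch is an illustration written against the rules as stated: an ordinal is encoded as a tuple of terms, each term a pair (β, γ) standing for φβ(γ), and the input to fs is assumed to be a limit ordinal in normal form.

```python
ZERO = ()                       # the ordinal 0
ONE = ((ZERO, ZERO),)           # phi_0(0) = omega^0 = 1

def phi(b, g):                  # the single-term ordinal phi_b(g)
    return ((b, g),)

def is_succ(a):                 # successor iff the last term is 1
    return a != ZERO and a[-1] == (ZERO, ZERO)

def fs(a, n):
    """The fundamental sequence value a[n] for a limit ordinal a,
    following the rules above; x[:-1] strips a trailing term, so for
    a successor ordinal g the slice g[:-1] is its predecessor."""
    if len(a) > 1:                        # a sum: recurse on the last term
        return a[:-1] + fs(a[-1:], n)
    b, g = a[0]
    if g != ZERO and not is_succ(g):      # gamma a limit
        return phi(b, fs(g, n))
    if b == ZERO:                         # phi_0(gamma+1) = omega^(gamma+1)
        return phi(ZERO, g[:-1]) * n      # omega^gamma . n
    if is_succ(b):                        # beta+1: iterate phi_beta n times
        x = ZERO if g == ZERO else phi(b, g[:-1]) + ONE
        for _ in range(n):
            x = phi(b[:-1], x)
        return x
    if g == ZERO:                         # beta a limit
        return phi(fs(b, n), ZERO)
    return phi(fs(b, n), phi(b, g[:-1]) + ONE)

eps0 = phi(ONE, ZERO)           # phi_1(0) = epsilon_0
print(fs(eps0, 3))              # encodes omega^omega: the sequence runs
                                # 0, 1, omega, omega^omega, ...
```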
The Γ function

The function Γ enumerates the ordinals α such that φα(0) = α. Γ0 is the Feferman–Schütte ordinal, i.e. it is the smallest α such that φα(0) = α.

For Γ0, a fundamental sequence could be chosen to be $\Gamma _{0}[0]=0$ and $\Gamma _{0}[n+1]=\varphi _{\Gamma _{0}[n]}(0)\,.$

For Γβ+1, let $\Gamma _{\beta +1}[0]=\Gamma _{\beta }+1$ and $\Gamma _{\beta +1}[n+1]=\varphi _{\Gamma _{\beta +1}[n]}(0)\,.$

For Γβ where $\beta <\Gamma _{\beta }$ is a limit, let $\Gamma _{\beta }[n]=\Gamma _{\beta [n]}\,.$

Generalizations

Finitely many variables

To build the Veblen function of a finite number of arguments (finitary Veblen function), let the binary function $\varphi (\alpha ,\gamma )$ be $\varphi _{\alpha }(\gamma )$ as defined above. Let $z$ be an empty string or a string consisting of one or more comma-separated zeros $0,0,...,0$ and $s$ be an empty string or a string consisting of one or more comma-separated ordinals $\alpha _{1},\alpha _{2},...,\alpha _{n}$ with $\alpha _{1}>0$. The binary function $\varphi (\beta ,\gamma )$ can be written as $\varphi (s,\beta ,z,\gamma )$ where both $s$ and $z$ are empty strings. The finitary Veblen functions are defined as follows:

• $\varphi (\gamma )=\omega ^{\gamma }$
• $\varphi (z,s,\gamma )=\varphi (s,\gamma )$
• if $\beta >0$, then $\varphi (s,\beta ,z,\gamma )$ denotes the $(1+\gamma )$-th common fixed point of the functions $\xi \mapsto \varphi (s,\delta ,\xi ,z)$ for each $\delta <\beta $

For example, $\varphi (1,0,\gamma )$ is the $(1+\gamma )$-th fixed point of the functions $\xi \mapsto \varphi (\xi ,0)$, namely $\Gamma _{\gamma }$; then $\varphi (1,1,\gamma )$ enumerates the fixed points of that function, i.e., of the $\xi \mapsto \Gamma _{\xi }$ function; and $\varphi (2,0,\gamma )$ enumerates the fixed points of all the $\xi \mapsto \varphi (1,\xi ,0)$. Each instance of the generalized Veblen functions is continuous in the last nonzero variable (i.e., if one variable is made to vary and all later variables are kept constantly equal to zero).

The ordinal $\varphi (1,0,0,0)$ is sometimes known as the Ackermann ordinal. The limit of the $\varphi (1,0,...,0)$, where the number of zeroes ranges over ω, is sometimes known as the "small" Veblen ordinal.
Every non-zero ordinal $\alpha $ less than the small Veblen ordinal (SVO) can be uniquely written in normal form for the finitary Veblen function: $\alpha =\varphi (s_{1})+\varphi (s_{2})+\cdots +\varphi (s_{k})$ where

• $k$ is a positive integer
• $\varphi (s_{1})\geq \varphi (s_{2})\geq \cdots \geq \varphi (s_{k})$
• $s_{m}$ is a string consisting of one or more comma-separated ordinals $\alpha _{m,1},\alpha _{m,2},...,\alpha _{m,n_{m}}$ where $\alpha _{m,1}>0$ and each $\alpha _{m,i}<\varphi (s_{m})$

Fundamental sequences for limit ordinals of finitary Veblen function

For limit ordinals $\alpha <SVO$, written in normal form for the finitary Veblen function:

• $(\varphi (s_{1})+\varphi (s_{2})+\cdots +\varphi (s_{k}))[n]=\varphi (s_{1})+\varphi (s_{2})+\cdots +\varphi (s_{k})[n]$,
• $\varphi (\gamma )[n]=\left\{{\begin{array}{lcr}n\quad {\text{if}}\quad \gamma =1\\\varphi (\gamma -1)\cdot n\quad {\text{if}}\quad \gamma \quad {\text{is a successor ordinal}}\\\varphi (\gamma [n])\quad {\text{if}}\quad \gamma \quad {\text{is a limit ordinal}}\\\end{array}}\right.$,
• $\varphi (s,\beta ,z,\gamma )[0]=0$ and $\varphi (s,\beta ,z,\gamma )[n+1]=\varphi (s,\beta -1,\varphi (s,\beta ,z,\gamma )[n],z)$ if $\gamma =0$ and $\beta $ is a successor ordinal,
• $\varphi (s,\beta ,z,\gamma )[0]=\varphi (s,\beta ,z,\gamma -1)+1$ and $\varphi (s,\beta ,z,\gamma )[n+1]=\varphi (s,\beta -1,\varphi (s,\beta ,z,\gamma )[n],z)$ if $\gamma $ and $\beta $ are successor ordinals,
• $\varphi (s,\beta ,z,\gamma )[n]=\varphi (s,\beta ,z,\gamma [n])$ if $\gamma $ is a limit ordinal,
• $\varphi (s,\beta ,z,\gamma )[n]=\varphi (s,\beta [n],z,\gamma )$ if $\gamma =0$ and $\beta $ is a limit ordinal,
• $\varphi (s,\beta ,z,\gamma )[n]=\varphi (s,\beta [n],\varphi (s,\beta ,z,\gamma -1)+1,z)$ if $\gamma $ is a successor ordinal and $\beta $ is a limit ordinal.

Transfinitely many variables

More generally, Veblen showed that φ can be defined even for a transfinite sequence of ordinals αβ, provided that all but a finite number of them are zero. Notice that if such a sequence of ordinals is chosen from those less than an uncountable regular cardinal κ, then the sequence may be encoded as a single ordinal less than κκ (ordinal exponentiation). So one is defining a function φ from κκ into κ.

The definition can be given as follows: let α be a transfinite sequence of ordinals (i.e., an ordinal function with finite support) which ends in zero (i.e., such that α0 = 0), and let α[γ@0] denote the same function where the final 0 has been replaced by γ. Then γ ↦ φ(α[γ@0]) is defined as the function enumerating the common fixed points of all functions ξ ↦ φ(β) where β ranges over all sequences which are obtained by decreasing the smallest-indexed nonzero value of α and replacing some smaller-indexed value with the indeterminate ξ (i.e., β = α[ζ@ι0, ξ@ι], meaning that for the smallest index ι0 such that αι0 is nonzero, the latter has been replaced by some value ζ < αι0, and that for some smaller index ι < ι0, the value αι = 0 has been replaced with ξ).

For example, if α = (1@ω) denotes the transfinite sequence with value 1 at ω and 0 everywhere else, then φ(1@ω) is the smallest fixed point of all the functions ξ ↦ φ(ξ, 0, ..., 0) with finitely many final zeroes (it is also the limit of the φ(1, 0, ..., 0) with finitely many zeroes, the small Veblen ordinal).
The smallest ordinal α such that α is greater than φ applied to any function with support in α (i.e., which cannot be reached "from below" using the Veblen function of transfinitely many variables) is sometimes known as the "large" Veblen ordinal, or "great" Veblen number.[3]

Values

The function takes on several prominent values:

• $\varphi (\omega ,0)$, a bound on the order types of the recursive path orderings with finitely many function symbols.[4]
• The Feferman–Schütte ordinal $\Gamma _{0}$ is equal to $\varphi (1,0,0)$.[5]
• The small Veblen ordinal is equal to $\varphi {\begin{pmatrix}1\\\omega \end{pmatrix}}$.[6]

References

• Hilbert Levitz, Transfinite Ordinals and Their Notations: For The Uninitiated, expository article (8 pages, in PostScript)
• Pohlers, Wolfram (1989), Proof theory, Lecture Notes in Mathematics, vol. 1407, Berlin: Springer-Verlag, doi:10.1007/978-3-540-46825-7, ISBN 978-3-540-51842-6, MR 1026933
• Schütte, Kurt (1977), Proof theory, Grundlehren der Mathematischen Wissenschaften, vol. 225, Berlin-New York: Springer-Verlag, pp. xii+299, ISBN 978-3-540-07911-8, MR 0505313
• Takeuti, Gaisi (1987), Proof theory, Studies in Logic and the Foundations of Mathematics, vol. 81 (Second ed.), Amsterdam: North-Holland Publishing Co., ISBN 978-0-444-87943-1, MR 0882549
• Smorynski, C. (1982), "The varieties of arboreal experience", Math. Intelligencer, 4 (4): 182–189, doi:10.1007/BF03023553; contains an informal description of the Veblen hierarchy.
• Veblen, Oswald (1908), "Continuous Increasing Functions of Finite and Transfinite Ordinals", Transactions of the American Mathematical Society, 9 (3): 280–292, doi:10.2307/1988605, JSTOR 1988605
• Miller, Larry W. (1976), "Normal Functions and Constructive Ordinal Notations", The Journal of Symbolic Logic, 41 (2): 439–459, doi:10.2307/2272243, JSTOR 2272243

Citations

1. Stephen G. Simpson, Subsystems of Second-order Arithmetic (2009, p. 387)
2. M. Rathjen, Ordinal notations based on a weakly Mahlo cardinal (1990, p. 251). Accessed 16 August 2022.
3. M. Rathjen, "The Art of Ordinal Analysis" (2006), appearing in Proceedings of the International Congress of Mathematicians 2006.
4. N. Dershowitz, M. Okada, Proof Theoretic Techniques for Term Rewriting Theory (1988), p. 105.
5. D. Madore, "A Zoo of Ordinals" (2017). Accessed 2 November 2022.
6. Ranzi, Florian; Strahm, Thomas (2019). "A flexible type system for the small Veblen ordinal" (PDF). Archive for Mathematical Logic. 58 (5–6): 711–751. doi:10.1007/s00153-019-00658-x. S2CID 253675808.
Vecten points

In the geometry of triangles, the Vecten points are two triangle centers associated with any triangle. They may be constructed by erecting three squares on the sides of the triangle, connecting each square centre by a line to the opposite vertex of the triangle, and finding the point where these three lines meet. The outer and inner Vecten points differ according to whether the squares are extended outward from the triangle sides or inward.

The Vecten points are named after an early 19th-century French mathematician named Vecten, who taught mathematics with Gergonne in Nîmes and published a study of the figure of three squares on the sides of a triangle in 1817.[1]

Outer Vecten point

Let ABC be any given plane triangle. On the sides BC, CA, AB of the triangle, construct three outwardly drawn squares with centres $O_{a},O_{b},O_{c}$ respectively. Then the lines $AO_{a},BO_{b}$ and $CO_{c}$ are concurrent. The point of concurrence is the outer Vecten point of the triangle ABC. In Clark Kimberling's Encyclopedia of Triangle Centers, the outer Vecten point is denoted by X(485).[2]

Inner Vecten point

Let ABC be any given plane triangle. On the sides BC, CA, AB of the triangle, construct three inwardly drawn squares with centres $I_{a},I_{b},I_{c}$ respectively. Then the lines $AI_{a},BI_{b}$ and $CI_{c}$ are concurrent. The point of concurrence is the inner Vecten point of the triangle ABC. In Clark Kimberling's Encyclopedia of Triangle Centers, the inner Vecten point is denoted by X(486).[2]

The line $X(485)X(486)$ meets the Euler line at the nine-point center of the triangle $ABC$. The Vecten points lie on the Kiepert hyperbola.

See also

• Napoleon points, a pair of triangle centers constructed in an analogous way using equilateral triangles instead of squares

References

1. Ayme, Jean-Louis, La Figure de Vecten (PDF), retrieved 2014-11-04.
2. Kimberling, Clark. "Encyclopedia of Triangle Centers".

External links

• Weisstein, Eric W. "Vecten Points". MathWorld.
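In Cartesian coordinates the construction is straightforward: the centre of each square is the midpoint of the side shifted by half the side vector rotated 90°, and the outer Vecten point is the common intersection of the three connecting lines. A minimal numerical sketch (assuming NumPy and a triangle whose vertices are listed counterclockwise):

```python
import numpy as np

def rot90(v):                       # rotate a 2D vector by +90 degrees
    return np.array([-v[1], v[0]])

def square_center(p, q, outward=True):
    """Centre of the square erected on the segment pq; for a triangle
    listed counterclockwise, 'outward' means away from the interior."""
    m, d = (p + q) / 2, rot90(q - p) / 2
    return m - d if outward else m + d

def intersect(p1, p2, p3, p4):      # intersection of lines p1p2 and p3p4
    d1, d2 = p2 - p1, p4 - p3
    t = np.linalg.solve(np.column_stack([d1, -d2]), p3 - p1)[0]
    return p1 + t * d1

def cross(u, v):                    # z-component of the 2D cross product
    return u[0] * v[1] - u[1] * v[0]

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
Oa = square_center(B, C)            # centre of the square on side BC
Ob = square_center(C, A)            # centre of the square on side CA
Oc = square_center(A, B)            # centre of the square on side AB
X = intersect(A, Oa, B, Ob)         # outer Vecten point X(485)
print(X, cross(Oc - C, X - C))      # second value ~ 0: C, Oc, X collinear
```

Passing outward=False to square_center yields the inner squares and, with the same intersection step, the inner Vecten point X(486).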
Vector-valued Hahn–Banach theorems

In mathematics, specifically in functional analysis and Hilbert space theory, vector-valued Hahn–Banach theorems are generalizations of the Hahn–Banach theorems from linear functionals (which are always valued in the real numbers $\mathbb {R} $ or the complex numbers $\mathbb {C} $) to linear operators valued in topological vector spaces (TVSs).

Definitions

Throughout, X and Y will be topological vector spaces (TVSs) over the field $\mathbb {K} $ and L(X; Y) will denote the vector space of all continuous linear maps from X to Y, where if X and Y are normed spaces then we endow L(X; Y) with its canonical operator norm.

Extensions

If M is a vector subspace of a TVS X, then Y has the extension property from M to X if every continuous linear map f : M → Y has a continuous linear extension to all of X. If X and Y are normed spaces, then we say that Y has the metric extension property from M to X if this continuous linear extension can be chosen to have norm equal to ‖f‖.

A TVS Y has the extension property from all subspaces of X (to X) if for every vector subspace M of X, Y has the extension property from M to X. If X and Y are normed spaces, then Y has the metric extension property from all subspaces of X (to X) if for every vector subspace M of X, Y has the metric extension property from M to X.

A TVS Y has the extension property[1] if for every locally convex space X and every vector subspace M of X, Y has the extension property from M to X. A Banach space Y has the metric extension property[1] if for every Banach space X and every vector subspace M of X, Y has the metric extension property from M to X.

1-extensions

If M is a vector subspace of a normed space X over the field $\mathbb {K} $, then a normed space Y has the immediate 1-extension property from M to X if for every x ∉ M, every continuous linear map f : M → Y has a continuous linear extension $F:M\oplus (\mathbb {K} x)\to Y$ such that ‖f‖ = ‖F‖. We say that Y has the immediate 1-extension property if Y has the immediate 1-extension property from M to X for every Banach space X and every vector subspace M of X.

Injective spaces

A locally convex topological vector space Y is injective[1] if for every locally convex space Z containing Y as a topological vector subspace, there exists a continuous projection from Z onto Y.

A Banach space Y is 1-injective[1] or a P1-space if for every Banach space Z containing Y as a normed vector subspace (i.e. the norm of Y is identical to the usual restriction to Y of Z's norm), there exists a continuous projection from Z onto Y having norm 1.
Properties

In order for a TVS Y to have the extension property, it must be complete (since it must be possible to extend the identity map $\mathbf {1} :Y\to Y$ from Y to the completion Z of Y; that is, to a map Z → Y).[1]

Existence

If f : M → Y is a continuous linear map from a vector subspace M of X into a complete Hausdorff space Y, then there always exists a unique continuous linear extension of f from M to the closure of M in X.[1][2] Consequently, it suffices to only consider maps from closed vector subspaces into complete Hausdorff spaces.[1]

Results

Any locally convex space having the extension property is injective.[1] If Y is an injective Banach space, then for every Banach space X, every continuous linear operator from a vector subspace of X into Y has a continuous linear extension to all of X.[1]

In 1953, Alexander Grothendieck showed that any Banach space with the extension property is either finite-dimensional or else not separable.[1]

Theorem[1] — Suppose that Y is a Banach space over the field $\mathbb {K} .$ Then the following are equivalent:

1. Y is 1-injective;
2. Y has the metric extension property;
3. Y has the immediate 1-extension property;
4. Y has the center-radius property;
5. Y has the weak intersection property;
6. Y is 1-complemented in any Banach space into which it is norm embedded;
7. whenever Y is norm-embedded into a Banach space $X$, the identity map $\mathbf {1} :Y\to Y$ can be extended to a continuous linear map of norm $1$ defined on $X$;
8. Y is linearly isometric to $C\left(T,\mathbb {K} ,\|\cdot \|_{\infty }\right)$ for some compact, extremally disconnected Hausdorff space T. (This space T is unique up to homeomorphism.)

If, in addition, Y is a vector space over the real numbers, then we may add to this list:

9. Y has the binary intersection property;
10. Y is linearly isometric to a complete Archimedean ordered vector lattice with order unit and endowed with the order unit norm.

Theorem[1] — Suppose that Y is a real Banach space with the metric extension property. Then the following are equivalent:

1. Y is reflexive;
2. Y is separable;
3. Y is finite-dimensional;
4. Y is linearly isometric to $C\left(T,\mathbb {K} ,\|\cdot \|_{\infty }\right)$ for some finite discrete space $T.$

Examples

Products of the underlying field

Let $\mathbb {K} $ be either $\mathbb {R} $ or $\mathbb {C} $, let $T$ be any set, and let $Y:=\mathbb {K} ^{T},$ the product of $\mathbb {K} $ taken $|T|$ times, or equivalently, the set of all $\mathbb {K} $-valued functions on T. Give $Y$ its usual product topology, which makes it into a Hausdorff locally convex TVS. Then $Y$ has the extension property.[1]

For any set $T,$ the space $\ell ^{\infty }(T)$ has both the extension property and the metric extension property.

See also

• Continuous linear extension – Mathematical method in functional analysis
• Continuous linear operator
• Hahn–Banach theorem – Theorem on extension of bounded linear functionals
• Hyperplane separation theorem – On the existence of hyperplanes separating disjoint convex sets

Citations

1. Narici & Beckenstein 2011, pp. 341–370.
2. Rudin 1991, p. 40. Stated for linear maps into F-spaces only; outlines a proof.

References

• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
• Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
Vector-valued differential form

In mathematics, a vector-valued differential form on a manifold M is a differential form on M with values in a vector space V. More generally, it is a differential form with values in some vector bundle E over M. Ordinary differential forms can be viewed as R-valued differential forms. An important case of vector-valued differential forms are Lie algebra-valued forms. (A connection form is an example of such a form.)

Definition

Let M be a smooth manifold and E → M be a smooth vector bundle over M. We denote the space of smooth sections of a bundle E by Γ(E). An E-valued differential form of degree p is a smooth section of the tensor product bundle of E with Λp(T ∗M), the p-th exterior power of the cotangent bundle of M. The space of such forms is denoted by $\Omega ^{p}(M,E)=\Gamma (E\otimes \Lambda ^{p}T^{*}M).$

Because Γ is a strong monoidal functor,[1] this can also be interpreted as $\Gamma (E\otimes \Lambda ^{p}T^{*}M)=\Gamma (E)\otimes _{\Omega ^{0}(M)}\Gamma (\Lambda ^{p}T^{*}M)=\Gamma (E)\otimes _{\Omega ^{0}(M)}\Omega ^{p}(M),$ where the latter two tensor products are the tensor product of modules over the ring Ω0(M) of smooth R-valued functions on M (see the seventh example here). By convention, an E-valued 0-form is just a section of the bundle E. That is, $\Omega ^{0}(M,E)=\Gamma (E).\,$

Equivalently, an E-valued differential form can be defined as a bundle morphism $TM\otimes \cdots \otimes TM\to E$ which is totally skew-symmetric.

Let V be a fixed vector space. A V-valued differential form of degree p is a differential form of degree p with values in the trivial bundle M × V. The space of such forms is denoted Ωp(M, V). When V = R one recovers the definition of an ordinary differential form. If V is finite-dimensional, then one can show that the natural homomorphism $\Omega ^{p}(M)\otimes _{\mathbb {R} }V\to \Omega ^{p}(M,V),$ where the first tensor product is of vector spaces over R, is an isomorphism.[2]

Operations on vector-valued forms

Pullback

One can define the pullback of vector-valued forms by smooth maps just as for ordinary forms. The pullback of an E-valued form on N by a smooth map φ : M → N is an (φ*E)-valued form on M, where φ*E is the pullback bundle of E by φ. The formula is given just as in the ordinary case. For any E-valued p-form ω on N the pullback φ*ω is given by $(\varphi ^{*}\omega )_{x}(v_{1},\cdots ,v_{p})=\omega _{\varphi (x)}(\mathrm {d} \varphi _{x}(v_{1}),\cdots ,\mathrm {d} \varphi _{x}(v_{p})).$

Wedge product

Just as for ordinary differential forms, one can define a wedge product of vector-valued forms. The wedge product of an E1-valued p-form with an E2-valued q-form is naturally an (E1⊗E2)-valued (p+q)-form: $\wedge :\Omega ^{p}(M,E_{1})\times \Omega ^{q}(M,E_{2})\to \Omega ^{p+q}(M,E_{1}\otimes E_{2}).$

The definition is just as for ordinary forms with the exception that real multiplication is replaced with the tensor product: $(\omega \wedge \eta )(v_{1},\cdots ,v_{p+q})={\frac {1}{p!q!}}\sum _{\sigma \in S_{p+q}}\operatorname {sgn}(\sigma )\omega (v_{\sigma (1)},\cdots ,v_{\sigma (p)})\otimes \eta (v_{\sigma (p+1)},\cdots ,v_{\sigma (p+q)}).$

In particular, the wedge product of an ordinary (R-valued) p-form with an E-valued q-form is naturally an E-valued (p+q)-form (since the tensor product of E with the trivial bundle M × R is naturally isomorphic to E).
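At a single point, the defining formula can be evaluated by brute force over permutations. The following Python sketch is an illustration of the formula above, with forms at a point represented as multilinear maps returning NumPy arrays and the tensor product realized by np.multiply.outer:

```python
import numpy as np
from itertools import permutations
from math import factorial

def parity(perm):
    """Sign of a permutation: each cycle of even length flips the sign."""
    s, seen = 1, set()
    for i in range(len(perm)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j, length = perm[j], length + 1
        if length % 2 == 0:
            s = -s
    return s

def wedge(omega, p, eta, q):
    """Pointwise wedge of a V1-valued p-form with a V2-valued q-form,
    each given as a map taking p (resp. q) vectors to a NumPy array;
    the result is (V1 tensor V2)-valued, with the 1/(p! q!) factor."""
    def form(*v):
        total = 0
        for s in permutations(range(p + q)):
            term = np.multiply.outer(
                omega(*(v[s[i]] for i in range(p))),
                eta(*(v[s[p + j]] for j in range(q))))
            total = total + parity(s) * term
        return total / (factorial(p) * factorial(q))
    return form

# dx and dy on R^2 as R-valued 1-forms at a point:
dx = lambda v: np.array(v[0])
dy = lambda v: np.array(v[1])
vol = wedge(dx, 1, dy, 1)           # dx ^ dy
print(vol(np.array([1.0, 0.0]), np.array([0.0, 1.0])))   # 1.0
```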
For ω ∈ Ωp(M) and η ∈ Ωq(M, E) one has the usual commutativity relation: $\omega \wedge \eta =(-1)^{pq}\eta \wedge \omega .$

In general, the wedge product of two E-valued forms is not another E-valued form, but rather an (E⊗E)-valued form. However, if E is an algebra bundle (i.e. a bundle of algebras rather than just vector spaces) one can compose with multiplication in E to obtain an E-valued form. If E is a bundle of commutative, associative algebras then, with this modified wedge product, the set of all E-valued differential forms $\Omega (M,E)=\bigoplus _{p=0}^{\dim M}\Omega ^{p}(M,E)$ becomes a graded-commutative associative algebra. If the fibers of E are not commutative then Ω(M,E) will not be graded-commutative.

Exterior derivative

For any vector space V there is a natural exterior derivative on the space of V-valued forms. This is just the ordinary exterior derivative acting component-wise relative to any basis of V. Explicitly, if {eα} is a basis for V then the differential of a V-valued p-form ω = ωαeα is given by $d\omega =(d\omega ^{\alpha })e_{\alpha }.\,$

The exterior derivative on V-valued forms is completely characterized by the usual relations: ${\begin{aligned}&d(\omega +\eta )=d\omega +d\eta \\&d(\omega \wedge \eta )=d\omega \wedge \eta +(-1)^{p}\,\omega \wedge d\eta \qquad (p=\deg \omega )\\&d(d\omega )=0.\end{aligned}}$

More generally, the above remarks apply to E-valued forms where E is any flat vector bundle over M (i.e. a vector bundle whose transition functions are constant). The exterior derivative is defined as above on any local trivialization of E.

If E is not flat then there is no natural notion of an exterior derivative acting on E-valued forms. What is needed is a choice of connection on E. A connection on E is a linear differential operator taking sections of E to E-valued one-forms: $\nabla :\Omega ^{0}(M,E)\to \Omega ^{1}(M,E).$

If E is equipped with a connection ∇ then there is a unique covariant exterior derivative $d_{\nabla }:\Omega ^{p}(M,E)\to \Omega ^{p+1}(M,E)$ extending ∇. The covariant exterior derivative is characterized by linearity and the equation $d_{\nabla }(\omega \wedge \eta )=d_{\nabla }\omega \wedge \eta +(-1)^{p}\,\omega \wedge d\eta $ where ω is an E-valued p-form and η is an ordinary q-form. In general, one need not have $d_{\nabla }^{2}=0$. In fact, this happens if and only if the connection ∇ is flat (i.e. has vanishing curvature).

Basic or tensorial forms on principal bundles

Let E → M be a smooth vector bundle of rank k over M and let π : F(E) → M be the (associated) frame bundle of E, which is a principal GLk(R) bundle over M. The pullback of E by π is canonically isomorphic to F(E) ×ρ Rk via the inverse of [u, v] → u(v), where ρ is the standard representation. Therefore, the pullback by π of an E-valued form on M determines an Rk-valued form on F(E). It is not hard to check that this pulled back form is right-equivariant with respect to the natural action of GLk(R) on F(E) × Rk and vanishes on vertical vectors (tangent vectors to F(E) which lie in the kernel of dπ). Such vector-valued forms on F(E) are important enough to warrant special terminology: they are called basic or tensorial forms on F(E).

Let π : P → M be a (smooth) principal G-bundle and let V be a fixed vector space together with a representation ρ : G → GL(V). A basic or tensorial form on P of type ρ is a V-valued form ω on P which is equivariant and horizontal in the sense that
1. $(R_{g})^{*}\omega =\rho (g^{-1})\omega \,$ for all g ∈ G, and
2. $\omega (v_{1},\ldots ,v_{p})=0$ whenever at least one of the vi is vertical (i.e., dπ(vi) = 0).

Here Rg denotes the right action of G on P for g ∈ G. Note that for 0-forms the second condition is vacuously true.

Example: If ρ is the adjoint representation of G on the Lie algebra, then the connection form ω satisfies the first condition (but not the second). The associated curvature form Ω satisfies both; hence Ω is a tensorial form of adjoint type. The "difference" of two connection forms is a tensorial form.

Given P and ρ as above, one can construct the associated vector bundle E = P ×ρ V. Tensorial q-forms on P are in a natural one-to-one correspondence with E-valued q-forms on M. As in the case of the principal bundle F(E) above, given a q-form ${\overline {\phi }}$ on M with values in E, define φ on P fiberwise by, say at u, $\phi =u^{-1}\pi ^{*}{\overline {\phi }}$ where u is viewed as a linear isomorphism $V{\overset {\simeq }{\to }}E_{\pi (u)}=(\pi ^{*}E)_{u},v\mapsto [u,v]$. φ is then a tensorial form of type ρ. Conversely, given a tensorial form φ of type ρ, the same formula defines an E-valued form ${\overline {\phi }}$ on M (cf. the Chern–Weil homomorphism). In particular, there is a natural isomorphism of vector spaces $\Gamma (M,E)\simeq \{f:P\to V|f(ug)=\rho (g)^{-1}f(u)\},\,{\overline {f}}\leftrightarrow f$.

Example: Let E be the tangent bundle of M. Then the identity bundle map idE : E → E is an E-valued one-form on M. The tautological one-form is the unique one-form on the frame bundle of E that corresponds to idE. Denoted by θ, it is a tensorial form of standard type.

Now, suppose there is a connection on P, so that there is an exterior covariant differentiation D on (various) vector-valued forms on P. Through the above correspondence, D also acts on E-valued forms: define ∇ by $\nabla {\overline {\phi }}={\overline {D\phi }}.$ In particular for zero-forms, $\nabla :\Gamma (M,E)\to \Gamma (M,T^{*}M\otimes E)$. This is exactly the covariant derivative for the connection on the vector bundle E.[3]

Examples

Siegel modular forms arise as vector-valued differential forms on Siegel modular varieties.[4]

Notes

1. "Global sections of a tensor product of vector bundles on a smooth manifold". math.stackexchange.com. Retrieved 27 October 2014.
2. Proof: One can verify this for p = 0 by turning a basis for V into a set of constant functions to V, which allows the construction of an inverse to the above homomorphism. The general case can be proved by noting that $\Omega ^{p}(M,V)=\Omega ^{0}(M,V)\otimes _{\Omega ^{0}(M)}\Omega ^{p}(M),$ and that because $\mathbb {R} $ is a sub-ring of Ω0(M) via the constant functions, $\Omega ^{0}(M,V)\otimes _{\Omega ^{0}(M)}\Omega ^{p}(M)=(V\otimes _{\mathbb {R} }\Omega ^{0}(M))\otimes _{\Omega ^{0}(M)}\Omega ^{p}(M)=V\otimes _{\mathbb {R} }(\Omega ^{0}(M)\otimes _{\Omega ^{0}(M)}\Omega ^{p}(M))=V\otimes _{\mathbb {R} }\Omega ^{p}(M).$
3. Proof: $D(f\phi )=Df\otimes \phi +fD\phi $ for any scalar-valued tensorial zero-form f and any tensorial zero-form φ of type ρ, and Df = df since f descends to a function on M; cf. Lemma 2.
4. Hulek, Klaus; Sankaran, G. K. (2002). "The Geometry of Siegel Modular Varieties". Advanced Studies in Pure Mathematics. 35: 89–156.

References

• Shoshichi Kobayashi and Katsumi Nomizu (1963) Foundations of Differential Geometry, Vol. 1, Wiley Interscience.
Vector Analysis Vector Analysis is a textbook by Edwin Bidwell Wilson, first published in 1901 and based on the lectures that Josiah Willard Gibbs had delivered on the subject at Yale University. The book did much to standardize the notation and vocabulary of three-dimensional linear algebra and vector calculus, as used by physicists and mathematicians. It was reprinted by Yale in 1913, 1916, 1922, 1925, 1929, 1931, and 1943. The work is now in the public domain. It was reprinted by Dover Publications in 1960. For the branch of mathematics, see vector calculus. Title page to Vector Analysis by Edwin Bidwell Wilson (1907 copy). Contents The book carries the subtitle "A text-book for the use of students of mathematics and physics. Founded upon the lectures of J. Willard Gibbs, Ph.D., LL.D." The first chapter covers vectors in three spatial dimensions, the concept of a (real) scalar, and the product of a scalar with a vector. The second chapter introduces the dot and cross products for pairs of vectors. These are extended to a scalar triple product and a quadruple product. Pages 77–81 cover the essentials of spherical trigonometry, a topic of considerable interest at the time because of its use in celestial navigation. The third chapter introduces the vector calculus notation based on the del operator. The Helmholtz decomposition of a vector field is given on page 237. The final eight pages develop bivectors as these were integral to the course on the electromagnetic theory of light that Professor Gibbs taught at Yale. First Wilson associates a bivector with an ellipse. The product of the bivector with a complex number on the unit circle is then called an elliptical rotation. Wilson continues with a description of elliptic harmonic motion and the case of stationary waves. Genesis Professor Gibbs produced an 85-page outline of his treatment of vectors for use by his students and had sent a copy to Oliver Heaviside in 1888. In 1892 Heaviside, who was formulating his own vectorial system in the Transactions of the Royal Society, praised Gibbs' "little book", saying it "deserves to be well known". However, he also noted that it was "much too condensed for a first introduction to the subject".[1] On the occasion of the bicentennial of Yale University, a series of publications was to be issued to showcase Yale's role in the advancement of knowledge. Gibbs was authoring Elementary Principles in Statistical Mechanics for that series. Mindful of the demand for innovative university textbooks, the editor of the series, Professor Morris, also wished to include a volume dedicated to Gibbs's lectures on vectors, but Gibbs's time and attention were entirely absorbed by the Statistical Mechanics. E. B. Wilson was then a new graduate student in mathematics. He had learned about quaternions from James Mills Peirce at Harvard, but Dean A. W. Phillips persuaded him to take Gibbs's course on vectors, which treated similar problems from a rather different perspective. After Wilson had completed the course, Morris approached him about the project of producing a textbook. Wilson wrote the book by expanding his own class notes, providing exercises, and consulting with others (including his father).[2] References 1.
Oliver Heaviside (1892) "On the forces, stresses, and fluxes of energy in the electromagnetic field", Philosophical Transactions of the Royal Society of London A 183:423–80. 2. Edwin Bidwell Wilson (1931) "Reminiscences of Gibbs by a student and colleague", Bulletin of the American Mathematical Society 37(6):401–416. • Alexander Ziwet (review) Bulletin of the American Mathematical Association 8:207–15. • Anon. (review) Bulletin des sciences mathématiques 26:21–30. • Victor Schlegel (review) Jahrbuch über die Fortschritte der Mathematik 33:96–7. • Cargill Gilston Knott (review) Philosophical Magazine 6th Ser, 4:614–22. • Michael J. Crowe (1967) A History of Vector Analysis, Notre Dame University Press. External links • E. B. Wilson (1901) Vector Analysis, based on the Lectures of J. W. Gibbs at Internet Archive. • Edwin Bidwell Wilson (1913). Vector Analysis. Founded upon the lectures of J. William Gibbs. New Haven: Yale University Press – via Wikimedia Commons.
Vector algebra relations The following are important identities in vector algebra. Identities that involve the magnitude of a vector $\|\mathbf {A} \|$, or the dot product (scalar product) of two vectors A·B, apply to vectors in any dimension. Identities that use the cross product (vector product) A×B are defined only in three dimensions.[nb 1][1] See also: Vector calculus identities Magnitudes The magnitude of a vector A can be expressed using the dot product: $\|\mathbf {A} \|^{2}=\mathbf {A\cdot A} $ In three-dimensional Euclidean space, the magnitude of a vector is determined from its three components using Pythagoras' theorem: $\|\mathbf {A} \|^{2}=A_{1}^{2}+A_{2}^{2}+A_{3}^{2}$ Inequalities • The Cauchy–Schwarz inequality: $\mathbf {A} \cdot \mathbf {B} \leq \left\|\mathbf {A} \right\|\left\|\mathbf {B} \right\|$ • The triangle inequality: $\|\mathbf {A+B} \|\leq \|\mathbf {A} \|+\|\mathbf {B} \|$ • The reverse triangle inequality: $\|\mathbf {A-B} \|\geq {\Bigl |}\|\mathbf {A} \|-\|\mathbf {B} \|{\Bigr |}$ Angles The vector product and the scalar product of two vectors define the angle between them, say θ:[1][2] $\sin \theta ={\frac {\|\mathbf {A} \times \mathbf {B} \|}{\left\|\mathbf {A} \right\|\left\|\mathbf {B} \right\|}}\quad (-\pi <\theta \leq \pi )$ To satisfy the right-hand rule, for positive θ, vector B is counter-clockwise from A, and for negative θ it is clockwise. $\cos \theta ={\frac {\mathbf {A} \cdot \mathbf {B} }{\left\|\mathbf {A} \right\|\left\|\mathbf {B} \right\|}}\quad (-\pi <\theta \leq \pi )$ The Pythagorean trigonometric identity then provides: $\left\|\mathbf {A\times B} \right\|^{2}+(\mathbf {A} \cdot \mathbf {B} )^{2}=\left\|\mathbf {A} \right\|^{2}\left\|\mathbf {B} \right\|^{2}$ If a vector A = (Ax, Ay, Az) makes angles α, β, γ with an orthogonal set of x-, y- and z-axes, then: $\cos \alpha ={\frac {A_{x}}{\sqrt {A_{x}^{2}+A_{y}^{2}+A_{z}^{2}}}}={\frac {A_{x}}{\|\mathbf {A} \|}}\ ,$ and analogously for angles β, γ. Consequently: $\mathbf {A} =\left\|\mathbf {A} \right\|\left(\cos \alpha \ {\hat {\mathbf {i} }}+\cos \beta \ {\hat {\mathbf {j} }}+\cos \gamma \ {\hat {\mathbf {k} }}\right),$ with ${\hat {\mathbf {i} }},\ {\hat {\mathbf {j} }},\ {\hat {\mathbf {k} }}$ unit vectors along the axis directions. Areas and volumes The area Σ of a parallelogram with sides A and B containing the angle θ is: $\Sigma =AB\sin \theta ,$ which will be recognized as the magnitude of the vector cross product of the vectors A and B lying along the sides of the parallelogram. That is: $\Sigma =\left\|\mathbf {A} \times \mathbf {B} \right\|={\sqrt {\left\|\mathbf {A} \right\|^{2}\left\|\mathbf {B} \right\|^{2}-\left(\mathbf {A} \cdot \mathbf {B} \right)^{2}}}\ .$ (If A, B are two-dimensional vectors, this is equal to the determinant of the 2 × 2 matrix with rows A, B.) 
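The magnitude, angle, and area relations above are easy to confirm numerically. The following is a minimal numpy sketch with two arbitrarily chosen example vectors (any nonzero, non-parallel pair works):

```python
import numpy as np

A = np.array([1.0, 2.0, 2.0])   # arbitrary example vectors
B = np.array([3.0, 0.0, 4.0])

nA, nB = np.linalg.norm(A), np.linalg.norm(B)
cos_theta = A.dot(B) / (nA * nB)
sin_theta = np.linalg.norm(np.cross(A, B)) / (nA * nB)

# Pythagorean trigonometric identity, in the form given above:
# |A x B|^2 + (A . B)^2 = |A|^2 |B|^2
assert np.isclose(np.linalg.norm(np.cross(A, B))**2 + A.dot(B)**2, nA**2 * nB**2)

# parallelogram area two ways: |A x B| and sqrt(|A|^2 |B|^2 - (A . B)^2)
assert np.isclose(np.linalg.norm(np.cross(A, B)),
                  np.sqrt(nA**2 * nB**2 - A.dot(B)**2))

# direction cosines recompose the vector: A = |A| (cos a i + cos b j + cos c k)
i_hat, j_hat, k_hat = np.eye(3)
cos_abc = A / nA
assert np.allclose(A, nA * (cos_abc[0] * i_hat + cos_abc[1] * j_hat + cos_abc[2] * k_hat))
```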
The square of this expression is:[3] $\Sigma ^{2}=(\mathbf {A\cdot A} )(\mathbf {B\cdot B} )-(\mathbf {A\cdot B} )(\mathbf {B\cdot A} )=\Gamma (\mathbf {A} ,\ \mathbf {B} )\ ,$ where Γ(A, B) is the Gram determinant of A and B defined by: $\Gamma (\mathbf {A} ,\ \mathbf {B} )={\begin{vmatrix}\mathbf {A\cdot A} &\mathbf {A\cdot B} \\\mathbf {B\cdot A} &\mathbf {B\cdot B} \end{vmatrix}}\ .$ In a similar fashion, the squared volume $V^{2}$ of a parallelepiped spanned by the three vectors A, B, C is given by the Gram determinant of the three vectors:[3] $V^{2}=\Gamma (\mathbf {A} ,\ \mathbf {B} ,\ \mathbf {C} )={\begin{vmatrix}\mathbf {A\cdot A} &\mathbf {A\cdot B} &\mathbf {A\cdot C} \\\mathbf {B\cdot A} &\mathbf {B\cdot B} &\mathbf {B\cdot C} \\\mathbf {C\cdot A} &\mathbf {C\cdot B} &\mathbf {C\cdot C} \end{vmatrix}}\ .$ Since A, B, C are three-dimensional vectors, this is equal to the square of the scalar triple product $\det[\mathbf {A} ,\mathbf {B} ,\mathbf {C} ]=|\mathbf {A} ,\mathbf {B} ,\mathbf {C} |$ below. This process can be extended to n dimensions. Addition and multiplication of vectors (Several of the identities below are spot-checked numerically in the sketch at the end of this article.) • Commutativity of addition: $\mathbf {A} +\mathbf {B} =\mathbf {B} +\mathbf {A} $. • Commutativity of scalar product: $\mathbf {A} \cdot \mathbf {B} =\mathbf {B} \cdot \mathbf {A} $. • Anticommutativity of cross product: $\mathbf {A} \times \mathbf {B} =\mathbf {-B} \times \mathbf {A} $. • Distributivity of multiplication by a scalar over addition: $c(\mathbf {A} +\mathbf {B} )=c\mathbf {A} +c\mathbf {B} $. • Distributivity of scalar product over addition: $\left(\mathbf {A} +\mathbf {B} \right)\cdot \mathbf {C} =\mathbf {A} \cdot \mathbf {C} +\mathbf {B} \cdot \mathbf {C} $. • Distributivity of vector product over addition: $(\mathbf {A} +\mathbf {B} )\times \mathbf {C} =\mathbf {A} \times \mathbf {C} +\mathbf {B} \times \mathbf {C} $. • Scalar triple product: $\mathbf {A} \cdot (\mathbf {B} \times \mathbf {C} )=\mathbf {B} \cdot (\mathbf {C} \times \mathbf {A} )=\mathbf {C} \cdot (\mathbf {A} \times \mathbf {B} )=|\mathbf {A} \,\mathbf {B} \,\mathbf {C} |={\begin{vmatrix}A_{x}&B_{x}&C_{x}\\A_{y}&B_{y}&C_{y}\\A_{z}&B_{z}&C_{z}\end{vmatrix}}.$ • Vector triple product: $\mathbf {A} \times (\mathbf {B} \times \mathbf {C} )=(\mathbf {A} \cdot \mathbf {C} )\mathbf {B} -(\mathbf {A} \cdot \mathbf {B} )\mathbf {C} $. • Jacobi identity: $\mathbf {A} \times (\mathbf {B} \times \mathbf {C} )+\mathbf {C} \times (\mathbf {A} \times \mathbf {B} )+\mathbf {B} \times (\mathbf {C} \times \mathbf {A} )=\mathbf {0} .$ • Binet–Cauchy identity: $\mathbf {\left(A\times B\right)\cdot } \left(\mathbf {C} \times \mathbf {D} \right)=\left(\mathbf {A} \cdot \mathbf {C} \right)\left(\mathbf {B} \cdot \mathbf {D} \right)-\left(\mathbf {B} \cdot \mathbf {C} \right)\left(\mathbf {A} \cdot \mathbf {D} \right).$ • Lagrange's identity: $|\mathbf {A} \times \mathbf {B} |^{2}=(\mathbf {A} \cdot \mathbf {A} )(\mathbf {B} \cdot \mathbf {B} )-(\mathbf {A} \cdot \mathbf {B} )^{2}$.
• Vector quadruple product:[4][5] $(\mathbf {A} \times \mathbf {B} )\times (\mathbf {C} \times \mathbf {D} )\ =\ |\mathbf {A} \,\mathbf {B} \,\mathbf {D} |\,\mathbf {C} \,-\,|\mathbf {A} \,\mathbf {B} \,\mathbf {C} |\,\mathbf {D} \ =\ |\mathbf {A} \,\mathbf {C} \,\mathbf {D} |\,\mathbf {B} \,-\,|\mathbf {B} \,\mathbf {C} \,\mathbf {D} |\,\mathbf {A} .$ • A consequence of the previous equation:[6] $|\mathbf {A} \,\mathbf {B} \,\mathbf {C} |\,\mathbf {D} =(\mathbf {A} \cdot \mathbf {D} )\left(\mathbf {B} \times \mathbf {C} \right)+\left(\mathbf {B} \cdot \mathbf {D} \right)\left(\mathbf {C} \times \mathbf {A} \right)+\left(\mathbf {C} \cdot \mathbf {D} \right)\left(\mathbf {A} \times \mathbf {B} \right).$ • In 3 dimensions, a vector D can be expressed in terms of basis vectors {A,B,C} as:[7] $\mathbf {D} \ =\ {\frac {\mathbf {D} \cdot (\mathbf {B} \times \mathbf {C} )}{|\mathbf {A} \,\mathbf {B} \,\mathbf {C} |}}\ \mathbf {A} +{\frac {\mathbf {D} \cdot (\mathbf {C} \times \mathbf {A} )}{|\mathbf {A} \,\mathbf {B} \,\mathbf {C} |}}\ \mathbf {B} +{\frac {\mathbf {D} \cdot (\mathbf {A} \times \mathbf {B} )}{|\mathbf {A} \,\mathbf {B} \,\mathbf {C} |}}\ \mathbf {C} .$ See also • Vector space • Geometric algebra Notes 1. There is also a seven-dimensional cross product of vectors that relates to multiplication in the octonions, but it does not satisfy these three-dimensional identities. References 1. Lyle Frederick Albright (2008). "§2.5.1 Vector algebra". Albright's chemical engineering handbook. CRC Press. p. 68. ISBN 978-0-8247-5362-7. 2. Francis Begnaud Hildebrand (1992). Methods of applied mathematics (Reprint of Prentice-Hall 1965 2nd ed.). Courier Dover Publications. p. 24. ISBN 0-486-67002-3. 3. Richard Courant, Fritz John (2000). "Areas of parallelograms and volumes of parallelepipeds in higher dimensions". Introduction to calculus and analysis, Volume II (Reprint of original 1974 Interscience ed.). Springer. pp. 190–195. ISBN 3-540-66569-2. 4. Vidwan Singh Soni (2009). "§1.10.2 Vector quadruple product". Mechanics and relativity. PHI Learning Pvt. Ltd. pp. 11–12. ISBN 978-81-203-3713-8. 5. This formula is applied to spherical trigonometry by Edwin Bidwell Wilson, Josiah Willard Gibbs (1901). "§42 in Direct and skew products of vectors". Vector analysis: a text-book for the use of students of mathematics. Scribner. pp. 77ff. 6. "linear algebra - Cross-product identity". Mathematics Stack Exchange. Retrieved 2021-10-07. 7. Joseph George Coffin (1911). Vector analysis: an introduction to vector-methods and their various applications to physics and mathematics (2nd ed.). Wiley. p. 56.
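As a spot-check of the identities listed above, the following minimal numpy sketch uses four generically chosen 3-vectors (the random seed and vectors are arbitrary; any nondegenerate choice works):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = rng.standard_normal((4, 3))   # four generic 3-vectors

def triple(X, Y, Z):
    """Scalar triple product |X Y Z| = X . (Y x Z)."""
    return X.dot(np.cross(Y, Z))

# vector triple product: A x (B x C) = (A.C) B - (A.B) C
assert np.allclose(np.cross(A, np.cross(B, C)),
                   A.dot(C) * B - A.dot(B) * C)

# Jacobi identity
assert np.allclose(np.cross(A, np.cross(B, C)) + np.cross(C, np.cross(A, B))
                   + np.cross(B, np.cross(C, A)), 0.0)

# Binet–Cauchy identity
assert np.isclose(np.cross(A, B).dot(np.cross(C, D)),
                  A.dot(C) * B.dot(D) - B.dot(C) * A.dot(D))

# vector quadruple product: (A x B) x (C x D) = |A B D| C - |A B C| D
assert np.allclose(np.cross(np.cross(A, B), np.cross(C, D)),
                   triple(A, B, D) * C - triple(A, B, C) * D)

# expansion of D in the basis {A, B, C}
vol = triple(A, B, C)
D_rebuilt = (D.dot(np.cross(B, C)) * A + D.dot(np.cross(C, A)) * B
             + D.dot(np.cross(A, B)) * C) / vol
assert np.allclose(D, D_rebuilt)
```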
Vector area In 3-dimensional geometry and vector calculus, an area vector is a vector combining an area quantity with a direction, thus representing an oriented area in three dimensions. Every bounded surface in three dimensions can be associated with a unique area vector called its vector area. It is equal to the surface integral of the surface normal, and distinct from the usual (scalar) surface area. Vector area can be seen as the three dimensional generalization of signed area in two dimensions. Definition For a finite planar surface of scalar area S and unit normal n̂, the vector area S is defined as the unit normal scaled by the area: $\mathbf {S} =\mathbf {\hat {n}} S$ For an orientable surface S composed of a set Si of flat facet areas, the vector area of the surface is given by $\mathbf {S} =\sum _{i}\mathbf {\hat {n}} _{i}S_{i}$ where n̂i is the unit normal vector to the area Si. For bounded, oriented curved surfaces that are sufficiently well-behaved, we can still define vector area. First, we split the surface into infinitesimal elements, each of which is effectively flat. For each infinitesimal element of area, we have an area vector, also infinitesimal. $d\mathbf {S} =\mathbf {\hat {n}} dS$ where n̂ is the local unit vector perpendicular to dS. Integrating gives the vector area for the surface. $\mathbf {S} =\int d\mathbf {S} $ Properties The vector area of a surface can be interpreted as the (signed) projected area or "shadow" of the surface in the plane in which it is greatest; its direction is given by that plane's normal. For a curved or faceted (i.e. non-planar) surface, the vector area is smaller in magnitude than the actual surface area. As an extreme example, a closed surface can possess arbitrarily large area, but its vector area is necessarily zero.[1] Surfaces that share a boundary may have very different areas, but they must have the same vector area—the vector area is entirely determined by the boundary. These are consequences of Stokes' theorem. The vector area of a parallelogram is given by the cross product of the two vectors that span it; it is twice the (vector) area of the triangle formed by the same vectors. In general, the vector area of any surface whose boundary consists of a sequence of straight line segments (analogous to a polygon in two dimensions) can be calculated using a series of cross products corresponding to a triangulation of the surface. This is the generalization of the Shoelace formula to three dimensions. Using Stokes' theorem applied to an appropriately chosen vector field, a boundary integral for the vector area can be derived: $\mathbf {S} ={\frac {1}{2}}\oint _{\partial S}\mathbf {r} \times d\mathbf {r} $ where $\partial S$ is the boundary of S, i.e. one or more oriented closed space curves. This is analogous to the two dimensional area calculation using Green's theorem. Applications Area vectors are used when calculating surface integrals, such as when determining the flux of a vector field through a surface. The flux is given by the integral of the dot product of the field and the (infinitesimal) area vector. When the field is constant over the surface the integral simplifies to the dot product of the field and the vector area of the surface.
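For a surface bounded by straight line segments, the boundary integral $\mathbf {S} ={\tfrac {1}{2}}\oint _{\partial S}\mathbf {r} \times d\mathbf {r} $ above reduces to a finite sum of cross products. A short numpy sketch (the helper name vector_area and the square and triangle examples are illustrative choices, not from the article's sources):

```python
import numpy as np

def vector_area(vertices):
    """Vector area of a closed polygonal boundary: S = 1/2 * sum_i r_i x r_{i+1},
    the discrete form of S = (1/2) oint r x dr. Translation-invariant, since
    the closed-loop sum of the dr terms vanishes."""
    r = np.asarray(vertices, dtype=float)
    return 0.5 * np.cross(r, np.roll(r, -1, axis=0)).sum(axis=0)

# a unit square in the plane z = 1, traversed counter-clockwise seen from +z
square = [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
assert np.allclose(vector_area(square), [0.0, 0.0, 1.0])  # area 1, normal +z

# a triangle with edge vectors u, v from a point p has vector area (1/2) u x v,
# in agreement with the cross-product rule for parallelograms stated above
p = np.array([2.0, -1.0, 0.5])
u = np.array([1.0, 0.0, 1.0])
v = np.array([0.0, 2.0, 1.0])
assert np.allclose(vector_area([p, p + u, p + v]), 0.5 * np.cross(u, v))
```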
Projection of area onto planes The projected area onto a plane is given by the dot product of the vector area S and the target plane unit normal m̂: $A_{\parallel }=\mathbf {S} \cdot {\hat {\mathbf {m} }}$ For example, the projected area onto the xy-plane is equivalent to the z-component of the vector area, and is also equal to $S_{z}=\|\mathbf {S} \|\cos \theta $ where θ is the angle between the plane normal n̂ and the z-axis. See also • Bivector, representing an oriented area in any number of dimensions • De Gua's theorem, on the decomposition of vector area into orthogonal components • Cross product • Surface normal • Surface integral Notes 1. Spiegel, Murray R. (1959). Theory and problems of vector analysis. Schaum's Outline Series. McGraw Hill. p. 25.
Vector algebra In mathematics, vector algebra may mean: • Linear algebra, specifically the basic algebraic operations of vector addition and scalar multiplication; see vector space. • The algebraic operations in vector calculus, namely the specific additional structure of vectors in 3-dimensional Euclidean space $\mathbb {R} ^{3}$ of dot product and especially cross product. In this sense, vector algebra is contrasted with geometric algebra, which provides an alternative generalization to higher dimensions. • An algebra over a field, a vector space equipped with a bilinear product • Original vector algebras of the nineteenth century like quaternions, tessarines, or coquaternions, each of which has its own product. The vector algebras biquaternions and hyperbolic quaternions enabled the revolution in physics called special relativity by providing mathematical models.
Vector bornology In mathematics, especially functional analysis, a bornology ${\mathcal {B}}$ on a vector space $X$ over a field $\mathbb {K} ,$ where $\mathbb {K} $ has a bornology ${\mathcal {B}}_{\mathbb {K} },$ is called a vector bornology if ${\mathcal {B}}$ makes the vector space operations into bounded maps. Definitions Prerequisites Main article: Bornology A bornology on a set $X$ is a collection ${\mathcal {B}}$ of subsets of $X$ that satisfy all the following conditions: 1. ${\mathcal {B}}$ covers $X;$ that is, $X=\cup {\mathcal {B}}$ 2. ${\mathcal {B}}$ is stable under inclusions; that is, if $B\in {\mathcal {B}}$ and $A\subseteq B,$ then $A\in {\mathcal {B}}$ 3. ${\mathcal {B}}$ is stable under finite unions; that is, if $B_{1},\ldots ,B_{n}\in {\mathcal {B}}$ then $B_{1}\cup \cdots \cup B_{n}\in {\mathcal {B}}$ Elements of the collection ${\mathcal {B}}$ are called ${\mathcal {B}}$-bounded or simply bounded sets if ${\mathcal {B}}$ is understood. The pair $(X,{\mathcal {B}})$ is called a bounded structure or a bornological set. A base or fundamental system of a bornology ${\mathcal {B}}$ is a subset ${\mathcal {B}}_{0}$ of ${\mathcal {B}}$ such that each element of ${\mathcal {B}}$ is a subset of some element of ${\mathcal {B}}_{0}.$ Given a collection ${\mathcal {S}}$ of subsets of $X,$ the smallest bornology containing ${\mathcal {S}}$ is called the bornology generated by ${\mathcal {S}}.$[1] If $(X,{\mathcal {B}})$ and $(Y,{\mathcal {C}})$ are bornological sets then their product bornology on $X\times Y$ is the bornology having as a base the collection of all sets of the form $B\times C,$ where $B\in {\mathcal {B}}$ and $C\in {\mathcal {C}}.$[1] A subset of $X\times Y$ is bounded in the product bornology if and only if its images under the canonical projections onto $X$ and $Y$ are both bounded. If $(X,{\mathcal {B}})$ and $(Y,{\mathcal {C}})$ are bornological sets then a function $f:X\to Y$ is said to be a locally bounded map or a bounded map (with respect to these bornologies) if it maps ${\mathcal {B}}$-bounded subsets of $X$ to ${\mathcal {C}}$-bounded subsets of $Y;$ that is, if $f\left({\mathcal {B}}\right)\subseteq {\mathcal {C}}.$[1] If in addition $f$ is a bijection and $f^{-1}$ is also bounded then $f$ is called a bornological isomorphism. Vector bornology Let $X$ be a vector space over a field $\mathbb {K} $ where $\mathbb {K} $ has a bornology ${\mathcal {B}}_{\mathbb {K} }.$ A bornology ${\mathcal {B}}$ on $X$ is called a vector bornology on $X$ if it is stable under vector addition, scalar multiplication, and the formation of balanced hulls (i.e. if the sum of two bounded sets is bounded, etc.). If $X$ is a vector space and ${\mathcal {B}}$ is a bornology on $X,$ then the following are equivalent: 1. ${\mathcal {B}}$ is a vector bornology 2. Finite sums and balanced hulls of ${\mathcal {B}}$-bounded sets are ${\mathcal {B}}$-bounded[1] 3. The scalar multiplication map $\mathbb {K} \times X\to X$ defined by $(s,x)\mapsto sx$ and the addition map $X\times X\to X$ defined by $(x,y)\mapsto x+y,$ are both bounded when their domains carry their product bornologies (i.e. they map bounded subsets to bounded subsets)[1] A vector bornology ${\mathcal {B}}$ is called a convex vector bornology if it is stable under the formation of convex hulls (i.e.
the convex hull of a bounded set is bounded). A vector bornology ${\mathcal {B}}$ is called separated if the only bounded vector subspace of $X$ is the 0-dimensional trivial space $\{0\}.$ Usually, $\mathbb {K} $ is either the real or complex numbers, in which case a vector bornology ${\mathcal {B}}$ on $X$ will be called a convex vector bornology if ${\mathcal {B}}$ has a base consisting of convex sets. Characterizations Suppose that $X$ is a vector space over the field $\mathbb {F} $ of real or complex numbers and ${\mathcal {B}}$ is a bornology on $X.$ Then the following are equivalent: 1. ${\mathcal {B}}$ is a vector bornology 2. addition and scalar multiplication are bounded maps[1] 3. the balanced hull of every element of ${\mathcal {B}}$ is an element of ${\mathcal {B}}$ and the sum of any two elements of ${\mathcal {B}}$ is again an element of ${\mathcal {B}}$[1] Bornology on a topological vector space If $X$ is a topological vector space then the set of all bounded subsets of $X$ forms a vector bornology on $X$ called the von Neumann bornology of $X$, the usual bornology, or simply the bornology of $X$; it is also referred to as the natural boundedness.[1] In any locally convex topological vector space $X,$ the set of all closed bounded disks forms a base for the usual bornology of $X.$[1] Unless indicated otherwise, it is always assumed that the real or complex numbers are endowed with the usual bornology. Topology induced by a vector bornology Suppose that $X$ is a vector space over the field $\mathbb {K} $ of real or complex numbers and ${\mathcal {B}}$ is a vector bornology on $X.$ Let ${\mathcal {N}}$ denote all those subsets $N$ of $X$ that are convex, balanced, and bornivorous. Then ${\mathcal {N}}$ forms a neighborhood basis at the origin for a locally convex topological vector space topology. Examples Locally convex space of bounded functions Let $\mathbb {K} $ be the real or complex numbers (endowed with their usual bornologies), let $(T,{\mathcal {B}})$ be a bounded structure, and let $LB(T,\mathbb {K} )$ denote the vector space of all locally bounded $\mathbb {K} $-valued maps on $T.$ For every $B\in {\mathcal {B}},$ let $p_{B}(f):=\sup \left|f(B)\right|$ for all $f\in LB(T,\mathbb {K} );$ each $p_{B}$ defines a seminorm on $LB(T,\mathbb {K} ).$ The locally convex topological vector space topology on $LB(T,\mathbb {K} )$ defined by the family of seminorms $\left\{p_{B}:B\in {\mathcal {B}}\right\}$ is called the topology of uniform convergence on bounded sets.[1] This topology makes $LB(T,\mathbb {K} )$ into a complete space.[1] Bornology of equicontinuity Let $T$ be a topological space, $\mathbb {K} $ be the real or complex numbers, and let $C(T,\mathbb {K} )$ denote the vector space of all continuous $\mathbb {K} $-valued maps on $T.$ The set of all equicontinuous subsets of $C(T,\mathbb {K} )$ forms a vector bornology on $C(T,\mathbb {K} ).$[1] See also • Bornivorous set • Bornological space • Bornology • Space of linear maps • Ultrabornological space Citations 1. Narici & Beckenstein 2011, pp. 156–175. Bibliography • Hogbe-Nlend, Henri (1977). Bornologies and Functional Analysis: Introductory Course on the Theory of Duality Topology-Bornology and its use in Functional Analysis. North-Holland Mathematics Studies. Vol. 26. Amsterdam New York New York: North Holland. ISBN 978-0-08-087137-0. MR 0500064. OCLC 316549583. • Kriegl, Andreas; Michor, Peter W. (1997). The Convenient Setting of Global Analysis. Mathematical Surveys and Monographs. American Mathematical Society.
ISBN 978-082180780-4. • Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. • Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Connection (vector bundle) In mathematics, and especially differential geometry and gauge theory, a connection on a fiber bundle is a device that defines a notion of parallel transport on the bundle; that is, a way to "connect" or identify fibers over nearby points. The most common case is that of a linear connection on a vector bundle, for which the notion of parallel transport must be linear. A linear connection is equivalently specified by a covariant derivative, an operator that differentiates sections of the bundle along tangent directions in the base manifold, in such a way that parallel sections have derivative zero. Linear connections generalize, to arbitrary vector bundles, the Levi-Civita connection on the tangent bundle of a pseudo-Riemannian manifold, which gives a standard way to differentiate vector fields. Nonlinear connections generalize this concept to bundles whose fibers are not necessarily linear. This article is about connections on vector bundles. For other types of connections in mathematics, see connection (mathematics). Linear connections are also called Koszul connections after Jean-Louis Koszul, who gave an algebraic framework for describing them (Koszul 1950). This article defines the connection on a vector bundle using a common mathematical notation which de-emphasizes coordinates. However, other notations are also regularly used: in general relativity, vector bundle computations are usually written using indexed tensors; in gauge theory, the endomorphisms of the vector space fibers are emphasized. The different notations are equivalent, as discussed in the article on metric connections (the comments made there apply to all vector bundles). Motivation Let M be a differentiable manifold, such as Euclidean space. A vector-valued function $M\to \mathbb {R} ^{n}$ can be viewed as a section of the trivial vector bundle $M\times \mathbb {R} ^{n}\to M.$ One may consider a section of a general differentiable vector bundle, and it is therefore natural to ask if it is possible to differentiate a section, as a generalization of how one differentiates a function on M. The model case is to differentiate a function $X:\mathbb {R} ^{n}\to \mathbb {R} ^{m}$ on Euclidean space $\mathbb {R} ^{n}$. In this setting the derivative $dX$ at a point $x\in \mathbb {R} ^{n}$ in the direction $v\in \mathbb {R} ^{n}$ may be defined by the standard formula $dX(v)(x)=\lim _{t\to 0}{\frac {X(x+tv)-X(x)}{t}}.$ For every $x\in \mathbb {R} ^{n}$, this defines a new vector $dX(v)(x)\in \mathbb {R} ^{m}.$ When passing to a section $X$ of a vector bundle $E$ over a manifold $M$, one encounters two key issues with this definition. Firstly, since the manifold has no linear structure, the term $x+tv$ makes no sense on $M$. Instead one takes a path $\gamma :(-1,1)\to M$ such that $\gamma (0)=x,\gamma '(0)=v$ and computes $dX(v)(x)=\lim _{t\to 0}{\frac {X(\gamma (t))-X(\gamma (0))}{t}}.$ However this still does not make sense, because $X(\gamma (t))$ and $X(\gamma (0))$ are elements of the distinct vector spaces $E_{\gamma (t)}$ and $E_{x}.$ This means that subtraction of these two terms is not naturally defined. The problem is resolved by introducing the extra structure of a connection to the vector bundle. There are at least three perspectives from which connections can be understood. When formulated precisely, all three perspectives are equivalent. 1.
(Parallel transport) A connection can be viewed as assigning to every differentiable path $\gamma $ a linear isomorphism $P_{t}^{\gamma }:E_{\gamma (t)}\to E_{x}$ for all $t.$ Using this isomorphism one can transport $X(\gamma (t))$ to the fibre $E_{x}$ and then take the difference; explicitly, $\nabla _{v}X=\lim _{t\to 0}{\frac {P_{t}^{\gamma }X(\gamma (t))-X(\gamma (0))}{t}}.$ In order for this to depend only on $v,$ and not on the path $\gamma $ extending $v,$ it is necessary to place restrictions (in the definition) on the dependence of $P_{t}^{\gamma }$ on $\gamma .$ This is not straightforward to formulate, and so this notion of "parallel transport" is usually derived as a by-product of other ways of defining connections. In fact, the following notion of "Ehresmann connection" is nothing but an infinitesimal formulation of parallel transport. 2. (Ehresmann connection) The section $X$ may be viewed as a smooth map from the smooth manifold $M$ to the smooth manifold $E.$ As such, one may consider the pushforward $dX(v),$ which is an element of the tangent space $T_{X(x)}E.$ In Ehresmann's formulation of a connection, one chooses a way of assigning, to each $x$ and every $e\in E_{x},$ a direct sum decomposition of $T_{X(x)}E$ into two linear subspaces, one of which is the natural embedding of $E_{x}.$ With this additional data, one defines $\nabla _{v}X$ by projecting $dX(v)$ to be valued in $E_{x}.$ In order to respect the linear structure of a vector bundle, one imposes additional restrictions on how the direct sum decomposition of $T_{e}E$ moves as e is varied over a fiber. 3. (Covariant derivative) The standard derivative $dX(v)$ in Euclidean contexts satisfies certain dependencies on $X$ and $v,$ the most fundamental being linearity. A covariant derivative is defined to be any operation $(v,X)\mapsto \nabla _{v}X$ which mimics these properties, together with a form of the product rule. Unless the base is zero-dimensional, there are always infinitely many connections which exist on a given differentiable vector bundle, and so there is always a corresponding choice of how to differentiate sections. Depending on context, there may be distinguished choices, for instance those which are determined by solving certain partial differential equations. In the case of the tangent bundle, any pseudo-Riemannian metric (and in particular any Riemannian metric) determines a canonical connection, called the Levi-Civita connection. Formal definition Let $E\to M$ be a smooth real vector bundle over a smooth manifold $M$. Denote the space of smooth sections of $E\to M$ by $\Gamma (E)$. A covariant derivative on $E\to M$ is either of the following equivalent structures: 1. an $\mathbb {R} $-linear map $\nabla :\Gamma (E)\to \Gamma (T^{*}M\otimes E)$ such that the product rule $\nabla (fs)=df\otimes s+f\nabla s$ holds for all smooth functions $f$ on $M$ and all smooth sections $s$ of $E.$ 2.
an assignment, to any smooth section s and every $x\in M$, of an $\mathbb {R} $-linear map $(\nabla s)_{x}:T_{x}M\to E_{x}$ which depends smoothly on x and such that $\nabla (a_{1}s_{1}+a_{2}s_{2})=a_{1}\nabla s_{1}+a_{2}\nabla s_{2}$ for any two smooth sections $s_{1},s_{2}$ and any real numbers $a_{1},a_{2},$ and such that for every smooth function $f$, $\nabla (fs)$ is related to $\nabla s$ by ${\big (}\nabla (fs){\big )}_{x}(v)=df(v)s(x)+f(x)(\nabla s)_{x}(v)$ for any $x\in M$ and $v\in T_{x}M.$ Beyond using the canonical identification between the vector space $T_{x}^{\ast }M\otimes E_{x}$ and the vector space of linear maps $T_{x}M\to E_{x},$ these two definitions are identical and differ only in the language used. It is typical to denote $(\nabla s)_{x}(v)$ by $\nabla _{v}s,$ with $x$ being implicit in $v.$ With this notation, the product rule in the second version of the definition given above is written $\nabla _{v}(fs)=df(v)s+f\nabla _{v}s.$ Remark. In the case of a complex vector bundle, the above definition is still meaningful, but is usually taken to be modified by changing "real" and "ℝ" everywhere they appear to "complex" and "$\mathbb {C} .$" This places extra restrictions, as not every real-linear map between complex vector spaces is complex-linear. There is some ambiguity in this distinction, as a complex vector bundle can also be regarded as a real vector bundle. Induced connections Given a vector bundle $E\to M$, there are many associated bundles to $E$ which may be constructed, for example the dual vector bundle $E^{*}$, tensor powers $E^{\otimes k}$, symmetric and antisymmetric tensor powers $S^{k}E,\Lambda ^{k}E$, and the direct sums $E^{\oplus k}$. A connection on $E$ induces a connection on any one of these associated bundles. The ease of passing between connections on associated bundles is more elegantly captured by the theory of principal bundle connections, but here we present some of the basic induced connections. Dual connection Given a connection $\nabla $ on $E$, the induced dual connection $\nabla ^{*}$ on $E^{*}$ is defined implicitly by $d(\langle \xi ,s\rangle )(X)=\langle \nabla _{X}^{*}\xi ,s\rangle +\langle \xi ,\nabla _{X}s\rangle .$ Here $X\in \Gamma (TM)$ is a smooth vector field, $s\in \Gamma (E)$ is a section of $E$, $\xi \in \Gamma (E^{*})$ is a section of the dual bundle, and $\langle \cdot ,\cdot \rangle $ is the natural pairing between a vector space and its dual (occurring on each fibre between $E$ and $E^{*}$), i.e., $\langle \xi ,s\rangle :=\xi (s)$. Notice that this definition is essentially enforcing that $\nabla ^{*}$ be the connection on $E^{*}$ so that a natural product rule is satisfied for the pairing $\langle \cdot ,\cdot \rangle $. Tensor product connection Given $\nabla ^{E},\nabla ^{F}$ connections on two vector bundles $E,F\to M$, define the tensor product connection by the formula $(\nabla ^{E}\otimes \nabla ^{F})_{X}(s\otimes t)=\nabla _{X}^{E}(s)\otimes t+s\otimes \nabla _{X}^{F}(t).$ Here we have $s\in \Gamma (E),t\in \Gamma (F),X\in \Gamma (TM)$. Notice again this is the natural way of combining $\nabla ^{E},\nabla ^{F}$ to enforce the product rule for the tensor product connection. By repeated application of the above construction applied to the tensor product $E^{\otimes k}=(E^{\otimes (k-1)})\otimes E$, one also obtains the tensor power connection on $E^{\otimes k}$ for any $k\geq 1$ and vector bundle $E$.
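The dual and tensor product constructions can be made concrete at the level of local connection matrices (anticipating the local expression $\nabla =d+A$ discussed later in this article). A minimal numpy sketch, with arbitrary numerical matrices standing in for the values of the connection forms on one fixed tangent vector; it illustrates the zero-order part of the defining product rules under these assumptions, and is not a general proof:

```python
import numpy as np

rng = np.random.default_rng(1)
k, m = 2, 3
A_E = rng.standard_normal((k, k))  # local connection matrix of E, evaluated on a fixed tangent vector
A_F = rng.standard_normal((m, m))  # local connection matrix of F, same tangent vector

xi, s, t = rng.standard_normal(k), rng.standard_normal(k), rng.standard_normal(m)

# dual connection on E*: matrix -A_E^T. The A-dependent parts of the product
# rule d<xi, s> = <nabla* xi, s> + <xi, nabla s> must cancel for constant xi, s:
assert np.isclose((-A_E.T @ xi) @ s + xi @ (A_E @ s), 0.0)

# tensor product connection on E (x) F: matrix A_E (x) I + I (x) A_F,
# the zero-order shadow of the Leibniz rule for nabla^E (x) nabla^F
A_ten = np.kron(A_E, np.eye(m)) + np.kron(np.eye(k), A_F)
assert np.allclose(A_ten @ np.kron(s, t),
                   np.kron(A_E @ s, t) + np.kron(s, A_F @ t))
```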
Direct sum connection The direct sum connection is defined by $(\nabla ^{E}\oplus \nabla ^{F})_{X}(s\oplus t)=\nabla _{X}^{E}(s)\oplus \nabla _{X}^{F}(t),$ where $s\oplus t\in \Gamma (E\oplus F)$. Symmetric and exterior power connections Since the symmetric power and exterior power of a vector bundle may be viewed naturally as subspaces of the tensor power, $S^{k}E,\Lambda ^{k}E\subset E^{\otimes k}$, the definition of the tensor product connection applies in a straightforward manner to this setting. Indeed, since the symmetric and exterior algebras sit inside the tensor algebra as direct summands, and the connection $\nabla $ respects this natural splitting, one can simply restrict $\nabla $ to these summands. Explicitly, define the symmetric product connection by $\nabla _{X}^{\odot 2}(s\cdot t)=\nabla _{X}s\odot t+s\odot \nabla _{X}t$ and the exterior product connection by $\nabla _{X}^{\wedge 2}(s\wedge t)=\nabla _{X}s\wedge t+s\wedge \nabla _{X}t$ for all $s,t\in \Gamma (E),X\in \Gamma (TM)$. Repeated applications of these products give induced symmetric power and exterior power connections on $S^{k}E$ and $\Lambda ^{k}E$ respectively. Endomorphism connection Finally, one may define the induced connection $\nabla ^{\operatorname {End} {E}}$ on the vector bundle of endomorphisms $\operatorname {End} (E)=E^{*}\otimes E$, the endomorphism connection. This is simply the tensor product connection of the dual connection $\nabla ^{*}$ on $E^{*}$ and $\nabla $ on $E$. If $s\in \Gamma (E)$ and $u\in \Gamma (\operatorname {End} (E))$, so that also $u(s)\in \Gamma (E)$, then the following product rule holds for the endomorphism connection: $\nabla _{X}(u(s))=\nabla _{X}^{\operatorname {End} (E)}(u)(s)+u(\nabla _{X}(s)).$ By reversing this equation, it is possible to define the endomorphism connection as the unique connection satisfying $\nabla _{X}^{\operatorname {End} (E)}(u)(s)=\nabla _{X}(u(s))-u(\nabla _{X}(s))$ for any $u,s,X$, thus avoiding the need to first define the dual connection and tensor product connection. Any associated bundle See also: Connection (principal bundle) Given a vector bundle $E$ of rank $r$, and any representation $\rho :\mathrm {GL} (r,\mathbb {K} )\to G$ into a linear group $G\subset \mathrm {GL} (V)$, there is an induced connection on the associated vector bundle $F=E\times _{\rho }V$. This theory is most succinctly captured by passing to the principal bundle connection on the frame bundle of $E$ and using the theory of principal bundles. Each of the above examples can be seen as special cases of this construction: the dual bundle corresponds to the inverse transpose (or inverse adjoint) representation, the tensor product to the tensor product representation, the direct sum to the direct sum representation, and so on. Exterior covariant derivative and vector-valued forms See also: Exterior covariant derivative Let $E\to M$ be a vector bundle. An $E$-valued differential form of degree $r$ is a section of the tensor product bundle: $\bigwedge ^{r}T^{*}M\otimes E.$ The space of such forms is denoted by $\Omega ^{r}(E)=\Omega ^{r}(M;E)=\Gamma \left(\bigwedge ^{r}T^{*}M\otimes E\right)=\Omega ^{r}(M)\otimes _{C^{\infty }(M)}\Gamma (E),$ where the last tensor product denotes the tensor product of modules over the ring of smooth functions on $M$. An $E$-valued 0-form is just a section of the bundle $E$.
That is, $\Omega ^{0}(E)=\Gamma (E).$ In this notation a connection on $E\to M$ is a linear map $\nabla :\Omega ^{0}(E)\to \Omega ^{1}(E).$ A connection may then be viewed as a generalization of the exterior derivative to vector bundle valued forms. In fact, given a connection $\nabla $ on $E$ there is a unique way to extend $\nabla $ to an exterior covariant derivative $d_{\nabla }:\Omega ^{r}(E)\to \Omega ^{r+1}(E).$ This exterior covariant derivative is defined by the following Leibniz rule, which is specified on simple tensors of the form $\omega \otimes s$ and extended linearly: $d_{\nabla }(\omega \otimes s)=d\omega \otimes s+(-1)^{\deg \omega }\omega \wedge \nabla s$ where $\omega \in \Omega ^{r}(M)$ so that $\deg \omega =r$, $s\in \Gamma (E)$ is a section, and $\omega \wedge \nabla s$ denotes the $(r+1)$-form with values in $E$ defined by wedging $\omega $ with the one-form part of $\nabla s$. Notice that for $E$-valued 0-forms, this recovers the normal Leibniz rule for the connection $\nabla $. Unlike the ordinary exterior derivative, one generally has $d_{\nabla }^{2}\neq 0$. In fact, $d_{\nabla }^{2}$ is directly related to the curvature of the connection $\nabla $ (see below). Affine properties of the set of connections Every vector bundle over a manifold admits a connection, which can be proved using partitions of unity. However, connections are not unique. If $\nabla _{1}$ and $\nabla _{2}$ are two connections on $E\to M$ then their difference is a $C^{\infty }(M)$-linear operator. That is, $(\nabla _{1}-\nabla _{2})(fs)=f(\nabla _{1}s-\nabla _{2}s)$ for all smooth functions $f$ on $M$ and all smooth sections $s$ of $E$. It follows that the difference $\nabla _{1}-\nabla _{2}$ can be uniquely identified with a one-form on $M$ with values in the endomorphism bundle $\operatorname {End} (E)=E^{*}\otimes E$: $\nabla _{1}-\nabla _{2}\in \Omega ^{1}(M;\mathrm {End} \,E).$ Conversely, if $\nabla $ is a connection on $E$ and $A$ is a one-form on $M$ with values in $\operatorname {End} (E)$, then $\nabla +A$ is a connection on $E$. In other words, the space of connections on $E$ is an affine space for $\Omega ^{1}(\operatorname {End} (E))$. This affine space is commonly denoted ${\mathcal {A}}$. Relation to principal and Ehresmann connections Let $E\to M$ be a vector bundle of rank $k$ and let ${\mathcal {F}}(E)$ be the frame bundle of $E$. Then a (principal) connection on ${\mathcal {F}}(E)$ induces a connection on $E$. First note that sections of $E$ are in one-to-one correspondence with right-equivariant maps ${\mathcal {F}}(E)\to \mathbb {R} ^{k}$. (This can be seen by considering the pullback of $E$ over ${\mathcal {F}}(E)\to M$, which is isomorphic to the trivial bundle ${\mathcal {F}}(E)\times \mathbb {R} ^{k}$.) Given a section $s$ of $E$ let the corresponding equivariant map be $\psi (s)$. The covariant derivative on $E$ is then given by $\psi (\nabla _{X}s)=X^{H}(\psi (s))$ where $X^{H}$ is the horizontal lift of $X$ from $M$ to ${\mathcal {F}}(E)$. (Recall that the horizontal lift is determined by the connection on ${\mathcal {F}}(E)$.) Conversely, a connection on $E$ determines a connection on ${\mathcal {F}}(E)$, and these two constructions are mutually inverse. A connection on $E$ is also determined equivalently by a linear Ehresmann connection on $E$. This provides one method to construct the associated principal connection.
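The claim that the difference of two connections is $C^{\infty }(M)$-linear can be checked symbolically in a local frame. A minimal sympy sketch over a one-dimensional base with a rank-2 trivialised bundle (the matrices A1, A2 are arbitrary choices, not canonical objects):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)   # arbitrary smooth function on the base
s = sp.Matrix([sp.Function('s1')(x), sp.Function('s2')(x)])   # local section components

def connection(A):
    """Local covariant derivative nabla = d/dx + A(x) in a fixed frame."""
    return lambda sec: sec.diff(x) + A * sec

A1 = sp.Matrix([[x, 1], [0, sp.sin(x)]])      # two arbitrary connection matrices
A2 = sp.Matrix([[1, x**2], [sp.exp(x), 0]])
n1, n2 = connection(A1), connection(A2)

# each connection obeys the Leibniz rule nabla(f s) = f' s + f nabla(s) ...
assert sp.simplify(n1(f * s) - (sp.diff(f, x) * s + f * n1(s))) == sp.zeros(2, 1)

# ... so in the difference the derivative terms cancel and only an
# endomorphism-valued term survives: (n1 - n2)(f s) = f (n1 - n2)(s)
assert sp.simplify((n1(f * s) - n2(f * s)) - f * (n1(s) - n2(s))) == sp.zeros(2, 1)
```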
The induced connections discussed in the section on induced connections above can be constructed as connections on other associated bundles to the frame bundle of $E$, using representations other than the standard representation used above. For example if $\rho $ denotes the standard representation of $\operatorname {GL} (k,\mathbb {R} )$ on $\mathbb {R} ^{k}$, then the associated bundle to the representation $\rho \oplus \rho $ of $\operatorname {GL} (k,\mathbb {R} )$ on $\mathbb {R} ^{k}\oplus \mathbb {R} ^{k}$ is the direct sum bundle $E\oplus E$, and the induced connection is precisely that which was described above. Local expression Let $E\to M$ be a vector bundle of rank $k$, and let $U$ be an open subset of $M$ over which $E$ trivialises. Therefore over the set $U$, $E$ admits a local smooth frame of sections $\mathbf {e} =(e_{1},\dots ,e_{k});\quad e_{i}:U\to \left.E\right|_{U}.$ Since the frame $\mathbf {e} $ defines a basis of the fibre $E_{x}$ for any $x\in U$, one can expand any local section $s:U\to \left.E\right|_{U}$ in the frame as $s=\sum _{i=1}^{k}s^{i}e_{i}$ for a collection of smooth functions $s^{1},\dots ,s^{k}:U\to \mathbb {R} $. Given a connection $\nabla $ on $E$, it is possible to express $\nabla $ over $U$ in terms of the local frame of sections, by using the characteristic product rule for the connection. For any basis section $e_{i}$, the quantity $\nabla (e_{i})\in \Omega ^{1}(U)\otimes \Gamma (U,E)$ may be expanded in the local frame $\mathbf {e} $ as $\nabla (e_{i})=\sum _{j=1}^{k}A_{i}^{\ j}\otimes e_{j},$ where $A_{i}^{\ j}\in \Omega ^{1}(U);\,j=1,\dots ,k$ are a collection of local one-forms. These forms can be put into a matrix of one-forms defined by $A={\begin{pmatrix}A_{1}^{\ 1}&\cdots &A_{k}^{\ 1}\\\vdots &\ddots &\vdots \\A_{1}^{\ k}&\cdots &A_{k}^{\ k}\end{pmatrix}}\in \Omega ^{1}(U,\operatorname {End} (\left.E\right|_{U}))$ called the local connection form of $\nabla $ over $U$. The action of $\nabla $ on any section $s:U\to \left.E\right|_{U}$ can be computed in terms of $A$ using the product rule as $\nabla (s)=\sum _{j=1}^{k}\left(ds^{j}+\sum _{i=1}^{k}A_{i}^{\ j}s^{i}\right)\otimes e_{j}.$ If the local section $s$ is also written in matrix notation as a column vector using the local frame $\mathbf {e} $ as a basis, $s={\begin{pmatrix}s^{1}\\\vdots \\s^{k}\end{pmatrix}},$ then using regular matrix multiplication one can write $\nabla (s)=ds+As$ where $ds$ is shorthand for applying the exterior derivative $d$ to each component of $s$ as a column vector. In this notation, one often writes locally that $\left.\nabla \right|_{U}=d+A$. In this sense a connection is locally completely specified by its connection one-form in some trivialisation. As explained in the section on affine properties above, any connection differs from another by an endomorphism-valued one-form. From this perspective, the connection one-form $A$ is precisely the endomorphism-valued one-form by which the connection $\left.\nabla \right|_{U}$ on $\left.E\right|_{U}$ differs from the trivial connection $d$ on $\left.E\right|_{U}$ (which exists because $U$ is a trivialising set for $E$). Relationship to Christoffel symbols In pseudo-Riemannian geometry, the Levi-Civita connection is often written in terms of the Christoffel symbols $\Gamma _{ij}^{\ \ k}$ instead of the connection one-form $A$. It is possible to define Christoffel symbols for a connection on any vector bundle, and not just the tangent bundle of a pseudo-Riemannian manifold.
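The transformation law for the connection form can be verified directly in coordinates. A small sympy sketch over a one-dimensional base, with an arbitrary invertible frame-change matrix g(x) (component conventions as in the matrix notation above, where section components transform as s' = g s):

```python
import sympy as sp

x = sp.symbols('x')
s = sp.Matrix([sp.Function('s1')(x), sp.Function('s2')(x)])  # components in frame e
A = sp.Matrix([[x, 1], [0, sp.sin(x)]])       # local connection form (dx-component)
g = sp.Matrix([[1, x], [0, sp.exp(x)]])       # invertible change of frame

# transformed data: s' = g s and A' = g A g^{-1} - (dg) g^{-1}
s_new = g * s
A_new = g * A * g.inv() - g.diff(x) * g.inv()

# the covariant derivative is frame-independent: the components of nabla(s)
# computed in the new frame equal g times the components in the old frame
lhs = s_new.diff(x) + A_new * s_new
rhs = g * (s.diff(x) + A * s)
assert sp.simplify(lhs - rhs) == sp.zeros(2, 1)
```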
To do this, suppose that, in addition to being a trivialising open subset for the vector bundle $E\to M$, $U$ is also a local chart for the manifold $M$, admitting local coordinates $\mathbf {x} =(x^{1},\dots ,x^{n});\quad x^{i}:U\to \mathbb {R} $. In such a local chart, there is a distinguished local frame for the differential one-forms given by $(dx^{1},\dots ,dx^{n})$, and the local connection one-forms $A_{i}^{\ j}$ can be expanded in this basis as $A_{i}^{\ j}=\sum _{\ell =1}^{n}\Gamma _{\ell i}^{\ \ j}dx^{\ell }$ for a collection of local smooth functions $\Gamma _{\ell i}^{\ \ j}:U\to \mathbb {R} $, called the Christoffel symbols of $\nabla $ over $U$. In the case where $E=TM$ and $\nabla $ is the Levi-Civita connection, these symbols agree precisely with the Christoffel symbols from pseudo-Riemannian geometry. The expression for how $\nabla $ acts in local coordinates can be further expanded in terms of the local chart $U$ and the Christoffel symbols, to be given by $\nabla (s)=\sum _{i,j=1}^{k}\sum _{\ell =1}^{n}\left({\frac {\partial s^{j}}{\partial x^{\ell }}}+\Gamma _{\ell i}^{\ \ j}s^{i}\right)dx^{\ell }\otimes e_{j}.$ Contracting this expression with the local coordinate tangent vector ${\frac {\partial }{\partial x^{\ell }}}$ leads to $\nabla _{\frac {\partial }{\partial x^{\ell }}}(s)=\sum _{i,j=1}^{k}\left({\frac {\partial s^{j}}{\partial x^{\ell }}}+\Gamma _{\ell i}^{\ \ j}s^{i}\right)e_{j}.$ This defines a collection of $n$ locally defined operators $\nabla _{\ell }:\Gamma (U,E)\to \Gamma (U,E);\quad \nabla _{\ell }(s):=\sum _{i,j=1}^{k}\left({\frac {\partial s^{j}}{\partial x^{\ell }}}+\Gamma _{\ell i}^{\ \ j}s^{i}\right)e_{j},$ with the property that $\nabla (s)=\sum _{\ell =1}^{n}dx^{\ell }\otimes \nabla _{\ell }(s).$ Change of local trivialisation Suppose $\mathbf {e'} $ is another choice of local frame over the same trivialising set $U$, so that there is a matrix $g=(g_{i}^{\ j})$ of smooth functions relating $\mathbf {e} $ and $\mathbf {e'} $, defined by $e_{i}=\sum _{j=1}^{k}g_{i}^{\ j}e'_{j}.$ Tracing through the construction of the local connection form $A$ for the frame $\mathbf {e} $, one finds that the connection one-form $A'$ for $\mathbf {e'} $ is given by ${A'}_{i}^{\ j}=\sum _{p,q=1}^{k}g_{p}^{\ j}A_{q}^{\ p}{(g^{-1})}_{i}^{\ q}-\sum _{p=1}^{k}(dg)_{p}^{\ j}{(g^{-1})}_{i}^{\ p}$ where $g^{-1}=\left({(g^{-1})}_{i}^{\ j}\right)$ denotes the inverse matrix to $g$. In matrix notation this may be written $A'=gAg^{-1}-(dg)g^{-1}$ where $dg$ is the matrix of one-forms given by taking the exterior derivative of the matrix $g$ component-by-component. In the case where $E=TM$ is the tangent bundle and $g$ is the Jacobian of a coordinate transformation of $M$, the lengthy formulae for the transformation of the Christoffel symbols of the Levi-Civita connection can be recovered from the more succinct transformation laws of the connection form above. Parallel transport and holonomy A connection $\nabla $ on a vector bundle $E\to M$ defines a notion of parallel transport on $E$ along a curve in $M$. Let $\gamma :[0,1]\to M$ be a smooth path in $M$. A section $s$ of $E$ along $\gamma $ is said to be parallel if $\nabla _{{\dot {\gamma }}(t)}s=0$ for all $t\in [0,1]$. Equivalently, one can consider the pullback bundle $\gamma ^{*}E$ of $E$ by $\gamma $. This is a vector bundle over $[0,1]$ with fiber $E_{\gamma (t)}$ over $t\in [0,1]$. The connection $\nabla $ on $E$ pulls back to a connection on $\gamma ^{*}E$.
A section $s$ of $\gamma ^{*}E$ is parallel if and only if $\gamma ^{*}\nabla (s)=0$. Suppose $\gamma $ is a path from $x$ to $y$ in $M$. The above equation defining parallel sections is a first-order ordinary differential equation (cf. local expression above) and so has a unique solution for each possible initial condition. That is, for each vector $v$ in $E_{x}$ there exists a unique parallel section $s$ of $\gamma ^{*}E$ with $s(0)=v$. Define a parallel transport map $\tau _{\gamma }:E_{x}\to E_{y}\,$ by $\tau _{\gamma }(v)=s(1)$. It can be shown that $\tau _{\gamma }$ is a linear isomorphism, with inverse given by following the same procedure with the reversed path $\gamma ^{-}$ from $y$ to $x$. Parallel transport can be used to define the holonomy group of the connection $\nabla $ based at a point $x$ in $M$. This is the subgroup of $\operatorname {GL} (E_{x})$ consisting of all parallel transport maps coming from loops based at $x$: $\mathrm {Hol} _{x}=\{\tau _{\gamma }:\gamma {\text{ is a loop based at }}x\}.\,$ The holonomy group of a connection is intimately related to the curvature of the connection (Ambrose & Singer 1953). The connection can be recovered from its parallel transport operators as follows. If $X\in \Gamma (TM)$ is a vector field and $s\in \Gamma (E)$ a section, at a point $x\in M$ pick an integral curve $\gamma :(-\varepsilon ,\varepsilon )\to M$ for $X$ at $x$. For each $t\in (-\varepsilon ,\varepsilon )$ we will write $\tau _{t}:E_{\gamma (t)}\to E_{x}$ for the parallel transport map traveling along $\gamma $ from $t$ to $0$. In particular for every $t\in (-\varepsilon ,\varepsilon )$, we have $\tau _{t}s(\gamma (t))\in E_{x}$. Then $t\mapsto \tau _{t}s(\gamma (t))$ defines a curve in the vector space $E_{x}$, which may be differentiated. The covariant derivative is recovered as $\nabla _{X}s(x)={\frac {d}{dt}}\left(\tau _{t}s(\gamma (t))\right)_{t=0}.$ This demonstrates that an equivalent definition of a connection is given by specifying all the parallel transport isomorphisms $\tau _{\gamma }$ between fibres of $E$ and taking the above expression as the definition of $\nabla $. Curvature See also: Curvature form The curvature of a connection $\nabla $ on $E\to M$ is a 2-form $F_{\nabla }$ on $M$ with values in the endomorphism bundle $\operatorname {End} (E)=E^{*}\otimes E$. That is, $F_{\nabla }\in \Omega ^{2}(\mathrm {End} (E))=\Gamma (\Lambda ^{2}T^{*}M\otimes \mathrm {End} (E)).$ It is defined by the expression $F_{\nabla }(X,Y)(s)=\nabla _{X}\nabla _{Y}s-\nabla _{Y}\nabla _{X}s-\nabla _{[X,Y]}s$ where $X$ and $Y$ are tangent vector fields on $M$ and $s$ is a section of $E$. One must check that $F_{\nabla }$ is $C^{\infty }(M)$-linear in both $X$ and $Y$ and that it does in fact define a bundle endomorphism of $E$. As mentioned above, the covariant exterior derivative $d_{\nabla }$ need not square to zero when acting on $E$-valued forms. The operator $d_{\nabla }^{2}$ is, however, strictly tensorial (i.e. $C^{\infty }(M)$-linear). This implies that it is induced from a 2-form with values in $\operatorname {End} (E)$. This 2-form is precisely the curvature form given above. For an $E$-valued form $\sigma $ we have $(d_{\nabla })^{2}\sigma =F_{\nabla }\wedge \sigma .$ A flat connection is one whose curvature form vanishes identically. Local form and Cartan's structure equation The curvature form has a local description called Cartan's structure equation.
If $\nabla $ has local form $A$ on some trivialising open subset $U\subset M$ for $E$, then $F_{\nabla }=dA+A\wedge A$ on $U$. To clarify this notation, notice that $A$ is an endomorphism-valued one-form, and so in local coordinates takes the form of a matrix of one-forms. The operation $d$ applies the exterior derivative component-wise to this matrix, and $A\wedge A$ denotes matrix multiplication, where the components are wedged rather than multiplied. In local coordinates $\mathbf {x} =(x^{1},\dots ,x^{n})$ on $M$ over $U$, if the connection form is written $A=A_{\ell }dx^{\ell }=(\Gamma _{\ell i}^{\ \ j})dx^{\ell }$ for a collection of local endomorphisms $A_{\ell }=(\Gamma _{\ell i}^{\ \ j})$, then one has $F_{\nabla }=\sum _{p,q=1}^{n}{\frac {1}{2}}\left({\frac {\partial A_{q}}{\partial x^{p}}}-{\frac {\partial A_{p}}{\partial x^{q}}}+[A_{p},A_{q}]\right)dx^{p}\wedge dx^{q}.$ Further expanding this in terms of the Christoffel symbols $\Gamma _{\ell i}^{\ \ j}$ produces the familiar expression from Riemannian geometry. Namely if $s=s^{i}e_{i}$ is a section of $E$ over $U$, then $F_{\nabla }(s)=\sum _{i,j=1}^{k}\sum _{p,q=1}^{n}{\frac {1}{2}}\left({\frac {\partial \Gamma _{qi}^{\ \ j}}{\partial x^{p}}}-{\frac {\partial \Gamma _{pi}^{\ \ j}}{\partial x^{q}}}+\Gamma _{pr}^{\ \ j}\Gamma _{qi}^{\ \ r}-\Gamma _{qr}^{\ \ j}\Gamma _{pi}^{\ \ r}\right)s^{i}dx^{p}\wedge dx^{q}\otimes e_{j}=\sum _{i,j=1}^{k}\sum _{p,q=1}^{n}R_{pqi}^{\ \ \ j}s^{i}dx^{p}\wedge dx^{q}\otimes e_{j}.$ Here $R=(R_{pqi}^{\ \ \ j})$ is the full curvature tensor of $F_{\nabla }$, and in Riemannian geometry would be identified with the Riemannian curvature tensor. It can be checked that if we define $[A,A]$ using the wedge product on the form components but the commutator (rather than composition) on the endomorphism components, then $A\wedge A={\frac {1}{2}}[A,A]$, and with this alternate notation the Cartan structure equation takes the form $F_{\nabla }=dA+{\frac {1}{2}}[A,A].$ This alternate notation is commonly used in the theory of principal bundle connections, where instead we use a connection form $\omega $, a Lie algebra-valued one-form, for which there is no notion of composition (unlike in the case of endomorphisms), but there is a notion of a Lie bracket. In some references (see for example Madsen & Tornehave 1997) the Cartan structure equation may be written with a minus sign: $F_{\nabla }=dA-A\wedge A.$ This different convention uses an order of matrix multiplication that is different from the standard Einstein notation in the wedge product of matrix-valued one-forms. Bianchi identity A version of the second (differential) Bianchi identity from Riemannian geometry holds for a connection on any vector bundle. Recall that a connection $\nabla $ on a vector bundle $E\to M$ induces an endomorphism connection on $\operatorname {End} (E)$. This endomorphism connection has itself an exterior covariant derivative, which we again denote by $d_{\nabla }$. Since the curvature is a globally defined $\operatorname {End} (E)$-valued two-form, we may apply the exterior covariant derivative to it. The Bianchi identity says that $d_{\nabla }F_{\nabla }=0$. This succinctly captures the complicated tensor formulae of the Bianchi identity in the case of Riemannian manifolds, and one may translate from this equation to the standard Bianchi identities by expanding the connection and curvature in local coordinates.
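Cartan's structure equation can be checked against the commutator definition of the curvature in a concrete case. A sympy sketch on a two-dimensional base with an arbitrary rank-2 connection form $A=A_{x}\,dx+A_{y}\,dy$, so that $F_{\nabla }=dA+A\wedge A$ has the single component $F_{xy}\,dx\wedge dy$ (the matrices below are arbitrary illustrative choices):

```python
import sympy as sp

x, y = sp.symbols('x y')
# connection form A = A_x dx + A_y dy on a rank-2 bundle, in a fixed local frame
A_x = sp.Matrix([[0, x*y], [1, 0]])
A_y = sp.Matrix([[y, 0], [x, x + y]])

def nabla(X, s):
    # covariant derivative along the coordinate direction X in {x, y}
    A = {x: A_x, y: A_y}[X]
    return s.diff(X) + A * s

s = sp.Matrix([sp.Function('s1')(x, y), sp.Function('s2')(x, y)])

# Cartan structure equation, dx^dy component: F_xy = (dA)_xy + (A ^ A)_xy
F_xy = A_y.diff(x) - A_x.diff(y) + A_x * A_y - A_y * A_x

# curvature as the failure of covariant derivatives to commute; the bracket
# term in F(X, Y) vanishes for the coordinate fields d/dx, d/dy
lhs = nabla(x, nabla(y, s)) - nabla(y, nabla(x, s))
assert sp.simplify(lhs - F_xy * s) == sp.zeros(2, 1)
```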
There is no analogue in general of the first (algebraic) Bianchi identity for a general connection, as this exploits the special symmetries of the Levi-Civita connection. Namely, one exploits that the vector bundle indices of $E=TM$ in the curvature tensor $R$ may be swapped with the cotangent bundle indices coming from $T^{*}M$ after using the metric to lower or raise indices. For example this allows the torsion-freeness condition $\Gamma _{\ell i}^{\ \ j}=\Gamma _{i\ell }^{\ \ j}$ to be defined for the Levi-Civita connection, but for a general vector bundle the $\ell $-index refers to the local coordinate basis of $T^{*}M$, and the $i,j$-indices to the local coordinate frame of $E$ and $E^{*}$ coming from the splitting $\mathrm {End} (E)=E^{*}\otimes E$. However, in special circumstances, for example when the rank of $E$ equals the dimension of $M$ and a solder form has been chosen, one can use the soldering to interchange the indices and define a notion of torsion for affine connections which are not the Levi-Civita connection. Gauge transformations See also: Gauge group (mathematics) Given two connections $\nabla _{1},\nabla _{2}$ on a vector bundle $E\to M$, it is natural to ask when they might be considered equivalent. There is a well-defined notion of an automorphism of a vector bundle $E\to M$. A section $u\in \Gamma (\operatorname {End} (E))$ is an automorphism if $u(x)\in \operatorname {End} (E_{x})$ is invertible at every point $x\in M$. Such an automorphism is called a gauge transformation of $E$, and the group of all automorphisms is called the gauge group, often denoted ${\mathcal {G}}$ or $\operatorname {Aut} (E)$. The group of gauge transformations may be neatly characterised as the space of sections of the capital A adjoint bundle $\operatorname {Ad} ({\mathcal {F}}(E))$ of the frame bundle of the vector bundle $E$. This is not to be confused with the lowercase a adjoint bundle $\operatorname {ad} ({\mathcal {F}}(E))$, which is naturally identified with $\operatorname {End} (E)$ itself. The bundle $\operatorname {Ad} {\mathcal {F}}(E)$ is the associated bundle to the principal frame bundle by the conjugation representation of $G=\operatorname {GL} (r)$ on itself, $g\mapsto ghg^{-1}$, and has fibre the same general linear group $\operatorname {GL} (r)$ where $\operatorname {rank} (E)=r$. Notice that despite having the same fibre as the frame bundle ${\mathcal {F}}(E)$ and being associated to it, $\operatorname {Ad} ({\mathcal {F}}(E))$ is not equal to the frame bundle, nor even a principal bundle itself. The gauge group may be equivalently characterised as ${\mathcal {G}}=\Gamma (\operatorname {Ad} {\mathcal {F}}(E)).$ A gauge transformation $u$ of $E$ acts on sections $s\in \Gamma (E)$, and therefore acts on connections by conjugation. Explicitly, if $\nabla $ is a connection on $E$, then one defines $u\cdot \nabla $ by $(u\cdot \nabla )_{X}(s)=u(\nabla _{X}(u^{-1}(s)))$ for $s\in \Gamma (E),X\in \Gamma (TM)$. To check that $u\cdot \nabla $ is a connection, one verifies the product rule ${\begin{aligned}u\cdot \nabla (fs)&=u(\nabla (u^{-1}(fs)))\\&=u(\nabla (fu^{-1}(s)))\\&=u(df\otimes u^{-1}(s))+u(f\nabla (u^{-1}(s)))\\&=df\otimes s+fu\cdot \nabla (s).\end{aligned}}$ It may be checked that this defines a left group action of ${\mathcal {G}}$ on the affine space of all connections ${\mathcal {A}}$.
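Over a one-dimensional base, in a trivialisation $\nabla =d+A\,dx$, this action works out in local form to $A\mapsto uAu^{-1}-(du/dx)u^{-1}$, matching the formula $A_{u}=-d^{\nabla }(u)u^{-1}$ derived next. A minimal C sketch (the sample matrices, section and names are ad hoc, and derivatives are taken by central differences) checks this at a point by comparing $u(\nabla _{x}(u^{-1}s))$ against $s'+{\tilde {A}}s$:

    #include <stdio.h>

    typedef double mat[2][2];
    typedef double vec[2];

    /* Arbitrarily chosen local data: connection matrix A(x), gauge
       transformation u(x) (unipotent, hence invertible), section s(x). */
    static void A(double x, mat m) { m[0][0] = 0;   m[0][1] = x;  m[1][0] = 1; m[1][1] = -x; }
    static void u(double x, mat m) { m[0][0] = 1;   m[0][1] = x;  m[1][0] = 0; m[1][1] = 1;  }
    static void s(double x, vec v) { v[0] = x*x;    v[1] = 1 + x; }

    static void inv2(mat m, mat out) {
        double det = m[0][0]*m[1][1] - m[0][1]*m[1][0];
        out[0][0] =  m[1][1]/det;  out[0][1] = -m[0][1]/det;
        out[1][0] = -m[1][0]/det;  out[1][1] =  m[0][0]/det;
    }
    static void matvec(mat m, vec v, vec out) {
        out[0] = m[0][0]*v[0] + m[0][1]*v[1];
        out[1] = m[1][0]*v[0] + m[1][1]*v[1];
    }
    static void matmul(mat out, mat a, mat b) {
        mat t;   /* temporary, so "out" may alias an input */
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                t[i][j] = a[i][0]*b[0][j] + a[i][1]*b[1][j];
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                out[i][j] = t[i][j];
    }

    /* w(x) = u(x)^{-1} s(x) */
    static void w(double x, vec out) {
        mat U, Ui; vec sv;
        u(x, U); inv2(U, Ui); s(x, sv); matvec(Ui, sv, out);
    }

    int main(void) {
        const double x = 0.7, e = 1e-6;
        mat U, Ui, Am, dU, t1, t2, At;
        vec sv, wv, wp, wm, sp, sm, lhs, rhs, tmp;

        u(x, U); inv2(U, Ui); A(x, Am); s(x, sv); w(x, wv);

        /* LHS: u( d/dx(u^{-1}s) + A u^{-1}s ) */
        w(x + e, wp); w(x - e, wm);
        matvec(Am, wv, tmp);
        for (int i = 0; i < 2; i++) tmp[i] += (wp[i] - wm[i]) / (2*e);
        matvec(U, tmp, lhs);

        /* RHS: s' + (u A u^{-1} - u' u^{-1}) s */
        mat up, um;
        u(x + e, up); u(x - e, um);
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                dU[i][j] = (up[i][j] - um[i][j]) / (2*e);
        matmul(t1, U, Am);  matmul(t1, t1, Ui);   /* u A u^{-1} */
        matmul(t2, dU, Ui);                       /* u' u^{-1}  */
        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 2; j++)
                At[i][j] = t1[i][j] - t2[i][j];
        s(x + e, sp); s(x - e, sm);
        matvec(At, sv, rhs);
        for (int i = 0; i < 2; i++) rhs[i] += (sp[i] - sm[i]) / (2*e);

        printf("u(nabla(u^{-1}s)) = (%f, %f)\n", lhs[0], lhs[1]);
        printf("s' + A_tilde s    = (%f, %f)\n", rhs[0], rhs[1]);
        return 0;
    }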
Since ${\mathcal {A}}$ is an affine space modelled on $\Omega ^{1}(M,\operatorname {End} (E))$, there should exist some endomorphism-valued one-form $A_{u}\in \Omega ^{1}(M,\operatorname {End} (E))$ such that $u\cdot \nabla =\nabla +A_{u}$. Using the definition of the endomorphism connection $\nabla ^{\operatorname {End} (E)}$ induced by $\nabla $, it can be seen that $u\cdot \nabla =\nabla -d^{\nabla }(u)u^{-1}$ which is to say that $A_{u}=-d^{\nabla }(u)u^{-1}$. Two connections are said to be gauge equivalent if they differ by the action of the gauge group, and the quotient space ${\mathcal {B}}={\mathcal {A}}/{\mathcal {G}}$ is the moduli space of all connections on $E$. In general this topological space is neither a smooth manifold nor even a Hausdorff space, but contains inside it the moduli space of Yang–Mills connections on $E$, which is of significant interest in gauge theory and physics. Examples • A classical covariant derivative or affine connection defines a connection on the tangent bundle of M, or more generally on any tensor bundle formed by taking tensor products of the tangent bundle with itself and its dual. • A connection on $\pi :\mathbb {R} ^{2}\times \mathbb {R} \to \mathbb {R} $ can be described explicitly as the operator $\nabla =d+{\begin{bmatrix}f_{11}(x)&f_{12}(x)\\f_{21}(x)&f_{22}(x)\end{bmatrix}}dx$ where $d$ is the exterior derivative evaluated on vector-valued smooth functions and $f_{ij}(x)$ are smooth. A section $a\in \Gamma (\pi )$ may be identified with a map ${\begin{cases}\mathbb {R} \to \mathbb {R} ^{2}\\x\mapsto (a_{1}(x),a_{2}(x))\end{cases}}$ and then $\nabla (a)=\nabla {\begin{bmatrix}a_{1}(x)\\a_{2}(x)\end{bmatrix}}={\begin{bmatrix}{\frac {da_{1}(x)}{dx}}+f_{11}(x)a_{1}(x)+f_{12}(x)a_{2}(x)\\{\frac {da_{2}(x)}{dx}}+f_{21}(x)a_{1}(x)+f_{22}(x)a_{2}(x)\end{bmatrix}}dx$ • If the bundle is endowed with a bundle metric, an inner product on its vector space fibers, a metric connection is defined as a connection that is compatible with the bundle metric. • A Yang–Mills connection is a special metric connection which satisfies the Yang–Mills equations of motion. • A Riemannian connection is a metric connection on the tangent bundle of a Riemannian manifold. • A Levi-Civita connection is a special Riemannian connection: the metric-compatible connection on the tangent bundle that is also torsion-free. It is unique, in the sense that given any Riemannian connection, one can always find one and only one equivalent connection that is torsion-free. "Equivalent" means it is compatible with the same metric, although the curvature tensors may be different; see teleparallelism. The difference between a Riemannian connection and the corresponding Levi-Civita connection is given by the contorsion tensor. • The exterior derivative is a flat connection on $E=M\times \mathbb {R} $ (the trivial line bundle over M). • More generally, there is a canonical flat connection on any flat vector bundle (i.e. a vector bundle whose transition functions are all constant) which is given by the exterior derivative in any trivialization. See also • D-module • Connection (mathematics) References • Chern, Shiing-Shen (1951), Topics in Differential Geometry, Institute for Advanced Study, mimeographed lecture notes • Darling, R. W. R.
(1994), Differential Forms and Connections, Cambridge, UK: Cambridge University Press, Bibcode:1994dfc..book.....D, ISBN 0-521-46800-0 • Kobayashi, Shoshichi; Nomizu, Katsumi (1996) [1963], Foundations of Differential Geometry, Vol. 1, Wiley Classics Library, New York: Wiley Interscience, ISBN 0-471-15733-3 • Koszul, J. L. (1950), "Homologie et cohomologie des algèbres de Lie", Bulletin de la Société Mathématique de France, 78: 65–127, doi:10.24033/bsmf.1410 • Wells, R. O. (1973), Differential analysis on complex manifolds, Springer-Verlag, ISBN 0-387-90419-0 • Ambrose, W.; Singer, I. M. (1953), "A theorem on holonomy", Transactions of the American Mathematical Society, 75 (3): 428–443, doi:10.2307/1990721, JSTOR 1990721 • Donaldson, S. K.; Kronheimer, P. B. (1997), The Geometry of Four-Manifolds, Oxford University Press • Tu, L. W. (2017), Differential Geometry: Connections, Curvature, and Characteristic Classes, Graduate Texts in Mathematics, Vol. 275, Springer • Taubes, C. H. (2011), Differential Geometry: Bundles, Connections, Metrics and Curvature, Vol. 23, Oxford University Press • Lee, J. M. (2018), Introduction to Riemannian Manifolds, Springer International Publishing • Madsen, I. H.; Tornehave, J. (1997), From calculus to cohomology: de Rham cohomology and characteristic classes, Cambridge University Press
Group-stack In algebraic geometry, a group-stack is an algebraic stack whose categories of points have group structures or even groupoid structures in a compatible way.[1] It generalizes a group scheme, which is a scheme whose sets of points have group structures in a compatible way. Examples • A group scheme is a group-stack. More generally, a group algebraic-space, an algebraic-space analog of a group scheme, is a group-stack. • Over a field k, a vector bundle stack ${\mathcal {V}}$ on a Deligne–Mumford stack X is a group-stack such that there is a vector bundle V over k on X and a presentation $V\to {\mathcal {V}}$. It has an action by the affine line $\mathbb {A} ^{1}$ corresponding to scalar multiplication. • A Picard stack is an example of a group-stack (or groupoid-stack). Actions of group-stacks The definition of a group action of a group-stack is a bit tricky. First, given an algebraic stack X and a group scheme G on a base scheme S, a right action of G on X consists of 1. a morphism $\sigma :X\times G\to X$, 2. (associativity) a natural isomorphism $\sigma \circ (m\times 1_{X}){\overset {\sim }{\to }}\sigma \circ (1_{X}\times \sigma )$, where m is the multiplication on G, 3. (identity) a natural isomorphism $1_{X}{\overset {\sim }{\to }}\sigma \circ (1_{X}\times e)$, where $e:S\to G$ is the identity section of G, that satisfy the typical compatibility conditions. If, more generally, G is a group-stack, one then extends the above using local presentations. Notes 1. "Ag.algebraic geometry - Are Picard stacks group objects in the category of algebraic stacks". References • Behrend, K.; Fantechi, B. (1997-03-01). "The intrinsic normal cone". Inventiones Mathematicae. 128 (1): 45–88. doi:10.1007/s002220050136. ISSN 0020-9910.
Vector bundles on algebraic curves In mathematics, vector bundles on algebraic curves may be studied as holomorphic vector bundles on compact Riemann surfaces, which is the classical approach, or as locally free sheaves on algebraic curves C in a more general, algebraic setting (which can for example admit singular points). Some foundational results on classification were known in the 1950s. The result of Grothendieck (1957), that holomorphic vector bundles on the Riemann sphere are sums of line bundles, is now often called the Birkhoff–Grothendieck theorem, since it is implicit in much earlier work of Birkhoff (1909) on the Riemann–Hilbert problem. Atiyah (1957) gave the classification of vector bundles on elliptic curves. The Riemann–Roch theorem for vector bundles was proved by Weil (1938), before the 'vector bundle' concept had any official status, although the associated ruled surfaces were classical objects; see Hirzebruch–Riemann–Roch theorem for his result. He was seeking a generalization of the Jacobian variety, by passing from holomorphic line bundles to higher rank. This idea would prove fruitful in terms of moduli spaces of vector bundles, following on from the work in the 1960s on geometric invariant theory. See also • Hitchin system References • Atiyah, M. (1957). "Vector bundles over an elliptic curve". Proc. London Math. Soc. VII: 414–452. doi:10.1112/plms/s3-7.1.414. Also in Collected Works vol. I • Birkhoff, George David (1909). "Singular points of ordinary linear differential equations". Transactions of the American Mathematical Society. 10 (4): 436–470. doi:10.2307/1988594. ISSN 0002-9947. JFM 40.0352.02. JSTOR 1988594. • Grothendieck, A. (1957). "Sur la classification des fibrés holomorphes sur la sphère de Riemann". Amer. J. Math. 79 (1): 121–138. doi:10.2307/2372388. JSTOR 2372388. • Weil, André (1938). "Zur algebraischen Theorie der algebraischen Funktionen". Journal für die reine und angewandte Mathematik. 179: 129–133. doi:10.1515/crll.1938.179.129.
Vector calculus identities The following are important identities involving derivatives and integrals in vector calculus. See also: Vector algebra relations Operator notation Gradient Main article: Gradient For a function $f(x,y,z)$ in three-dimensional Cartesian coordinate variables, the gradient is the vector field: $\operatorname {grad} (f)=\nabla f={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x}},\ {\frac {\partial }{\partial y}},\ {\frac {\partial }{\partial z}}\end{pmatrix}}f={\frac {\partial f}{\partial x}}\mathbf {i} +{\frac {\partial f}{\partial y}}\mathbf {j} +{\frac {\partial f}{\partial z}}\mathbf {k} $ where i, j, k are the standard unit vectors for the x, y, z-axes. More generally, for a function of n variables $\psi (x_{1},\ldots ,x_{n})$, also called a scalar field, the gradient is the vector field: $\nabla \psi ={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x_{1}}},\ldots ,{\frac {\partial }{\partial x_{n}}}\end{pmatrix}}\psi ={\frac {\partial \psi }{\partial x_{1}}}\mathbf {e} _{1}+\dots +{\frac {\partial \psi }{\partial x_{n}}}\mathbf {e} _{n}.$ where $\mathbf {e} _{i}$ are orthogonal unit vectors in arbitrary directions. As the name implies, the gradient is proportional to and points in the direction of the function's most rapid (positive) change.
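As a concrete check of this definition, a minimal C sketch (the sample function, evaluation point and step size are arbitrary choices) approximates the gradient of $f(x,y,z)=x^{2}+y^{2}+z^{2}$ by central differences and compares it with the exact $\nabla f=(2x,2y,2z)$:

    #include <stdio.h>

    static double f(double x, double y, double z) { return x*x + y*y + z*z; }

    int main(void) {
        const double x = 1.0, y = 2.0, z = 3.0, e = 1e-5;
        /* Central differences: df/dx ~ (f(x+e,..) - f(x-e,..)) / 2e, etc. */
        double gx = (f(x+e, y, z) - f(x-e, y, z)) / (2*e);
        double gy = (f(x, y+e, z) - f(x, y-e, z)) / (2*e);
        double gz = (f(x, y, z+e) - f(x, y, z-e)) / (2*e);
        printf("numeric grad f = (%f, %f, %f)\n", gx, gy, gz);
        printf("exact   grad f = (%f, %f, %f)\n", 2*x, 2*y, 2*z);
        return 0;
    }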
For a vector field $\mathbf {A} =\left(A_{1},\ldots ,A_{n}\right)$, also called a tensor field of order 1, the gradient or total derivative is the n × n Jacobian matrix: $\mathbf {J} _{\mathbf {A} }=d\mathbf {A} =(\nabla \!\mathbf {A} )^{\textsf {T}}=\left({\frac {\partial A_{i}}{\partial x_{j}}}\right)_{\!ij}.$ For a tensor field $\mathbf {T} $ of any order k, the gradient $\operatorname {grad} (\mathbf {T} )=d\mathbf {T} =(\nabla \mathbf {T} )^{\textsf {T}}$ is a tensor field of order k + 1. For a tensor field $\mathbf {T} $ of order k > 0, the tensor field $\nabla \mathbf {T} $ of order k + 1 is defined by the recursive relation $(\nabla \mathbf {T} )\cdot \mathbf {C} =\nabla (\mathbf {T} \cdot \mathbf {C} )$ where $\mathbf {C} $ is an arbitrary constant vector. Divergence Main article: Divergence In Cartesian coordinates, the divergence of a continuously differentiable vector field $\mathbf {F} =F_{x}\mathbf {i} +F_{y}\mathbf {j} +F_{z}\mathbf {k} $ is the scalar-valued function: $\operatorname {div} \mathbf {F} =\nabla \cdot \mathbf {F} ={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x}},\ {\frac {\partial }{\partial y}},\ {\frac {\partial }{\partial z}}\end{pmatrix}}\cdot {\begin{pmatrix}F_{x},\ F_{y},\ F_{z}\end{pmatrix}}={\frac {\partial F_{x}}{\partial x}}+{\frac {\partial F_{y}}{\partial y}}+{\frac {\partial F_{z}}{\partial z}}.$ As the name implies, the divergence is a measure of how much vectors are diverging. The divergence of a tensor field $\mathbf {T} $ of non-zero order k is written as $\operatorname {div} (\mathbf {T} )=\nabla \cdot \mathbf {T} $, a contraction to a tensor field of order k − 1. Specifically, the divergence of a vector is a scalar. The divergence of a higher order tensor field may be found by decomposing the tensor field into a sum of outer products and using the identity, $\nabla \cdot \left(\mathbf {A} \otimes \mathbf {T} \right)=\mathbf {T} (\nabla \cdot \mathbf {A} )+(\mathbf {A} \cdot \nabla )\mathbf {T} $ where $\mathbf {A} \cdot \nabla $ is the directional derivative in the direction of $\mathbf {A} $ multiplied by its magnitude. Specifically, for the outer product of two vectors, $\nabla \cdot \left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)=\mathbf {B} (\nabla \cdot \mathbf {A} )+(\mathbf {A} \cdot \nabla )\mathbf {B} .$ For a tensor field $\mathbf {T} $ of order k > 1, the tensor field $\nabla \cdot \mathbf {T} $ of order k − 1 is defined by the recursive relation $(\nabla \cdot \mathbf {T} )\cdot \mathbf {C} =\nabla \cdot (\mathbf {T} \cdot \mathbf {C} )$ where $\mathbf {C} $ is an arbitrary constant vector.
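A minimal C sketch of this definition, for the sample field $\mathbf {F} =(xy,yz,zx)$ whose divergence is exactly $y+z+x$ (the field, point and step size are arbitrary choices):

    #include <stdio.h>

    /* Sample field F = (x*y, y*z, z*x); exactly, div F = y + z + x. */
    static double Fx(double x, double y, double z) { (void)z; return x*y; }
    static double Fy(double x, double y, double z) { (void)x; return y*z; }
    static double Fz(double x, double y, double z) { (void)y; return z*x; }

    int main(void) {
        const double x = 1, y = 2, z = 3, e = 1e-5;
        double d = (Fx(x+e, y, z) - Fx(x-e, y, z)) / (2*e)
                 + (Fy(x, y+e, z) - Fy(x, y-e, z)) / (2*e)
                 + (Fz(x, y, z+e) - Fz(x, y, z-e)) / (2*e);
        printf("numeric div F = %f, exact = %f\n", d, x + y + z);
        return 0;
    }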
Curl Main article: Curl (mathematics) In Cartesian coordinates, for $\mathbf {F} =F_{x}\mathbf {i} +F_{y}\mathbf {j} +F_{z}\mathbf {k} $ the curl is the vector field: ${\begin{aligned}\operatorname {curl} \mathbf {F} &=\nabla \times \mathbf {F} ={\begin{pmatrix}\displaystyle {\frac {\partial }{\partial x}},\ {\frac {\partial }{\partial y}},\ {\frac {\partial }{\partial z}}\end{pmatrix}}\times {\begin{pmatrix}F_{x},\ F_{y},\ F_{z}\end{pmatrix}}={\begin{vmatrix}\mathbf {i} &\mathbf {j} &\mathbf {k} \\{\frac {\partial }{\partial x}}&{\frac {\partial }{\partial y}}&{\frac {\partial }{\partial z}}\\F_{x}&F_{y}&F_{z}\end{vmatrix}}\\[1em]&=\left({\frac {\partial F_{z}}{\partial y}}-{\frac {\partial F_{y}}{\partial z}}\right)\mathbf {i} +\left({\frac {\partial F_{x}}{\partial z}}-{\frac {\partial F_{z}}{\partial x}}\right)\mathbf {j} +\left({\frac {\partial F_{y}}{\partial x}}-{\frac {\partial F_{x}}{\partial y}}\right)\mathbf {k} \end{aligned}}$ where i, j, and k are the unit vectors for the x-, y-, and z-axes, respectively. As the name implies, the curl is a measure of how much nearby vectors tend in a circular direction. In Einstein notation, the vector field $\mathbf {F} ={\begin{pmatrix}F_{1},\ F_{2},\ F_{3}\end{pmatrix}}$ has curl given by: $\nabla \times \mathbf {F} =\varepsilon ^{ijk}\mathbf {e} _{i}{\frac {\partial F_{k}}{\partial x_{j}}}$ where $\varepsilon $ = ±1 or 0 is the Levi-Civita parity symbol. For a tensor field $\mathbf {T} $ of order k > 1, the tensor field $\nabla \times \mathbf {T} $ of order k is defined by the recursive relation $(\nabla \times \mathbf {T} )\cdot \mathbf {C} =\nabla \times (\mathbf {T} \cdot \mathbf {C} )$ where $\mathbf {C} $ is an arbitrary constant vector. A tensor field of order greater than one may be decomposed into a sum of outer products, and then the following identity may be used: $\nabla \times \left(\mathbf {A} \otimes \mathbf {T} \right)=(\nabla \times \mathbf {A} )\otimes \mathbf {T} -\mathbf {A} \times (\nabla \mathbf {T} ).$ Specifically, for the outer product of two vectors, $\nabla \times \left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)=(\nabla \times \mathbf {A} )\mathbf {B} ^{\textsf {T}}-\mathbf {A} \times (\nabla \mathbf {B} ).$ Laplacian Main article: Laplace operator In Cartesian coordinates, the Laplacian of a function $f(x,y,z)$ is $\Delta f=\nabla ^{2}\!f=(\nabla \cdot \nabla )f={\frac {\partial ^{2}\!f}{\partial x^{2}}}+{\frac {\partial ^{2}\!f}{\partial y^{2}}}+{\frac {\partial ^{2}\!f}{\partial z^{2}}}.$ The Laplacian is a measure of how much a function is changing over a small sphere centered at the point. When the Laplacian is equal to 0, the function is called a harmonic function. That is, $\Delta f=0.$ For a tensor field, $\mathbf {T} $, the Laplacian is generally written as: $\Delta \mathbf {T} =\nabla ^{2}\mathbf {T} =(\nabla \cdot \nabla )\mathbf {T} $ and is a tensor field of the same order. For a tensor field $\mathbf {T} $ of order k > 0, the tensor field $\nabla ^{2}\mathbf {T} $ of order k is defined by the recursive relation $\left(\nabla ^{2}\mathbf {T} \right)\cdot \mathbf {C} =\nabla ^{2}(\mathbf {T} \cdot \mathbf {C} )$ where $\mathbf {C} $ is an arbitrary constant vector.
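Both definitions can be checked the same way. For the rigid-rotation field $\mathbf {F} =(-y,x,0)$ the curl is exactly $(0,0,2)$, and for $f=x^{2}+y^{2}+z^{2}$ the Laplacian is exactly 6; the following C sketch (sample fields, point and step size arbitrary) recovers both numerically:

    #include <stdio.h>

    /* Sample field F = (-y, x, 0); exactly, curl F = (0, 0, 2). */
    static double Fx(double x, double y, double z) { (void)x; (void)z; return -y; }
    static double Fy(double x, double y, double z) { (void)y; (void)z; return  x; }
    static double Fz(double x, double y, double z) { (void)x; (void)y; (void)z; return 0; }
    static double f (double x, double y, double z) { return x*x + y*y + z*z; } /* lap f = 6 */

    int main(void) {
        const double x = 1, y = 2, z = 3, e = 1e-4;
        /* curl F = (dFz/dy - dFy/dz, dFx/dz - dFz/dx, dFy/dx - dFx/dy) */
        double cx = (Fz(x,y+e,z)-Fz(x,y-e,z))/(2*e) - (Fy(x,y,z+e)-Fy(x,y,z-e))/(2*e);
        double cy = (Fx(x,y,z+e)-Fx(x,y,z-e))/(2*e) - (Fz(x+e,y,z)-Fz(x-e,y,z))/(2*e);
        double cz = (Fy(x+e,y,z)-Fy(x-e,y,z))/(2*e) - (Fx(x,y+e,z)-Fx(x,y-e,z))/(2*e);
        /* Laplacian by the standard second-difference stencil */
        double lap = (f(x+e,y,z) - 2*f(x,y,z) + f(x-e,y,z))/(e*e)
                   + (f(x,y+e,z) - 2*f(x,y,z) + f(x,y-e,z))/(e*e)
                   + (f(x,y,z+e) - 2*f(x,y,z) + f(x,y,z-e))/(e*e);
        printf("curl F = (%f, %f, %f)   expected (0, 0, 2)\n", cx, cy, cz);
        printf("lap  f = %f             expected 6\n", lap);
        return 0;
    }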
Special notations In Feynman subscript notation, $\nabla _{\mathbf {B} }\!\left(\mathbf {A{\cdot }B} \right)=\mathbf {A} {\times }\!\left(\nabla {\times }\mathbf {B} \right)+\left(\mathbf {A} {\cdot }\nabla \right)\mathbf {B} $ where the notation ∇B means the subscripted gradient operates on only the factor B.[1][2] Less general but similar is the Hestenes overdot notation in geometric algebra.[3] The above identity is then expressed as: ${\dot {\nabla }}\left(\mathbf {A} {\cdot }{\dot {\mathbf {B} }}\right)=\mathbf {A} {\times }\!\left(\nabla {\times }\mathbf {B} \right)+\left(\mathbf {A} {\cdot }\nabla \right)\mathbf {B} $ where overdots define the scope of the vector derivative. The dotted vector, in this case B, is differentiated, while the (undotted) A is held constant. For the remainder of this article, Feynman subscript notation will be used where appropriate. First derivative identities For scalar fields $\psi $, $\phi $ and vector fields $\mathbf {A} $, $\mathbf {B} $, we have the following derivative identities. Distributive properties ${\begin{aligned}\nabla (\psi +\phi )&=\nabla \psi +\nabla \phi \\\nabla (\mathbf {A} +\mathbf {B} )&=\nabla \mathbf {A} +\nabla \mathbf {B} \\\nabla \cdot (\mathbf {A} +\mathbf {B} )&=\nabla \cdot \mathbf {A} +\nabla \cdot \mathbf {B} \\\nabla \times (\mathbf {A} +\mathbf {B} )&=\nabla \times \mathbf {A} +\nabla \times \mathbf {B} \end{aligned}}$ First derivative associative properties ${\begin{aligned}(\mathbf {A} \cdot \nabla )\psi &=\mathbf {A} \cdot (\nabla \psi )\\(\mathbf {A} \cdot \nabla )\mathbf {B} &=\mathbf {A} \cdot (\nabla \mathbf {B} )\\(\mathbf {A} \times \nabla )\psi &=\mathbf {A} \times (\nabla \psi )\\(\mathbf {A} \times \nabla )\mathbf {B} &=\mathbf {A} \times (\nabla \mathbf {B} )\end{aligned}}$ Product rule for multiplication by a scalar We have the following generalizations of the product rule in single variable calculus. 
${\begin{aligned}\nabla (\psi \phi )&=\phi \,\nabla \psi +\psi \,\nabla \phi \\\nabla (\psi \mathbf {A} )&=(\nabla \psi )\mathbf {A} ^{\textsf {T}}+\psi \nabla \mathbf {A} \ =\ \nabla \psi \otimes \mathbf {A} +\psi \,\nabla \mathbf {A} \\\nabla \cdot (\psi \mathbf {A} )&=\psi \,\nabla {\cdot }\mathbf {A} +(\nabla \psi )\,{\cdot }\mathbf {A} \\\nabla {\times }(\psi \mathbf {A} )&=\psi \,\nabla {\times }\mathbf {A} +(\nabla \psi ){\times }\mathbf {A} \\\nabla ^{2}(\psi \phi )&=\psi \,\nabla ^{2\!}\phi +2\,\nabla \!\psi \cdot \!\nabla \phi +\phi \,\nabla ^{2\!}\psi \end{aligned}}$ Quotient rule for division by a scalar ${\begin{aligned}\nabla \left({\frac {\psi }{\phi }}\right)&={\frac {\phi \,\nabla \psi -\psi \,\nabla \phi }{\phi ^{2}}}\\[1em]\nabla \left({\frac {\mathbf {A} }{\phi }}\right)&={\frac {\phi \,\nabla \mathbf {A} -\nabla \phi \otimes \mathbf {A} }{\phi ^{2}}}\\[1em]\nabla \cdot \left({\frac {\mathbf {A} }{\phi }}\right)&={\frac {\phi \,\nabla {\cdot }\mathbf {A} -\nabla \!\phi \cdot \mathbf {A} }{\phi ^{2}}}\\[1em]\nabla \times \left({\frac {\mathbf {A} }{\phi }}\right)&={\frac {\phi \,\nabla {\times }\mathbf {A} -\nabla \!\phi \,{\times }\,\mathbf {A} }{\phi ^{2}}}\\[1em]\nabla ^{2}\left({\frac {\psi }{\phi }}\right)&={\frac {\phi \,\nabla ^{2\!}\psi -2\,\phi \,\nabla \!\left({\frac {\psi }{\phi }}\right)\cdot \!\nabla \phi -\psi \,\nabla ^{2\!}\phi }{\phi ^{2}}}\end{aligned}}$ Chain rule Let $f(x)$ be a one-variable function from scalars to scalars, $\mathbf {r} (t)=(x_{1}(t),\ldots ,x_{n}(t))$ a parametrized curve, $\phi \!:\mathbb {R} ^{n}\to \mathbb {R} $ a function from vectors to scalars, and $\mathbf {A} \!:\mathbb {R} ^{n}\to \mathbb {R} ^{n}$ a vector field. We have the following special cases of the multi-variable chain rule. ${\begin{aligned}\nabla (f\circ \phi )&=\left(f'\circ \phi \right)\nabla \phi \\(\mathbf {r} \circ f)'&=(\mathbf {r} '\circ f)f'\\(\phi \circ \mathbf {r} )'&=(\nabla \phi \circ \mathbf {r} )\cdot \mathbf {r} '\\(\mathbf {A} \circ \mathbf {r} )'&=\mathbf {r} '\cdot (\nabla \mathbf {A} \circ \mathbf {r} )\\\nabla (\phi \circ \mathbf {A} )&=(\nabla \mathbf {A} )\cdot (\nabla \phi \circ \mathbf {A} )\\\nabla \cdot (\mathbf {r} \circ \phi )&=\nabla \phi \cdot (\mathbf {r} '\circ \phi )\\\nabla \times (\mathbf {r} \circ \phi )&=\nabla \phi \times (\mathbf {r} '\circ \phi )\end{aligned}}$ For a vector transformation $\mathbf {x} \!:\mathbb {R} ^{n}\to \mathbb {R} ^{n}$ we have: $\nabla \cdot (\mathbf {A} \circ \mathbf {x} )=\mathrm {tr} \left((\nabla \mathbf {x} )\cdot (\nabla \mathbf {A} \circ \mathbf {x} )\right)$ Here we take the trace of the dot product of two second-order tensors, which corresponds to the product of their matrices. Dot product rule ${\begin{aligned}\nabla (\mathbf {A} \cdot \mathbf {B} )&\ =\ (\mathbf {A} \cdot \nabla )\mathbf {B} \,+\,(\mathbf {B} \cdot \nabla )\mathbf {A} \,+\,\mathbf {A} {\times }(\nabla {\times }\mathbf {B} )\,+\,\mathbf {B} {\times }(\nabla {\times }\mathbf {A} )\\&\ =\ \mathbf {A} \cdot \mathbf {J} _{\mathbf {B} }+\mathbf {B} \cdot \mathbf {J} _{\mathbf {A} }\ =\ (\nabla \mathbf {B} )\cdot \mathbf {A} \,+\,(\nabla \mathbf {A} )\cdot \mathbf {B} \end{aligned}}$ where $\mathbf {J} _{\mathbf {A} }=(\nabla \!\mathbf {A} )^{\textsf {T}}=(\partial A_{i}/\partial x_{j})_{ij}$ denotes the Jacobian matrix of the vector field $\mathbf {A} =(A_{1},\ldots ,A_{n})$. 
Alternatively, using Feynman subscript notation, $\nabla (\mathbf {A} \cdot \mathbf {B} )=\nabla _{\mathbf {A} }(\mathbf {A} \cdot \mathbf {B} )+\nabla _{\mathbf {B} }(\mathbf {A} \cdot \mathbf {B} )\ .$ See these notes.[4] As a special case, when A = B, ${\tfrac {1}{2}}\nabla \left(\mathbf {A} \cdot \mathbf {A} \right)\ =\ \mathbf {A} \cdot \mathbf {J} _{\mathbf {A} }\ =\ (\nabla \mathbf {A} )\cdot \mathbf {A} \ =\ (\mathbf {A} {\cdot }\nabla )\mathbf {A} \,+\,\mathbf {A} {\times }(\nabla {\times }\mathbf {A} )\ =\ A\nabla A.$ The generalization of the dot product formula to Riemannian manifolds is a defining property of a Riemannian connection, which differentiates a vector field to give a vector-valued 1-form. Cross product rule ${\begin{aligned}\nabla \cdot (\mathbf {A} \times \mathbf {B} )&\ =\ (\nabla {\times }\mathbf {A} )\cdot \mathbf {B} \,-\,\mathbf {A} \cdot (\nabla {\times }\mathbf {B} )\\[5pt]\nabla \times (\mathbf {A} \times \mathbf {B} )&\ =\ \mathbf {A} (\nabla {\cdot }\mathbf {B} )\,-\,\mathbf {B} (\nabla {\cdot }\mathbf {A} )\,+\,(\mathbf {B} {\cdot }\nabla )\mathbf {A} \,-\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \\[2pt]&\ =\ \mathbf {A} (\nabla {\cdot }\mathbf {B} )\,+\,(\mathbf {B} {\cdot }\nabla )\mathbf {A} \,-\,(\mathbf {B} (\nabla {\cdot }\mathbf {A} )\,+\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} )\\[2pt]&\ =\ \nabla {\cdot }\left(\mathbf {B} \mathbf {A} ^{\textsf {T}}\right)\,-\,\nabla {\cdot }\left(\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)\\[2pt]&\ =\ \nabla {\cdot }\left(\mathbf {B} \mathbf {A} ^{\textsf {T}}\,-\,\mathbf {A} \mathbf {B} ^{\textsf {T}}\right)\\[5pt]\mathbf {A} \times (\nabla \times \mathbf {B} )&\ =\ \nabla _{\mathbf {B} }(\mathbf {A} {\cdot }\mathbf {B} )\,-\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \\[2pt]&\ =\ \mathbf {A} \cdot \mathbf {J} _{\mathbf {B} }\,-\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \\[2pt]&\ =\ (\nabla \mathbf {B} )\cdot \mathbf {A} \,-\,\mathbf {A} \cdot (\nabla \mathbf {B} )\\[2pt]&\ =\ \mathbf {A} \cdot (\mathbf {J} _{\mathbf {B} }\,-\,\mathbf {J} _{\mathbf {B} }^{\textsf {T}})\\[5pt](\mathbf {A} \times \nabla )\times \mathbf {B} &\ =\ (\nabla \mathbf {B} )\cdot \mathbf {A} \,-\,\mathbf {A} (\nabla {\cdot }\mathbf {B} )\\[2pt]&\ =\ \mathbf {A} \times (\nabla \times \mathbf {B} )\,+\,(\mathbf {A} {\cdot }\nabla )\mathbf {B} \,-\,\mathbf {A} (\nabla {\cdot }\mathbf {B} )\\[5pt](\mathbf {A} \times \nabla )\cdot \mathbf {B} &\ =\ \mathbf {A} \cdot (\nabla {\times }\mathbf {B} )\end{aligned}}$ Note that the matrix $\mathbf {J} _{\mathbf {B} }\,-\,\mathbf {J} _{\mathbf {B} }^{\textsf {T}}$ is antisymmetric. Second derivative identities Divergence of curl is zero The divergence of the curl of any continuously twice-differentiable vector field A is always zero: $\nabla \cdot (\nabla \times \mathbf {A} )=0$ This is a special case of the vanishing of the square of the exterior derivative in the De Rham chain complex. Divergence of gradient is Laplacian The Laplacian of a scalar field is the divergence of its gradient: $\Delta \psi =\nabla ^{2}\psi =\nabla \cdot (\nabla \psi )$ The result is a scalar quantity. Divergence of divergence is not defined Divergence of a vector field A is a scalar, and you cannot take the divergence of a scalar quantity. 
Therefore: $\nabla \cdot (\nabla \cdot \mathbf {A} ){\text{ is undefined}}$ Curl of gradient is zero The curl of the gradient of any continuously twice-differentiable scalar field $\varphi $ (i.e., differentiability class $C^{2}$) is always the zero vector: $\nabla \times (\nabla \varphi )=\mathbf {0} $ It can be easily proved by expressing $\nabla \times (\nabla \varphi )$ in a Cartesian coordinate system with Schwarz's theorem (also called Clairaut's theorem on equality of mixed partials). This result is a special case of the vanishing of the square of the exterior derivative in the De Rham chain complex. Curl of curl $\nabla \times \left(\nabla \times \mathbf {A} \right)\ =\ \nabla (\nabla {\cdot }\mathbf {A} )\,-\,\nabla ^{2\!}\mathbf {A} $ Here ∇2 is the vector Laplacian operating on the vector field A. Curl of divergence is not defined The divergence of a vector field A is a scalar, and you cannot take the curl of a scalar quantity. Therefore $\nabla \times (\nabla \cdot \mathbf {A} ){\text{ is undefined}}$ Second derivative associative properties ${\begin{aligned}(\nabla \cdot \nabla )\psi &=\nabla \cdot (\nabla \psi )=\nabla ^{2}\psi \\(\nabla \cdot \nabla )\mathbf {A} &=\nabla \cdot (\nabla \mathbf {A} )=\nabla ^{2}\mathbf {A} \\(\nabla \times \nabla )\psi &=\nabla \times (\nabla \psi )=\mathbf {0} \\(\nabla \times \nabla )\mathbf {A} &=\nabla \times (\nabla \mathbf {A} )=\mathbf {0} \end{aligned}}$ A mnemonic The figure to the right is a mnemonic for some of these identities. The abbreviations used are: • D: divergence, • C: curl, • G: gradient, • L: Laplacian, • CC: curl of curl. Each arrow is labeled with the result of an identity, specifically, the result of applying the operator at the arrow's tail to the operator at its head. The blue circle in the middle means curl of curl exists, whereas the other two red circles (dashed) mean that DD and GG do not exist.
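The identities $\nabla \times (\nabla \psi )=\mathbf {0} $ and $\nabla \cdot (\nabla \times \mathbf {A} )=0$ can also be observed numerically with nested central differences; in the C sketch below (sample fields and evaluation point are arbitrary) both quantities come out at the level of the discretisation error rather than exactly zero:

    #include <stdio.h>

    /* Arbitrarily chosen smooth sample fields. */
    static double psi(double x, double y, double z) { return x*x*y + y*z*z*z; }
    static double Axf(double x, double y, double z) { (void)x; return y*y*z; }
    static double Ayf(double x, double y, double z) { (void)y; return x*z*z; }
    static double Azf(double x, double y, double z) { (void)z; return x*x*y; }

    #define E 1e-3
    /* Partial derivative of g along axis k (0 = x, 1 = y, 2 = z),
       by central differences.                                      */
    static double d(double (*g)(double, double, double), int k,
                    double x, double y, double z) {
        double h[3] = {0, 0, 0};
        h[k] = E;
        return (g(x + h[0], y + h[1], z + h[2])
              - g(x - h[0], y - h[1], z - h[2])) / (2*E);
    }

    /* Components of grad psi and of curl A, wrapped as functions so
       that they can be differentiated once more.                    */
    static double gx(double x, double y, double z) { return d(psi, 0, x, y, z); }
    static double gy(double x, double y, double z) { return d(psi, 1, x, y, z); }
    static double gz(double x, double y, double z) { return d(psi, 2, x, y, z); }
    static double cx(double x, double y, double z) { return d(Azf, 1, x, y, z) - d(Ayf, 2, x, y, z); }
    static double cy(double x, double y, double z) { return d(Axf, 2, x, y, z) - d(Azf, 0, x, y, z); }
    static double cz(double x, double y, double z) { return d(Ayf, 0, x, y, z) - d(Axf, 1, x, y, z); }

    int main(void) {
        const double x = 1.1, y = 0.7, z = -0.4;
        printf("curl grad psi = (%.2e, %.2e, %.2e)   (~0)\n",
               d(gz, 1, x, y, z) - d(gy, 2, x, y, z),
               d(gx, 2, x, y, z) - d(gz, 0, x, y, z),
               d(gy, 0, x, y, z) - d(gx, 1, x, y, z));
        printf("div curl A    = %.2e                 (~0)\n",
               d(cx, 0, x, y, z) + d(cy, 1, x, y, z) + d(cz, 2, x, y, z));
        return 0;
    }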
Summary of important identities Gradient • $\nabla (\psi +\phi )=\nabla \psi +\nabla \phi $ • $\nabla (\psi \phi )=\phi \nabla \psi +\psi \nabla \phi $ • $\nabla (\psi \mathbf {A} )=\nabla \psi \otimes \mathbf {A} +\psi \nabla \mathbf {A} $ • $\nabla (\mathbf {A} \cdot \mathbf {B} )=(\mathbf {A} \cdot \nabla )\mathbf {B} +(\mathbf {B} \cdot \nabla )\mathbf {A} +\mathbf {A} \times (\nabla \times \mathbf {B} )+\mathbf {B} \times (\nabla \times \mathbf {A} )$ Divergence • $\nabla \cdot (\mathbf {A} +\mathbf {B} )=\nabla \cdot \mathbf {A} +\nabla \cdot \mathbf {B} $ • $\nabla \cdot \left(\psi \mathbf {A} \right)=\psi \nabla \cdot \mathbf {A} +\mathbf {A} \cdot \nabla \psi $ • $\nabla \cdot \left(\mathbf {A} \times \mathbf {B} \right)=(\nabla \times \mathbf {A} )\cdot \mathbf {B} -(\nabla \times \mathbf {B} )\cdot \mathbf {A} $ Curl • $\nabla \times (\mathbf {A} +\mathbf {B} )=\nabla \times \mathbf {A} +\nabla \times \mathbf {B} $ • $\nabla \times \left(\psi \mathbf {A} \right)=\psi \,(\nabla \times \mathbf {A} )-(\mathbf {A} \times \nabla )\psi =\psi \,(\nabla \times \mathbf {A} )+(\nabla \psi )\times \mathbf {A} $ • $\nabla \times \left(\psi \nabla \phi \right)=\nabla \psi \times \nabla \phi $ • $\nabla \times \left(\mathbf {A} \times \mathbf {B} \right)=\mathbf {A} \left(\nabla \cdot \mathbf {B} \right)-\mathbf {B} \left(\nabla \cdot \mathbf {A} \right)+\left(\mathbf {B} \cdot \nabla \right)\mathbf {A} -\left(\mathbf {A} \cdot \nabla \right)\mathbf {B} $[5] Vector dot Del Operator • $(\mathbf {A} \cdot \nabla )\mathbf {B} ={\frac {1}{2}}{\bigg [}\nabla (\mathbf {A} \cdot \mathbf {B} )-\nabla \times (\mathbf {A} \times \mathbf {B} )-\mathbf {B} \times (\nabla \times \mathbf {A} )-\mathbf {A} \times (\nabla \times \mathbf {B} )-\mathbf {B} (\nabla \cdot \mathbf {A} )+\mathbf {A} (\nabla \cdot \mathbf {B} ){\bigg ]}$[6] • $(\mathbf {A} \cdot \nabla )\mathbf {A} ={\frac {1}{2}}\nabla |\mathbf {A} |^{2}-\mathbf {A} \times (\nabla \times \mathbf {A} )={\frac {1}{2}}\nabla |\mathbf {A} |^{2}+(\nabla \times \mathbf {A} )\times \mathbf {A} $ Second derivatives • $\nabla \cdot (\nabla \times \mathbf {A} )=0$ • $\nabla \times (\nabla \psi )=\mathbf {0} $ • $\nabla \cdot (\nabla \psi )=\nabla ^{2}\psi $ (scalar Laplacian) • $\nabla \left(\nabla \cdot \mathbf {A} \right)-\nabla \times \left(\nabla \times \mathbf {A} \right)=\nabla ^{2}\mathbf {A} $ (vector Laplacian) • $\nabla \cdot (\phi \nabla \psi )=\phi \nabla ^{2}\psi +\nabla \phi \cdot \nabla \psi $ • $\psi \nabla ^{2}\phi -\phi \nabla ^{2}\psi =\nabla \cdot \left(\psi \nabla \phi -\phi \nabla \psi \right)$ • $\nabla ^{2}(\phi \psi )=\phi \nabla ^{2}\psi +2(\nabla \phi )\cdot (\nabla \psi )+\left(\nabla ^{2}\phi \right)\psi $ • $\nabla ^{2}(\psi \mathbf {A} )=\mathbf {A} \nabla ^{2}\psi +2(\nabla \psi \cdot \nabla )\mathbf {A} +\psi \nabla ^{2}\mathbf {A} $ • $\nabla ^{2}(\mathbf {A} \cdot \mathbf {B} )=\mathbf {A} \cdot \nabla ^{2}\mathbf {B} -\mathbf {B} \cdot \nabla ^{2}\!\mathbf {A} +2\nabla \cdot ((\mathbf {B} \cdot \nabla )\mathbf {A} +\mathbf {B} \times (\nabla \times \mathbf {A} ))$ (Green's vector identity) Third derivatives • $\nabla ^{2}(\nabla \psi )=\nabla (\nabla \cdot (\nabla \psi ))=\nabla \left(\nabla ^{2}\psi \right)$ • $\nabla ^{2}(\nabla \cdot \mathbf {A} )=\nabla \cdot (\nabla (\nabla \cdot \mathbf {A} ))=\nabla \cdot \left(\nabla ^{2}\mathbf {A} \right)$ • $\nabla ^{2}(\nabla \times \mathbf {A} )=-\nabla \times (\nabla \times (\nabla \times \mathbf {A} ))=\nabla \times \left(\nabla ^{2}\mathbf {A} \right)$ Integration Below, 
the curly symbol ∂ means "boundary of" a surface or solid. Surface–volume integrals In the following surface–volume integral theorems, V denotes a three-dimensional volume with a corresponding two-dimensional boundary S = ∂V (a closed surface): • $\oiint _{\partial V}\psi \,d\mathbf {S} \ =\ \iiint _{V}\nabla \psi \,dV$ • $\oiint _{\partial V}\mathbf {A} \cdot d\mathbf {S} \ =\ \iiint _{V}\nabla \cdot \mathbf {A} \,dV$ (divergence theorem) • $\oiint _{\partial V}\mathbf {A} \times d\mathbf {S} \ =\ -\iiint _{V}\nabla \times \mathbf {A} \,dV$ • $\oiint _{\partial V}\psi \nabla \!\varphi \cdot d\mathbf {S} \ =\ \iiint _{V}\left(\psi \nabla ^{2}\!\varphi +\nabla \!\varphi \cdot \nabla \!\psi \right)\,dV$ (Green's first identity) • $\oiint _{\partial V}\left(\psi \nabla \!\varphi -\varphi \nabla \!\psi \right)\cdot d\mathbf {S} \ =\ \oiint _{\partial V}\left(\psi {\frac {\partial \varphi }{\partial n}}-\varphi {\frac {\partial \psi }{\partial n}}\right)dS\ =\ \iiint _{V}\left(\psi \nabla ^{2}\!\varphi -\varphi \nabla ^{2}\!\psi \right)\,dV$ (Green's second identity) • $\iiint _{V}\mathbf {A} \cdot \nabla \psi \,dV\ =\ \oiint _{\partial V}\psi \mathbf {A} \cdot d\mathbf {S} -\iiint _{V}\psi \nabla \cdot \mathbf {A} \,dV$ (integration by parts) • $\iiint _{V}\psi \nabla \cdot \mathbf {A} \,dV\ =\ \oiint _{\partial V}\psi \mathbf {A} \cdot d\mathbf {S} -\iiint _{V}\mathbf {A} \cdot \nabla \psi \,dV$ (integration by parts) • $\iiint _{V}\mathbf {A} \cdot \left(\nabla \times \mathbf {B} \right)\,dV\ =\ -\oiint _{\partial V}\left(\mathbf {A} \times \mathbf {B} \right)\cdot d\mathbf {S} +\iiint _{V}\left(\nabla \times \mathbf {A} \right)\cdot \mathbf {B} \,dV$ (integration by parts) Curve–surface integrals In the following curve–surface integral theorems, S denotes a 2d open surface with a corresponding 1d boundary C = ∂S (a closed curve): • $\oint _{\partial S}\mathbf {A} \cdot d{\boldsymbol {\ell }}\ =\ \iint _{S}\left(\nabla \times \mathbf {A} \right)\cdot d\mathbf {S} $ (Stokes' theorem) • $\oint _{\partial S}\psi \,d{\boldsymbol {\ell }}\ =\ -\iint _{S}\nabla \psi \times d\mathbf {S} $ • $\oint _{\partial S}\mathbf {A} \times d{\boldsymbol {\ell }}\ =\ -\iint _{S}\left(\nabla \mathbf {A} -(\nabla \cdot \mathbf {A} )\mathbf {1} \right)\cdot d\mathbf {S} \ =\ -\iint _{S}\left(d\mathbf {S} \times \nabla \right)\times \mathbf {A} $ Integration around a closed curve in the clockwise sense is the negative of the same line integral in the counterclockwise sense (analogous to interchanging the limits in a definite integral): $\oint _{\partial S}\mathbf {A} \cdot d{\boldsymbol {\ell }}\ =\ -\oint _{-\partial S}\mathbf {A} \cdot d{\boldsymbol {\ell }},$ where $-\partial S$ denotes the same boundary curve traversed with the opposite orientation. Endpoint-curve integrals In the following endpoint–curve integral theorems, P denotes a 1d open path with signed 0d boundary points $\mathbf {q} -\mathbf {p} =\partial P$ and integration along P is from $\mathbf {p} $ to $\mathbf {q} $: • $\psi |_{\partial P}=\psi (\mathbf {q} )-\psi (\mathbf {p} )=\int _{P}\nabla \psi \cdot d{\boldsymbol {\ell }}$ (gradient theorem) • $\mathbf {A} |_{\partial P}=\mathbf {A} (\mathbf {q} )-\mathbf {A} (\mathbf {p} )=\int _{P}\left(d{\boldsymbol {\ell }}\cdot \nabla \right)\mathbf {A} $ See also • Comparison of vector algebra and geometric algebra • Del in cylindrical and spherical coordinates – Mathematical gradient operator in certain coordinate systems • Differentiation rules – Rules for computing derivatives of functions •
Exterior calculus identities • Exterior derivative – Operation which takes a certain tensor from p to p+1 forms • List of limits • Table of derivatives – Rules for computing derivatives of functions • Vector algebra relations – Formulas about vectors in three-dimensional Euclidean space References 1. Feynman, R. P.; Leighton, R. B.; Sands, M. (1964). The Feynman Lectures on Physics. Addison-Wesley. Vol II, p. 27–4. ISBN 0-8053-9049-9. 2. Kholmetskii, A. L.; Missevitch, O. V. (2005). "The Faraday induction law in relativity theory". p. 4. arXiv:physics/0504223. 3. Doran, C.; Lasenby, A. (2003). Geometric algebra for physicists. Cambridge University Press. p. 169. ISBN 978-0-521-71595-9. 4. Kelly, P. (2013). "Chapter 1.14 Tensor Calculus 1: Tensor Fields" (PDF). Mechanics Lecture Notes Part III: Foundations of Continuum Mechanics. University of Auckland. Retrieved 7 December 2017. 5. "lecture15.pdf" (PDF). 6. Kuo, Kenneth K.; Acharya, Ragini (2012). Applications of turbulent and multi-phase combustion. Hoboken, N.J.: Wiley. p. 520. doi:10.1002/9781118127575.app1. ISBN 9781118127575. Archived from the original on 19 April 2021. Retrieved 19 April 2020. Further reading • Balanis, Constantine A. (23 May 1989). Advanced Engineering Electromagnetics. ISBN 0-471-62194-3. • Schey, H. M. (1997). Div Grad Curl and all that: An informal text on vector calculus. W. W. Norton & Company. ISBN 0-393-96997-5. • Griffiths, David J. (1999). Introduction to Electrodynamics. Prentice Hall. ISBN 0-13-805326-X.
Array (data structure) In computer science, an array is a data structure consisting of a collection of elements (values or variables), of the same memory size, each identified by at least one array index or key. An array is stored such that the position of each element can be computed from its index tuple by a mathematical formula.[1][2][3] The simplest type of data structure is a linear array, also called a one-dimensional array. For example, an array of ten 32-bit (4-byte) integer variables, with indices 0 through 9, may be stored as ten words at memory addresses 2000, 2004, 2008, ..., 2036 (in hexadecimal: 0x7D0, 0x7D4, 0x7D8, ..., 0x7F4) so that the element with index i has the address 2000 + (i × 4).[4] The memory address of the first element of an array is called first address, foundation address, or base address. Because the mathematical concept of a matrix can be represented as a two-dimensional grid, two-dimensional arrays are also sometimes called "matrices". In some cases the term "vector" is used in computing to refer to an array, although tuples rather than vectors are the more mathematically correct equivalent. Tables are often implemented in the form of arrays, especially lookup tables; the word "table" is sometimes used as a synonym of array. Arrays are among the oldest and most important data structures, and are used by almost every program. They are also used to implement many other data structures, such as lists and strings. They effectively exploit the addressing logic of computers. In most modern computers and many external storage devices, the memory is a one-dimensional array of words, whose indices are their addresses. Processors, especially vector processors, are often optimized for array operations. Arrays are useful mostly because the element indices can be computed at run time. Among other things, this feature allows a single iterative statement to process arbitrarily many elements of an array. For that reason, the elements of an array data structure are required to have the same size and should use the same data representation. The set of valid index tuples and the addresses of the elements (and hence the element addressing formula) are usually,[3][5] but not always,[2] fixed while the array is in use. The term "array" may also refer to an array data type, a kind of data type provided by most high-level programming languages that consists of a collection of values or variables that can be selected by one or more indices computed at run-time. Array types are often implemented by array structures; however, in some languages they may be implemented by hash tables, linked lists, search trees, or other data structures. The term is also used, especially in the description of algorithms, to mean associative array or "abstract array", a theoretical computer science model (an abstract data type or ADT) intended to capture the essential properties of arrays. History The first digital computers used machine-language programming to set up and access array structures for data tables, vector and matrix computations, and for many other purposes. John von Neumann wrote the first array-sorting program (merge sort) in 1945, during the building of the first stored-program computer.[6] Array indexing was originally done by self-modifying code, and later using index registers and indirect addressing.
Some mainframes designed in the 1960s, such as the Burroughs B5000 and its successors, used memory segmentation to perform index-bounds checking in hardware.[7] Assembly languages generally have no special support for arrays, other than what the machine itself provides. The earliest high-level programming languages, including FORTRAN (1957), Lisp (1958), COBOL (1960), and ALGOL 60 (1960), had support for multi-dimensional arrays, and so does C (1972). In C++ (1983), class templates exist for multi-dimensional arrays whose dimension is fixed at runtime[3][5] as well as for runtime-flexible arrays.[2] Applications Arrays are used to implement mathematical vectors and matrices, as well as other kinds of rectangular tables. Many databases, small and large, consist of (or include) one-dimensional arrays whose elements are records. Arrays are used to implement other data structures, such as lists, heaps, hash tables, deques, queues, stacks, strings, and VLists. Array-based implementations of other data structures are frequently simple and space-efficient (implicit data structures), requiring little space overhead, but may have poor space complexity, particularly when modified, compared to tree-based data structures (compare a sorted array to a search tree). One or more large arrays are sometimes used to emulate in-program dynamic memory allocation, particularly memory pool allocation. Historically, this has sometimes been the only way to allocate "dynamic memory" portably. Arrays can be used to determine partial or complete control flow in programs, as a compact alternative to (otherwise repetitive) multiple IF statements. They are known in this context as control tables and are used in conjunction with a purpose-built interpreter whose control flow is altered according to values contained in the array. The array may contain subroutine pointers (or relative subroutine numbers that can be acted upon by SWITCH statements) that direct the path of the execution. Element identifier and addressing formulas When data objects are stored in an array, individual objects are selected by an index that is usually a non-negative scalar integer. Indexes are also called subscripts. An index maps the array value to a stored object. There are three ways in which the elements of an array can be indexed: 0 (zero-based indexing) The first element of the array is indexed by subscript of 0.[8] 1 (one-based indexing) The first element of the array is indexed by subscript of 1. n (n-based indexing) The base index of an array can be freely chosen. Usually programming languages allowing n-based indexing also allow negative index values and other scalar data types like enumerations, or characters may be used as an array index. Using zero-based indexing is the design choice of many influential programming languages, including C, Java and Lisp. This leads to a simpler implementation where the subscript refers to an offset from the starting position of an array, so the first element has an offset of zero. Arrays can have multiple dimensions, thus it is not uncommon to access an array using multiple indices. For example, a two-dimensional array A with three rows and four columns might provide access to the element at the 2nd row and 4th column by the expression A[1][3] in the case of a zero-based indexing system. Thus two indices are used for a two-dimensional array, three for a three-dimensional array, and n for an n-dimensional array.
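The zero-based, offset-from-base addressing just described can be made visible in C by printing element addresses (a minimal sketch; the concrete addresses vary from run to run, only their spacing matters):

    #include <stdio.h>
    #include <stddef.h>

    int main(void) {
        int x[10];
        int A[3][4];        /* 3 rows, 4 columns, zero-based indices */

        for (int i = 0; i < 3; i++)
            printf("&x[%d] = %p  (offset %td bytes from &x[0])\n",
                   i, (void *)&x[i], (char *)&x[i] - (char *)&x[0]);

        A[1][3] = 42;       /* 2nd row, 4th column under zero-based indexing */
        printf("A[1][3] = %d\n", A[1][3]);
        return 0;
    }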
The number of indices needed to specify an element is called the dimension, dimensionality, or rank of the array. In standard arrays, each index is restricted to a certain range of consecutive integers (or consecutive values of some enumerated type), and the address of an element is computed by a "linear" formula on the indices. One-dimensional arrays A one-dimensional array (or single dimension array) is a type of linear array. Accessing its elements involves a single subscript which can either represent a row or column index. As an example consider the C declaration int anArrayName[10]; which declares a one-dimensional array of ten integers. Here, the array can store ten elements of type int. This array has indices starting from zero through nine. For example, the expressions anArrayName[0] and anArrayName[9] are the first and last elements respectively. For a vector with linear addressing, the element with index i is located at the address B + c × i, where B is a fixed base address and c a fixed constant, sometimes called the address increment or stride. If the valid element indices begin at 0, the constant B is simply the address of the first element of the array. For this reason, the C programming language specifies that array indices always begin at 0; and many programmers will call that element "zeroth" rather than "first". However, one can choose the index of the first element by an appropriate choice of the base address B. For example, if the array has five elements, indexed 1 through 5, and the base address B is replaced by B + 30c, then the indices of those same elements will be 31 to 35. If the numbering does not start at 0, the constant B may not be the address of any element. Multidimensional arrays For a multidimensional array, the element with indices i,j would have address B + c · i + d · j, where the coefficients c and d are the row and column address increments, respectively. More generally, in a k-dimensional array, the address of an element with indices i1, i2, ..., ik is B + c1 · i1 + c2 · i2 + … + ck · ik. For example: int a[2][3]; This means that array a has 2 rows and 3 columns, and the array is of integer type. Here six elements can be stored; they are laid out linearly, starting with the first row and continuing with the second, so the array is stored as a11, a12, a13, a21, a22, a23. This formula requires only k multiplications and k additions, for any array that can fit in memory. Moreover, if any coefficient is a fixed power of 2, the multiplication can be replaced by bit shifting. The coefficients ck must be chosen so that every valid index tuple maps to the address of a distinct element. If the minimum legal value for every index is 0, then B is the address of the element whose indices are all zero. As in the one-dimensional case, the element indices may be changed by changing the base address B. Thus, if a two-dimensional array has rows and columns indexed from 1 to 10 and 1 to 20, respectively, then replacing B by B + c1 − 3c2 will cause them to be renumbered from 0 through 9 and 4 through 23, respectively. Taking advantage of this feature, some languages (like FORTRAN 77) specify that array indices begin at 1, as in mathematical tradition, while other languages (like Fortran 90, Pascal and Algol) let the user choose the minimum value for each index. Dope vectors The addressing formula is completely defined by the dimension d, the base address B, and the increments c1, c2, ..., ck.
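The linear addressing formula can be verified directly in C. For the declaration int a[2][3] above, the row-major increments are c1 = 3 · sizeof(int) and c2 = sizeof(int), and recomputing each address as B + c1 · i + c2 · j reproduces what the compiler generates (a minimal sketch; the integer-to-pointer round trip is for demonstration only):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        int a[2][3] = {{11, 12, 13}, {21, 22, 23}};
        uintptr_t B = (uintptr_t)&a[0][0];  /* base address              */
        size_t c1 = 3 * sizeof(int);        /* row increment, in bytes   */
        size_t c2 = sizeof(int);            /* column increment, in bytes */

        for (int i = 0; i < 2; i++)
            for (int j = 0; j < 3; j++)
                printf("a[%d][%d] = %d  formula: %p  actual: %p\n",
                       i, j, a[i][j],
                       (void *)(B + c1*i + c2*j), (void *)&a[i][j]);
        return 0;
    }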
It is often useful to pack these parameters into a record called the array's descriptor, stride vector, or dope vector.[2][3] The size of each element, and the minimum and maximum values allowed for each index may also be included in the dope vector. The dope vector is a complete handle for the array, and is a convenient way to pass arrays as arguments to procedures. Many useful array slicing operations (such as selecting a sub-array, swapping indices, or reversing the direction of the indices) can be performed very efficiently by manipulating the dope vector.[2] Compact layouts Often the coefficients are chosen so that the elements occupy a contiguous area of memory. However, that is not necessary. Even if arrays are always created with contiguous elements, some array slicing operations may create non-contiguous sub-arrays from them. There are two systematic compact layouts for a two-dimensional array. For example, consider the matrix $A={\begin{bmatrix}1&2&3\\4&5&6\\7&8&9\end{bmatrix}}.$ In the row-major order layout (adopted by C for statically declared arrays), the elements in each row are stored in consecutive positions and all of the elements of a row have a lower address than any of the elements of a consecutive row, giving the stored order 1, 2, 3, 4, 5, 6, 7, 8, 9. In column-major order (traditionally used by Fortran), the elements in each column are consecutive in memory and all of the elements of a column have a lower address than any of the elements of a consecutive column, giving the stored order 1, 4, 7, 2, 5, 8, 3, 6, 9. For arrays with three or more indices, "row major order" puts in consecutive positions any two elements whose index tuples differ only by one in the last index. "Column major order" is analogous with respect to the first index. In systems which use processor cache or virtual memory, scanning an array is much faster if successive elements are stored in consecutive positions in memory, rather than sparsely scattered. This is known as spatial locality, which is a type of locality of reference. Many algorithms that use multidimensional arrays will scan them in a predictable order. A programmer (or a sophisticated compiler) may use this information to choose between row- or column-major layout for each array. For example, when computing the product A·B of two matrices, it would be best to have A stored in row-major order, and B in column-major order. Resizing Static arrays have a size that is fixed when they are created and consequently do not allow elements to be inserted or removed. However, by allocating a new array and copying the contents of the old array to it, it is possible to effectively implement a dynamic version of an array; see dynamic array. If this operation is done infrequently, insertions at the end of the array require only amortized constant time. Some array data structures do not reallocate storage, but do store a count of the number of elements of the array in use, called the count or size. This effectively makes the array a dynamic array with a fixed maximum size or capacity; Pascal strings are examples of this. Non-linear formulas More complicated (non-linear) formulas are occasionally used. For a compact two-dimensional triangular array, for instance, the addressing formula is a polynomial of degree 2. Efficiency Both store and select take (deterministic worst case) constant time. Arrays take linear (O(n)) space in the number of elements n that they hold.
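Returning to the Resizing subsection above: the capacity-doubling strategy behind dynamic arrays is easy to sketch in C (the names are illustrative and error handling is omitted for brevity); appending n elements costs O(n) element moves in total, i.e. amortized constant time per append:

    #include <stdio.h>
    #include <stdlib.h>

    /* A minimal growable array of ints: the capacity doubles when full. */
    typedef struct {
        int *data;
        size_t size;      /* elements in use (the "count") */
        size_t capacity;  /* allocated slots               */
    } DynArray;

    static void da_push(DynArray *a, int value) {
        if (a->size == a->capacity) {
            a->capacity = a->capacity ? 2 * a->capacity : 4;
            a->data = realloc(a->data, a->capacity * sizeof(int));
        }
        a->data[a->size++] = value;
    }

    int main(void) {
        DynArray a = {0};
        for (int i = 0; i < 100; i++)
            da_push(&a, i * i);
        printf("size = %zu, capacity = %zu, a[99] = %d\n",
               a.size, a.capacity, a.data[99]);
        free(a.data);
        return 0;
    }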
In an array with element size k and on a machine with a cache line size of B bytes, iterating through an array of n elements requires only about ⌈nk/B⌉ cache misses, because its elements occupy contiguous memory locations. This is roughly a factor of B/k better than the number of cache misses needed to access n elements at random memory locations. As a consequence, sequential iteration over an array is noticeably faster in practice than iteration over many other data structures, a property called locality of reference (this does not mean, however, that using a perfect or trivial hash within the same (local) array will not be even faster, and achievable in constant time). Libraries provide low-level optimized facilities for copying ranges of memory (such as memcpy) which can be used to move contiguous blocks of array elements significantly faster than can be achieved through individual element access. The speedup of such optimized routines varies by array element size, architecture, and implementation. Memory-wise, arrays are compact data structures with no per-element overhead. There may be a per-array overhead (e.g., to store index bounds) but this is language-dependent. It can also happen that elements stored in an array require less memory than the same elements stored in individual variables, because several array elements can be stored in a single word; such arrays are often called packed arrays. An extreme (but commonly used) case is the bit array, where every bit represents a single element. A single octet can thus hold up to 256 different combinations of up to 8 different conditions, in the most compact form. Array accesses with statically predictable access patterns are a major source of data parallelism. Comparison with other data structures Comparison of list data structures ("peek" means inspecting an element by index; excess space is the average overhead): Linked list — peek: Θ(n); insert/delete at beginning: Θ(1); at end: Θ(1) when the end element is known, Θ(n) when it is not; in the middle: peek time + Θ(1);[9][10] excess space: Θ(n). Array — peek: Θ(1); insert/delete: not supported; excess space: 0. Dynamic array — peek: Θ(1); at beginning: Θ(n); at end: Θ(1) amortized; in the middle: Θ(n); excess space: Θ(n).[11] Balanced tree — peek: Θ(log n); insert/delete at beginning, end, or middle: Θ(log n); excess space: Θ(n). Random-access list — peek: Θ(log n);[12] at beginning: Θ(1); at end or middle: not directly supported;[12] excess space: Θ(n). Hashed array tree — peek: Θ(1); at beginning: Θ(n); at end: Θ(1) amortized; in the middle: Θ(n); excess space: Θ(√n). Dynamic arrays or growable arrays are similar to arrays but add the ability to insert and delete elements; adding and deleting at the end is particularly efficient. However, they reserve linear (Θ(n)) additional storage, whereas arrays do not reserve additional storage. Associative arrays provide a mechanism for array-like functionality without huge storage overheads when the index values are sparse. For example, an array that contains values only at indexes 1 and 2 billion may benefit from using such a structure. Specialized associative arrays with integer keys include Patricia tries, Judy arrays, and van Emde Boas trees. Balanced trees require O(log n) time for indexed access, but also permit inserting or deleting elements in O(log n) time,[13] whereas growable arrays require linear (Θ(n)) time to insert or delete elements at an arbitrary position. Linked lists allow constant time removal and insertion in the middle but take linear time for indexed access. Their memory use is typically worse than arrays, but is still linear. An Iliffe vector is an alternative to a multidimensional array structure. It uses a one-dimensional array of references to arrays of one dimension less.
An Iliffe vector is an alternative to a multidimensional array structure. It uses a one-dimensional array of references to arrays of one dimension less. For two dimensions, in particular, this alternative structure would be a vector of pointers to vectors, with one pointer per row (as in C or C++). Thus an element in row i and column j of an array A would be accessed by double indexing (A[i][j] in typical notation). This alternative structure allows jagged arrays, where each row may have a different size, or, in general, where the valid range of each index depends on the values of all preceding indices. It also saves one multiplication (by the column address increment), replacing it by a bit shift (to index the vector of row pointers) and one extra memory access (fetching the row address), which may be worthwhile in some architectures.
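A short illustrative sketch of the contrast in Python: a list of row lists plays the role of the Iliffe vector and permits jagged rows, while a flat row-major array requires equal-length rows so that its addressing formula stays valid.

```python
# Iliffe-style structure: a one-dimensional list of references to rows.
triangle = [
    [1],
    [2, 3],
    [4, 5, 6],        # jagged: row i holds i + 1 elements
]
# Double indexing: the first index fetches the row object, the second
# indexes within it; no multiplication by a row stride is needed.
assert triangle[2][1] == 5

# Flat row-major alternative: every row must have the same length `cols`
# so that base + i*cols + j addresses element (i, j).
flat, cols = [1, 2, 3, 4, 5, 6], 3
assert flat[1 * cols + 1] == 5
```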
Dimension

The dimension of an array is the number of indices needed to select an element. Thus, if the array is seen as a function on a set of possible index combinations, it is the dimension of the space of which its domain is a discrete subset. Thus a one-dimensional array is a list of data, a two-dimensional array is a rectangle of data,[14] a three-dimensional array a block of data, etc.

This should not be confused with the dimension of the set of all matrices with a given domain, that is, the number of elements in the array. For example, an array with 5 rows and 4 columns is two-dimensional, but such matrices form a 20-dimensional space. Similarly, a three-dimensional vector can be represented by a one-dimensional array of size three.

See also

• Dynamic array
• Parallel array
• Variable-length array
• Bit array
• Array slicing
• Offset (computer science)
• Row- and column-major order
• Stride of an array

References

1. Black, Paul E. (13 November 2008). "array". Dictionary of Algorithms and Data Structures. National Institute of Standards and Technology. Retrieved 22 August 2010.
2. Andres, Bjoern; Koethe, Ullrich; Kroeger, Thorben; Hamprecht (2010). "Runtime-Flexible Multi-dimensional Arrays and Views for C++98 and C++0x". arXiv:1008.2909 [cs.DS].
3. Garcia, Ronald; Lumsdaine, Andrew (2005). "MultiArray: a C++ library for generic programming with arrays". Software: Practice and Experience. 35 (2): 159–188. doi:10.1002/spe.630. ISSN 0038-0644. S2CID 10890293.
4. Richardson, David R. (2002). The Book on Data Structures. iUniverse, 112 pages. ISBN 0-595-24039-9, ISBN 978-0-595-24039-5.
5. Veldhuizen, Todd L. (December 1998). Arrays in Blitz++ (PDF). Computing in Object-Oriented Parallel Environments. Lecture Notes in Computer Science. Vol. 1505. Springer Berlin Heidelberg. pp. 223–230. doi:10.1007/3-540-49372-7_24. ISBN 978-3-540-65387-5. Archived from the original (PDF) on 9 November 2016.
6. Knuth, Donald (1998). Sorting and Searching. The Art of Computer Programming. Vol. 3. Reading, MA: Addison-Wesley Professional. p. 159.
7. Levy, Henry M. (1984). Capability-based Computer Systems. Digital Press. p. 22. ISBN 9780932376220.
8. "Array Code Examples - PHP Array Functions - PHP code". Computer Programming Web programming Tips. Archived from the original on 13 April 2011. Retrieved 8 April 2011.
9. Day 1 Keynote - Bjarne Stroustrup: C++11 Style at GoingNative 2012 on channel9.msdn.com, from minute 45 or foil 44.
10. Number crunching: Why you should never, ever, EVER use linked-list in your code again, at kjellkod.wordpress.com.
11. Brodnik, Andrej; Carlsson, Svante; Sedgewick, Robert; Munro, J. I.; Demaine, E. D. (1999). Resizable Arrays in Optimal Time and Space (Technical Report CS-99-09) (PDF). Department of Computer Science, University of Waterloo.
12. Okasaki, Chris (1995). "Purely Functional Random-Access Lists". Proceedings of the Seventh International Conference on Functional Programming Languages and Computer Architecture: 86–95. doi:10.1145/224164.224187.
13. "Counted B-Trees".
14. "Two-Dimensional Arrays \ Processing.org". processing.org. Retrieved 1 May 2020.

External links

• Data Structures/Arrays at Wikibooks
Tensor density

In differential geometry, a tensor density or relative tensor is a generalization of the tensor field concept. A tensor density transforms as a tensor field when passing from one coordinate system to another (see tensor field), except that it is additionally multiplied or weighted by a power W of the Jacobian determinant of the coordinate transition function or its absolute value. A tensor density with a single index is called a vector density. A distinction is made among (authentic) tensor densities, pseudotensor densities, even tensor densities and odd tensor densities. Sometimes tensor densities with a negative weight W are called tensor capacity.[1][2][3] A tensor density can also be regarded as a section of the tensor product of a tensor bundle with a density bundle.

Motivation

In physics and related fields, it is often useful to work with the components of an algebraic object rather than the object itself. An example would be decomposing a vector into a sum of basis vectors weighted by some coefficients such as

${\vec {v}}=c_{1}{\vec {e}}_{1}+c_{2}{\vec {e}}_{2}+c_{3}{\vec {e}}_{3}$

where ${\vec {v}}$ is a vector in 3-dimensional Euclidean space, $c_{i}\in \mathbb {R}$ are scalar coefficients, and ${\vec {e}}_{i}$ are the usual standard basis vectors in Euclidean space. This is usually necessary for computational purposes, and can often be insightful when algebraic objects represent complex abstractions but their components have concrete interpretations. However, with this identification, one has to be careful to track changes of the underlying basis in which the quantity is expanded; it may in the course of a computation become expedient to change the basis while the vector ${\vec {v}}$ remains fixed in physical space. More generally, if an algebraic object represents a geometric object but is expressed in terms of a particular basis, then when the basis is changed, the representation must change as well. Physicists will often call this representation of a geometric object a tensor if it transforms under a sequence of linear maps given a linear change of basis (although confusingly others call the underlying geometric object which hasn't changed under the coordinate transformation a "tensor", a convention this article strictly avoids). In general there are representations which transform in arbitrary ways depending on how the geometric invariant is reconstructed from the representation. In certain special cases it is convenient to use representations which transform almost like tensors, but with an additional, nonlinear factor in the transformation.

A prototypical example is a matrix representing the cross product (area of spanned parallelogram) on $\mathbb {R} ^{2}.$ The representation in the standard basis is given by

${\vec {u}}\times {\vec {v}}={\begin{bmatrix}u_{1}&u_{2}\end{bmatrix}}{\begin{bmatrix}0&1\\-1&0\end{bmatrix}}{\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix}}=u_{1}v_{2}-u_{2}v_{1}$

If we now try to express this same expression in a basis other than the standard basis, then the components of the vectors will change, say according to ${\begin{bmatrix}u'_{1}&u'_{2}\end{bmatrix}}^{\textsf {T}}=A{\begin{bmatrix}u_{1}&u_{2}\end{bmatrix}}^{\textsf {T}}$ where $A$ is some 2 by 2 matrix of real numbers.
Given that the area of the spanned parallelogram is a geometric invariant, it cannot have changed under the change of basis, and so the new representation of this matrix must be:

$\left(A^{-1}\right)^{\textsf {T}}{\begin{bmatrix}0&1\\-1&0\end{bmatrix}}A^{-1}$

which, when expanded, is just the original expression but multiplied by the determinant of $A^{-1},$ which is also ${\frac {1}{\det A}}.$ In fact this representation could be thought of as a two-index tensor transformation, but instead it is computationally easier to think of the tensor transformation rule as multiplication by ${\frac {1}{\det A}}$ rather than as 2 matrix multiplications (in higher dimensions, the natural extension of this is $n$ multiplications by $n\times n$ matrices, which for large $n$ is completely infeasible). Objects which transform in this way are called tensor densities because they arise naturally when considering problems regarding areas and volumes, and so are frequently used in integration.
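A quick numerical check of this motivating example (an illustrative sketch assuming NumPy; the change-of-basis matrix is arbitrary):

```python
import numpy as np

M = np.array([[0.0, 1.0], [-1.0, 0.0]])   # the 2-D cross-product (area) form
A = np.array([[2.0, 1.0], [0.5, 3.0]])    # some invertible change of basis
A_inv = np.linalg.inv(A)

# Transforming M with two matrix multiplications, as a two-index tensor...
M_new = A_inv.T @ M @ A_inv
# ...is the same as simply multiplying it by 1/det(A):
assert np.allclose(M_new, M / np.linalg.det(A))

# The represented invariant (the spanned area) is unchanged:
u, v = np.array([1.0, 2.0]), np.array([3.0, -1.0])
assert np.isclose((A @ u) @ M_new @ (A @ v), u @ M @ v)
```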
Definition

Some authors classify tensor densities into the two types called (authentic) tensor densities and pseudotensor densities in this article. Other authors classify them differently, into the types called even tensor densities and odd tensor densities. When a tensor density weight is an integer there is an equivalence between these approaches that depends upon whether the integer is even or odd. Note that these classifications elucidate the different ways that tensor densities may transform somewhat pathologically under orientation-reversing coordinate transformations. Regardless of their classifications into these types, there is only one way that tensor densities transform under orientation-preserving coordinate transformations.

In this article we have chosen the convention that assigns a weight of +2 to $g=\det \left(g_{\rho \sigma }\right)$, the determinant of the metric tensor expressed with covariant indices. With this choice, classical densities, like charge density, will be represented by tensor densities of weight +1. Some authors use a sign convention for weights that is the negation of that presented here.[4]

In contrast to the meaning used in this article, in general relativity "pseudotensor" sometimes means an object that does not transform like a tensor or relative tensor of any weight.

Tensor and pseudotensor densities

For example, a mixed rank-two (authentic) tensor density of weight $W$ transforms as:[5][6]

${\mathfrak {T}}_{\beta }^{\alpha }=\left(\det {\left[{\frac {\partial {\bar {x}}^{\iota }}{\partial {x}^{\gamma }}}\right]}\right)^{W}\,{\frac {\partial {x}^{\alpha }}{\partial {\bar {x}}^{\delta }}}\,{\frac {\partial {\bar {x}}^{\epsilon }}{\partial {x}^{\beta }}}\,{\bar {\mathfrak {T}}}_{\epsilon }^{\delta }\,,$     ((authentic) tensor density of (integer) weight W)

where ${\bar {\mathfrak {T}}}$ is the rank-two tensor density in the ${\bar {x}}$ coordinate system, ${\mathfrak {T}}$ is the transformed tensor density in the ${x}$ coordinate system; and we use the Jacobian determinant. Because the determinant can be negative, which it is for an orientation-reversing coordinate transformation, this formula is applicable only when $W$ is an integer. (However, see even and odd tensor densities below.)

We say that a tensor density is a pseudotensor density when there is an additional sign flip under an orientation-reversing coordinate transformation. A mixed rank-two pseudotensor density of weight $W$ transforms as

${\mathfrak {T}}_{\beta }^{\alpha }=\operatorname {sgn} \left(\det {\left[{\frac {\partial {\bar {x}}^{\iota }}{\partial {x}^{\gamma }}}\right]}\right)\left(\det {\left[{\frac {\partial {\bar {x}}^{\iota }}{\partial {x}^{\gamma }}}\right]}\right)^{W}\,{\frac {\partial {x}^{\alpha }}{\partial {\bar {x}}^{\delta }}}\,{\frac {\partial {\bar {x}}^{\epsilon }}{\partial {x}^{\beta }}}\,{\bar {\mathfrak {T}}}_{\epsilon }^{\delta }\,,$     (pseudotensor density of (integer) weight W)

where sgn( ) is a function that returns +1 when its argument is positive or −1 when its argument is negative.

Even and odd tensor densities

The transformations for even and odd tensor densities have the benefit of being well defined even when $W$ is not an integer. Thus one can speak of, say, an odd tensor density of weight +2 or an even tensor density of weight −1/2.

When $W$ is an even integer the above formula for an (authentic) tensor density can be rewritten as

${\mathfrak {T}}_{\beta }^{\alpha }=\left\vert \det {\left[{\frac {\partial {\bar {x}}^{\iota }}{\partial {x}^{\gamma }}}\right]}\right\vert ^{W}\,{\frac {\partial {x}^{\alpha }}{\partial {\bar {x}}^{\delta }}}\,{\frac {\partial {\bar {x}}^{\epsilon }}{\partial {x}^{\beta }}}\,{\bar {\mathfrak {T}}}_{\epsilon }^{\delta }\,.$     (even tensor density of weight W)

Similarly, when $W$ is an odd integer the formula for an (authentic) tensor density can be rewritten as

${\mathfrak {T}}_{\beta }^{\alpha }=\operatorname {sgn} \left(\det {\left[{\frac {\partial {\bar {x}}^{\iota }}{\partial {x}^{\gamma }}}\right]}\right)\left\vert \det {\left[{\frac {\partial {\bar {x}}^{\iota }}{\partial {x}^{\gamma }}}\right]}\right\vert ^{W}\,{\frac {\partial {x}^{\alpha }}{\partial {\bar {x}}^{\delta }}}\,{\frac {\partial {\bar {x}}^{\epsilon }}{\partial {x}^{\beta }}}\,{\bar {\mathfrak {T}}}_{\epsilon }^{\delta }\,.$     (odd tensor density of weight W)

Weights of zero and one

A tensor density of any type that has weight zero is also called an absolute tensor. An (even) authentic tensor density of weight zero is also called an ordinary tensor. If a weight is not specified but the word "relative" or "density" is used in a context where a specific weight is needed, it is usually assumed that the weight is +1.

Algebraic properties

1. A linear combination (also known as a weighted sum) of tensor densities of the same type and weight $W$ is again a tensor density of that type and weight.
2. A product of two tensor densities of any types, and with weights $W_{1}$ and $W_{2}$, is a tensor density of weight $W_{1}+W_{2}.$ A product of authentic tensor densities and pseudotensor densities will be an authentic tensor density when an even number of the factors are pseudotensor densities; it will be a pseudotensor density when an odd number of the factors are pseudotensor densities. Similarly, a product of even tensor densities and odd tensor densities will be an even tensor density when an even number of the factors are odd tensor densities; it will be an odd tensor density when an odd number of the factors are odd tensor densities.
3. The contraction of indices on a tensor density with weight $W$ again yields a tensor density of weight $W.$[7]
4. Using (2) and (3) one sees that raising and lowering indices using the metric tensor (weight 0) leaves the weight unchanged.[8]

Matrix inversion and matrix determinant of tensor densities

If ${\mathfrak {T}}_{\alpha \beta }$ is a non-singular matrix and a rank-two tensor density of weight $W$ with covariant indices then its matrix inverse will be a rank-two tensor density of weight $-W$ with contravariant indices. Similar statements apply when the two indices are contravariant or are mixed covariant and contravariant.

If ${\mathfrak {T}}_{\alpha \beta }$ is a rank-two tensor density of weight $W$ with covariant indices then the matrix determinant $\det {\mathfrak {T}}_{\alpha \beta }$ will have weight $NW+2,$ where $N$ is the number of space-time dimensions. If ${\mathfrak {T}}^{\alpha \beta }$ is a rank-two tensor density of weight $W$ with contravariant indices then the matrix determinant $\det {\mathfrak {T}}^{\alpha \beta }$ will have weight $NW-2.$ The matrix determinant $\det {\mathfrak {T}}_{~\beta }^{\alpha }$ will have weight $NW.$

General relativity

Relation of Jacobian determinant and metric tensor

Any non-singular ordinary tensor $T_{\mu \nu }$ transforms as

$T_{\mu \nu }={\frac {\partial {\bar {x}}^{\kappa }}{\partial {x}^{\mu }}}{\bar {T}}_{\kappa \lambda }{\frac {\partial {\bar {x}}^{\lambda }}{\partial {x}^{\nu }}}\,,$

where the right-hand side can be viewed as the product of three matrices.
Taking the determinant of both sides of the equation (using that the determinant of a matrix product is the product of the determinants), dividing both sides by $\det \left({\bar {T}}_{\kappa \lambda }\right),$ and taking their square root gives

$\left\vert \det {\left[{\frac {\partial {\bar {x}}^{\iota }}{\partial {x}^{\gamma }}}\right]}\right\vert ={\sqrt {\frac {\det({T}_{\mu \nu })}{\det \left({\bar {T}}_{\kappa \lambda }\right)}}}\,.$

When the tensor $T$ is the metric tensor, ${g}_{\kappa \lambda },$ and ${\bar {x}}^{\iota }$ is a locally inertial coordinate system where ${\bar {g}}_{\kappa \lambda }=\eta _{\kappa \lambda }=\operatorname {diag} (-1,+1,+1,+1),$ the Minkowski metric, then $\det \left({\bar {g}}_{\kappa \lambda }\right)=\det(\eta _{\kappa \lambda })=-1$ and so

$\left\vert \det {\left[{\frac {\partial {\bar {x}}^{\iota }}{\partial {x}^{\gamma }}}\right]}\right\vert ={\sqrt {-{g}}}\,,$

where ${g}=\det \left({g}_{\mu \nu }\right)$ is the determinant of the metric tensor ${g}_{\mu \nu }.$

Use of metric tensor to manipulate tensor densities

Consequently, an even tensor density, ${\mathfrak {T}}_{\nu \dots }^{\mu \dots },$ of weight W, can be written in the form

${\mathfrak {T}}_{\nu \dots }^{\mu \dots }={\sqrt {-g}}\;^{W}T_{\nu \dots }^{\mu \dots }\,,$

where $T_{\nu \dots }^{\mu \dots }\,$ is an ordinary tensor. In a locally inertial coordinate system, where $g_{\kappa \lambda }=\eta _{\kappa \lambda },$ it will be the case that ${\mathfrak {T}}_{\nu \dots }^{\mu \dots }$ and $T_{\nu \dots }^{\mu \dots }\,$ will be represented with the same numbers.

When using the metric connection (Levi-Civita connection), the covariant derivative of an even tensor density is defined as

${\mathfrak {T}}_{\nu \dots ;\alpha }^{\mu \dots }={\sqrt {-g}}\;^{W}T_{\nu \dots ;\alpha }^{\mu \dots }={\sqrt {-g}}\;^{W}\left({\sqrt {-g}}\;^{-W}{\mathfrak {T}}_{\nu \dots }^{\mu \dots }\right)_{;\alpha }\,.$

For an arbitrary connection, the covariant derivative is defined by adding an extra term, namely

$-W\,\Gamma _{~\delta \alpha }^{\delta }\,{\mathfrak {T}}_{\nu \dots }^{\mu \dots }$

to the expression that would be appropriate for the covariant derivative of an ordinary tensor.
Equivalently, the product rule is obeyed:

$\left({\mathfrak {T}}_{\nu \dots }^{\mu \dots }{\mathfrak {S}}_{\tau \dots }^{\sigma \dots }\right)_{;\alpha }=\left({\mathfrak {T}}_{\nu \dots ;\alpha }^{\mu \dots }\right){\mathfrak {S}}_{\tau \dots }^{\sigma \dots }+{\mathfrak {T}}_{\nu \dots }^{\mu \dots }\left({\mathfrak {S}}_{\tau \dots ;\alpha }^{\sigma \dots }\right)\,,$

where, for the metric connection, the covariant derivative of any function of $g_{\kappa \lambda }$ is always zero:

${\begin{aligned}g_{\kappa \lambda ;\alpha }&=0\\\left({\sqrt {-g}}\;^{W}\right)_{;\alpha }&=\left({\sqrt {-g}}\;^{W}\right)_{,\alpha }-W\Gamma _{~\delta \alpha }^{\delta }{\sqrt {-g}}\;^{W}={\frac {W}{2}}g^{\kappa \lambda }g_{\kappa \lambda ,\alpha }{\sqrt {-g}}\;^{W}-W\Gamma _{~\delta \alpha }^{\delta }{\sqrt {-g}}\;^{W}=0\,.\end{aligned}}$

Examples

The expression ${\sqrt {-g}}$ is a scalar density. By the convention of this article it has a weight of +1.

The density of electric current ${\mathfrak {J}}^{\mu }$ (for example, ${\mathfrak {J}}^{2}$ is the amount of electric charge crossing the 3-volume element $dx^{3}\,dx^{4}\,dx^{1}$ divided by that element; do not use the metric in this calculation) is a contravariant vector density of weight +1. It is often written as ${\mathfrak {J}}^{\mu }=J^{\mu }{\sqrt {-g}}$ or ${\mathfrak {J}}^{\mu }=\varepsilon ^{\mu \alpha \beta \gamma }{\mathcal {J}}_{\alpha \beta \gamma }/3!,$ where $J^{\mu }\,$ and the differential form ${\mathcal {J}}_{\alpha \beta \gamma }$ are absolute tensors, and where $\varepsilon ^{\mu \alpha \beta \gamma }$ is the Levi-Civita symbol; see below.

The density of Lorentz force ${\mathfrak {f}}_{\mu }$ (that is, the linear momentum transferred from the electromagnetic field to matter within a 4-volume element $dx^{1}\,dx^{2}\,dx^{3}\,dx^{4}$ divided by that element; do not use the metric in this calculation) is a covariant vector density of weight +1.

In N-dimensional space-time, the Levi-Civita symbol may be regarded as either a rank-N covariant (odd) authentic tensor density of weight −1 ($\varepsilon _{\alpha _{1}\cdots \alpha _{N}}$) or a rank-N contravariant (odd) authentic tensor density of weight +1 ($\varepsilon ^{\alpha _{1}\cdots \alpha _{N}}$). Notice that the Levi-Civita symbol (so regarded) does not obey the usual convention for raising or lowering of indices with the metric tensor.
That is, it is true that

$\varepsilon ^{\alpha \beta \gamma \delta }\,g_{\alpha \kappa }\,g_{\beta \lambda }\,g_{\gamma \mu }g_{\delta \nu }\,=\,\varepsilon _{\kappa \lambda \mu \nu }\,g\,,$

but in general relativity, where $g=\det \left(g_{\rho \sigma }\right)$ is always negative, this is never equal to $\varepsilon _{\kappa \lambda \mu \nu }.$

The determinant of the metric tensor,

$g=\det \left(g_{\rho \sigma }\right)={\frac {1}{4!}}\varepsilon ^{\alpha \beta \gamma \delta }\varepsilon ^{\kappa \lambda \mu \nu }g_{\alpha \kappa }g_{\beta \lambda }g_{\gamma \mu }g_{\delta \nu }\,,$

is an (even) authentic scalar density of weight +2, being the contraction of the product of two (odd) authentic tensor densities of weight +1 and four (even) authentic tensor densities of weight 0.

See also

• Action (physics) – Physical quantity of dimension energy × time
• Conservation law – Scientific law regarding conservation of a physical property
• Noether's theorem – Statement relating differentiable symmetries to conserved quantities
• Pseudotensor – Type of physical quantity
• Relative scalar
• Variational principle – Scientific principles enabling the use of the calculus of variations

Notes

1. Weinreich, Gabriel (July 6, 1998). Geometrical Vectors. pp. 112, 115. ISBN 978-0226890487.
2. Papastavridis, John G. (Dec 18, 1998). Tensor Calculus and Analytical Dynamics. CRC Press. ISBN 978-0849385148.
3. Ruiz-Tolosa, Juan R.; Castillo, Enrique (30 Mar 2006). From Vectors to Tensors. Springer Science & Business Media. ISBN 978-3540228875.
4. E.g., Weinberg 1972, p. 98. The convention chosen here involves in the formulae below the Jacobian determinant of the inverse transition ${\bar {x}}\to x$, while the opposite convention considers the forward transition $x\to {\bar {x}}$, resulting in a flip of sign of the weight.
5. M.R. Spiegel; S. Lipschutz; D. Spellman (2009). Vector Analysis (2nd ed.). New York: Schaum's Outline Series. p. 198. ISBN 978-0-07-161545-7.
6. C.B. Parker (1994). McGraw Hill Encyclopaedia of Physics (2nd ed.). p. 1417. ISBN 0-07-051400-3.
7. Weinberg 1972, p. 100.
8. Weinberg 1972, p. 100.

References

• Spivak, Michael (1999), A Comprehensive Introduction to Differential Geometry, Vol. I (3rd ed.), p. 134.
• Kuptsov, L.P. (2001) [1994], "Tensor Density", Encyclopedia of Mathematics, EMS Press.
• Misner, Charles; Thorne, Kip S.; Wheeler, John Archibald (1973). Gravitation. W. H. Freeman. p. 501ff. ISBN 0-7167-0344-0.
• Weinberg, Steven (1972), Gravitation and Cosmology, John Wiley & Sons, Inc., ISBN 0-471-92567-5.
Vector field reconstruction

Vector field reconstruction[1] is a method of creating a vector field from experimental or computer-generated data, usually with the goal of finding a differential equation model of the system.

A differential equation model is one that describes the value of dependent variables as they evolve in time or space by giving equations involving those variables and their derivatives with respect to some independent variables, usually time and/or space. An ordinary differential equation is one in which the system's dependent variables are functions of only one independent variable. Many physical, chemical, biological and electrical systems are well described by ordinary differential equations. Frequently we assume a system is governed by differential equations, but we do not have exact knowledge of the influence of various factors on the state of the system. For instance, we may have an electrical circuit that in theory is described by a system of ordinary differential equations, but due to the tolerance of resistors, variations of the supply voltage, or interference from outside influences we do not know the exact parameters of the system. For some systems, especially those that support chaos, a small change in parameter values can cause a large change in the behavior of the system, so an accurate model is extremely important. Therefore, it may be necessary to construct more exact differential equations by building them up based on the actual system performance rather than a theoretical model.

Ideally, one would measure all the dynamical variables involved over an extended period of time, using many different initial conditions, then build or fine-tune a differential equation model based on these measurements. In some cases we may not know enough about the processes involved in a system to formulate a model at all. In other cases, we may have access to only one dynamical variable for our measurements, i.e., we have a scalar time series. If we only have a scalar time series, we need to use the method of time delay embedding or derivative coordinates to get a large enough set of dynamical variables to describe the system.

In a nutshell, once we have a set of measurements of the system state over some period of time, we find the derivatives of these measurements, which gives us a local vector field, then determine a global vector field consistent with this local field. This is usually done by a least squares fit to the derivative data.

Formulation

In the best possible case, one has data streams of measurements of all the system variables, equally spaced in time, say s1(t), s2(t), ..., sk(t) for t = t1, t2, ..., tn, beginning at several different initial conditions. Then the task of finding a vector field, and thus a differential equation model, consists of fitting functions, for instance a cubic spline, to the data to obtain a set of continuous time functions x1(t), x2(t), ..., xk(t), computing time derivatives dx1/dt, dx2/dt, ..., dxk/dt of the functions, then making a least squares fit using some sort of orthogonal basis functions (orthogonal polynomials, radial basis functions, etc.) to each component of the tangent vectors to find a global vector field. A differential equation can then be read off the global vector field.
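Here is a minimal sketch of this pipeline under simplifying assumptions (NumPy for the least squares step; finite differences instead of a spline fit; a plain monomial basis rather than an orthogonal one; the simulated system and all names are illustrative):

```python
import numpy as np

# 1. "Measurements": simulate x' = y, y' = -x - 0.1*y with small Euler steps.
dt, n = 0.001, 20000
xs = np.empty((n, 2))
xs[0] = (1.0, 0.0)
for k in range(n - 1):
    x, y = xs[k]
    xs[k + 1] = (x + dt * y, y + dt * (-x - 0.1 * y))

# 2. Local vector field: central-difference estimates of the time derivatives.
dxs = (xs[2:] - xs[:-2]) / (2 * dt)
pts = xs[1:-1]

# 3. Global fit: regress each derivative component on the basis {1, x, y}.
basis = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
coeffs, *_ = np.linalg.lstsq(basis, dxs, rcond=None)

print(np.round(coeffs.T, 3))
# Expect roughly [[0, 0, 1], [0, -1, -0.1]], i.e. x' ≈ y and y' ≈ -x - 0.1*y,
# so the differential equation can be read off the fitted vector field.
```

The following paragraphs explain why, when the polynomial degree must be raised, an orthogonal basis is preferable to the plain monomials used in this sketch.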
There are various methods of creating the basis functions for the least squares fit. The most common method is the Gram–Schmidt process, which creates a set of orthogonal basis vectors that can then easily be normalized. This method begins by selecting any basis $\beta =\{v_{1},v_{2},\ldots ,v_{n}\}.$ The first vector is taken unchanged, $u_{1}=v_{1}.$ Next, $v_{2}$ is made orthogonal to it by setting $u_{2}=v_{2}-\operatorname {proj} _{u_{1}}v_{2}.$ The process is repeated for the remaining vectors, the general vector being

$u_{k}=v_{k}-\sum _{j=1}^{k-1}\operatorname {proj} _{u_{j}}v_{k}.$

This creates a set of orthogonal basis vectors. The reason for using an orthogonal basis rather than an arbitrary one arises from the least squares fitting done next.

Creating a least squares fit begins by assuming some functional form, in the case of reconstruction an nth-degree polynomial, and fitting its constant coefficients to the data. The accuracy of the fit can be increased by increasing the degree of the polynomial being used. If a set of non-orthogonal basis functions is used, increasing that degree makes it necessary to recalculate all of the constant coefficients of the fitted function; with an orthogonal set of basis functions, the previously computed coefficients do not change.

Applications

Vector field reconstruction has several applications, and many different approaches. Some mathematicians have not only used radial basis functions and polynomials to reconstruct a vector field, but have also used Lyapunov exponents and singular value decomposition.[2] Gouesbet and Letellier used a multivariate polynomial approximation and least squares to reconstruct their vector field. This method was applied to the Rössler system and the Lorenz system, as well as to thermal lens oscillations. These systems can be written in the standard form X′ = Y, Y′ = Z, Z′ = F(X, Y, Z), where F(X, Y, Z) is known as the standard function.[3]

Implementation issues

In some situations the model is not very efficient, and difficulties can arise if the model has a large number of coefficients or exhibits divergent solutions. For example, nonautonomous differential equations can give rise to the difficulties just described.[4] In such cases, modifying the standard approach can give a better way of developing a global vector field reconstruction. Usually the system being modeled in this way is a chaotic dynamical system, because chaotic systems explore a large part of the phase space, and the estimate of the global dynamics based on the local dynamics will be better than with a system exploring only a small part of the space.

Frequently, one has only a single scalar time series measurement from a system known to have more than one degree of freedom. The time series may not even be from a system variable, but may instead be a function of all the variables, such as temperature in a stirred tank reactor using several chemical species. In this case, one must use the technique of delay coordinate embedding,[5] where a state vector consisting of the data at time t and several delayed versions of the data is constructed. A comprehensive review of the topic is available in the literature.[6]

References

1. Letellier, C.; Le Sceller, L.; Maréchal, E.; Dutertre, P.; Maheu, B.; et al. (1995-05-01). "Global vector field reconstruction from a chaotic experimental signal in copper electrodissolution". Physical Review E. American Physical Society (APS). 51 (5): 4262–4266. Bibcode:1995PhRvE..51.4262L. doi:10.1103/physreve.51.4262. ISSN 1063-651X. PMID 9963137.
"Global vector-field reconstruction of nonlinear dynamical systems from a time series with SVD method and validation with Lyapunov exponents". Chinese Physics. IOP Publishing. 12 (12): 1366–1373. Bibcode:2003ChPhy..12.1366L. doi:10.1088/1009-1963/12/12/005. ISSN 1009-1963. 3. Gouesbet, G.; Letellier, C. (1994-06-01). "Global vector-field reconstruction by using a multivariate polynomial L2 approximation on nets". Physical Review E. American Physical Society (APS). 49 (6): 4955–4972. Bibcode:1994PhRvE..49.4955G. doi:10.1103/physreve.49.4955. ISSN 1063-651X. PMID 9961817. 4. Bezruchko, Boris P.; Smirnov, Dmitry A. (2000-12-20). "Constructing nonautonomous differential equations from experimental time series". Physical Review E. American Physical Society (APS). 63 (1): 016207. Bibcode:2000PhRvE..63a6207B. doi:10.1103/physreve.63.016207. ISSN 1063-651X. PMID 11304335. 5. Embedology, Tim Sauer, James A. Yorke, and Martin Casdagli, Santa Fe Institute working paper 6. G. Gouesbet, S. Meunier-Guttin-Cluzel and O. Ménard, editors. Chaos and its reconstruction. Novascience Publishers, New-York (2003)
Vector fields in cylindrical and spherical coordinates

Note: This page uses common physics notation for spherical coordinates, in which $\theta $ is the angle between the z axis and the radius vector connecting the origin to the point in question, while $\phi $ is the angle between the projection of the radius vector onto the x-y plane and the x axis. Several other definitions are in use, and so care must be taken in comparing different sources.[1]

Cylindrical coordinate system

Vector fields

Vectors are defined in cylindrical coordinates by (ρ, φ, z), where
• ρ is the length of the vector projected onto the xy-plane,
• φ is the angle between the projection of the vector onto the xy-plane (i.e. ρ) and the positive x-axis (0 ≤ φ < 2π),
• z is the regular z-coordinate.

(ρ, φ, z) is given in terms of Cartesian coordinates by:

${\begin{bmatrix}\rho \\\phi \\z\end{bmatrix}}={\begin{bmatrix}{\sqrt {x^{2}+y^{2}}}\\\operatorname {arctan} (y/x)\\z\end{bmatrix}},\ \ \ 0\leq \phi <2\pi ,$

or inversely by:

${\begin{bmatrix}x\\y\\z\end{bmatrix}}={\begin{bmatrix}\rho \cos \phi \\\rho \sin \phi \\z\end{bmatrix}}.$

Any vector field can be written in terms of the unit vectors as:

$\mathbf {A} =A_{x}\mathbf {\hat {x}} +A_{y}\mathbf {\hat {y}} +A_{z}\mathbf {\hat {z}} =A_{\rho }\mathbf {\hat {\rho }} +A_{\phi }{\boldsymbol {\hat {\phi }}}+A_{z}\mathbf {\hat {z}} $

The cylindrical unit vectors are related to the Cartesian unit vectors by:

${\begin{bmatrix}\mathbf {\hat {\rho }} \\{\boldsymbol {\hat {\phi }}}\\\mathbf {\hat {z}} \end{bmatrix}}={\begin{bmatrix}\cos \phi &\sin \phi &0\\-\sin \phi &\cos \phi &0\\0&0&1\end{bmatrix}}{\begin{bmatrix}\mathbf {\hat {x}} \\\mathbf {\hat {y}} \\\mathbf {\hat {z}} \end{bmatrix}}$

Note: the matrix is an orthogonal matrix, that is, its inverse is simply its transpose.

Time derivative of a vector field

To find out how the vector field A changes in time, the time derivatives should be calculated. For this purpose Newton's notation will be used for the time derivative (${\dot {\mathbf {A} }}$). In Cartesian coordinates this is simply:

${\dot {\mathbf {A} }}={\dot {A}}_{x}{\hat {\mathbf {x} }}+{\dot {A}}_{y}{\hat {\mathbf {y} }}+{\dot {A}}_{z}{\hat {\mathbf {z} }}$

However, in cylindrical coordinates this becomes:

${\dot {\mathbf {A} }}={\dot {A}}_{\rho }{\hat {\boldsymbol {\rho }}}+A_{\rho }{\dot {\hat {\boldsymbol {\rho }}}}+{\dot {A}}_{\phi }{\hat {\boldsymbol {\phi }}}+A_{\phi }{\dot {\hat {\boldsymbol {\phi }}}}+{\dot {A}}_{z}{\hat {\boldsymbol {z}}}+A_{z}{\dot {\hat {\boldsymbol {z}}}}$

The time derivatives of the unit vectors are needed. They are given by:

${\begin{aligned}{\dot {\hat {\mathbf {\rho } }}}&={\dot {\phi }}{\hat {\boldsymbol {\phi }}}\\{\dot {\hat {\boldsymbol {\phi }}}}&=-{\dot {\phi }}{\hat {\mathbf {\rho } }}\\{\dot {\hat {\mathbf {z} }}}&=0\end{aligned}}$

So the time derivative simplifies to:

${\dot {\mathbf {A} }}={\hat {\boldsymbol {\rho }}}\left({\dot {A}}_{\rho }-A_{\phi }{\dot {\phi }}\right)+{\hat {\boldsymbol {\phi }}}\left({\dot {A}}_{\phi }+A_{\rho }{\dot {\phi }}\right)+{\hat {\mathbf {z} }}{\dot {A}}_{z}$
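As an illustrative check of the component transformation above (standard-library Python; function and variable names are ad hoc), the rotation matrix converts Cartesian vector components at a point into cylindrical components; math.atan2 is used so that φ lands in the correct quadrant, where the bare arctan(y/x) formula would be ambiguous:

```python
from math import atan2, cos, sin, hypot, isclose

def cylindrical_components(x, y, Ax, Ay, Az):
    """Cylindrical components (A_rho, A_phi, A_z) of a vector at point (x, y)."""
    phi = atan2(y, x)                       # quadrant-correct azimuth
    A_rho = cos(phi) * Ax + sin(phi) * Ay   # first row of the matrix above
    A_phi = -sin(phi) * Ax + cos(phi) * Ay  # second row
    return A_rho, A_phi, Az                 # the z-component is unchanged

# A purely radial field A = (x, y, 0)/rho has components (1, 0, 0) off the axis.
x, y = 3.0, 4.0
rho = hypot(x, y)
A_rho, A_phi, A_z = cylindrical_components(x, y, x / rho, y / rho, 0.0)
assert isclose(A_rho, 1.0) and isclose(A_phi, 0.0, abs_tol=1e-12) and A_z == 0.0
```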
Second time derivative of a vector field

The second time derivative is of interest in physics, as it is found in equations of motion for classical mechanical systems. The second time derivative of a vector field in cylindrical coordinates is given by:

$\mathbf {\ddot {A}} =\mathbf {\hat {\rho }} \left({\ddot {A}}_{\rho }-A_{\phi }{\ddot {\phi }}-2{\dot {A}}_{\phi }{\dot {\phi }}-A_{\rho }{\dot {\phi }}^{2}\right)+{\boldsymbol {\hat {\phi }}}\left({\ddot {A}}_{\phi }+A_{\rho }{\ddot {\phi }}+2{\dot {A}}_{\rho }{\dot {\phi }}-A_{\phi }{\dot {\phi }}^{2}\right)+\mathbf {\hat {z}} {\ddot {A}}_{z}$

To understand this expression, the position vector $\mathbf {P} =\rho \mathbf {\hat {\rho }} +z\mathbf {\hat {z}} $ is substituted for A. After substituting, the result is:

${\ddot {\mathbf {P} }}=\mathbf {\hat {\rho }} \left({\ddot {\rho }}-\rho {\dot {\phi }}^{2}\right)+{\boldsymbol {\hat {\phi }}}\left(\rho {\ddot {\phi }}+2{\dot {\rho }}{\dot {\phi }}\right)+\mathbf {\hat {z}} {\ddot {z}}$

In mechanics, the terms of this expression are called:

${\begin{aligned}{\ddot {\rho }}\mathbf {\hat {\rho }} &={\text{central outward acceleration}}\\-\rho {\dot {\phi }}^{2}\mathbf {\hat {\rho }} &={\text{centripetal acceleration}}\\\rho {\ddot {\phi }}{\boldsymbol {\hat {\phi }}}&={\text{angular acceleration}}\\2{\dot {\rho }}{\dot {\phi }}{\boldsymbol {\hat {\phi }}}&={\text{Coriolis effect}}\\{\ddot {z}}\mathbf {\hat {z}} &={\text{z-acceleration}}\end{aligned}}$

Spherical coordinate system

Vector fields

Vectors are defined in spherical coordinates by (r, θ, φ), where
• r is the length of the vector,
• θ is the angle between the positive Z-axis and the vector in question (0 ≤ θ ≤ π), and
• φ is the angle between the projection of the vector onto the xy-plane and the positive X-axis (0 ≤ φ < 2π).

(r, θ, φ) is given in terms of Cartesian coordinates by:

${\begin{bmatrix}r\\\theta \\\phi \end{bmatrix}}={\begin{bmatrix}{\sqrt {x^{2}+y^{2}+z^{2}}}\\\arccos(z/{\sqrt {x^{2}+y^{2}+z^{2}}})\\\arctan(y/x)\end{bmatrix}},\ \ \ 0\leq \theta \leq \pi ,\ \ \ 0\leq \phi <2\pi ,$

or inversely by:

${\begin{bmatrix}x\\y\\z\end{bmatrix}}={\begin{bmatrix}r\sin \theta \cos \phi \\r\sin \theta \sin \phi \\r\cos \theta \end{bmatrix}}.$

Any vector field can be written in terms of the unit vectors as:

$\mathbf {A} =A_{x}\mathbf {\hat {x}} +A_{y}\mathbf {\hat {y}} +A_{z}\mathbf {\hat {z}} =A_{r}{\boldsymbol {\hat {r}}}+A_{\theta }{\boldsymbol {\hat {\theta }}}+A_{\phi }{\boldsymbol {\hat {\phi }}}$

The spherical unit vectors are related to the Cartesian unit vectors by:

${\begin{bmatrix}{\boldsymbol {\hat {r}}}\\{\boldsymbol {\hat {\theta }}}\\{\boldsymbol {\hat {\phi }}}\end{bmatrix}}={\begin{bmatrix}\sin \theta \cos \phi &\sin \theta \sin \phi &\cos \theta \\\cos \theta \cos \phi &\cos \theta \sin \phi &-\sin \theta \\-\sin \phi &\cos \phi &0\end{bmatrix}}{\begin{bmatrix}\mathbf {\hat {x}} \\\mathbf {\hat {y}} \\\mathbf {\hat {z}} \end{bmatrix}}$

Note: the matrix is an orthogonal matrix, that is, its inverse is simply its transpose.

The Cartesian unit vectors are thus related to the spherical unit vectors by:

${\begin{bmatrix}\mathbf {\hat {x}} \\\mathbf {\hat {y}} \\\mathbf {\hat {z}} \end{bmatrix}}={\begin{bmatrix}\sin \theta \cos \phi &\cos \theta \cos \phi &-\sin \phi \\\sin \theta \sin \phi &\cos \theta \sin \phi &\cos \phi \\\cos \theta &-\sin \theta &0\end{bmatrix}}{\begin{bmatrix}{\boldsymbol {\hat {r}}}\\{\boldsymbol {\hat {\theta }}}\\{\boldsymbol {\hat {\phi }}}\end{bmatrix}}$
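A small numerical check (standard-library Python; the names are illustrative) that the spherical transformation matrix is orthogonal, so that applying its transpose recovers the Cartesian components, as the inverse relation above states:

```python
from math import sin, cos, isclose

theta, phi = 0.7, 2.1    # arbitrary angles
R = [
    [sin(theta) * cos(phi), sin(theta) * sin(phi),  cos(theta)],   # r-hat row
    [cos(theta) * cos(phi), cos(theta) * sin(phi), -sin(theta)],   # theta-hat row
    [-sin(phi),             cos(phi),               0.0],          # phi-hat row
]

def apply(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

def transpose(M):
    return [[M[k][i] for k in range(3)] for i in range(3)]

A_cart = [1.0, -2.0, 0.5]            # Cartesian components of some vector
A_sph = apply(R, A_cart)             # (A_r, A_theta, A_phi)
back = apply(transpose(R), A_sph)    # inverse transform = transpose
assert all(isclose(a, b, abs_tol=1e-12) for a, b in zip(A_cart, back))
```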
Time derivative of a vector field

To find out how the vector field A changes in time, the time derivatives should be calculated. In Cartesian coordinates this is simply:

$\mathbf {\dot {A}} ={\dot {A}}_{x}\mathbf {\hat {x}} +{\dot {A}}_{y}\mathbf {\hat {y}} +{\dot {A}}_{z}\mathbf {\hat {z}} $

However, in spherical coordinates this becomes:

$\mathbf {\dot {A}} ={\dot {A}}_{r}{\boldsymbol {\hat {r}}}+A_{r}{\boldsymbol {\dot {\hat {r}}}}+{\dot {A}}_{\theta }{\boldsymbol {\hat {\theta }}}+A_{\theta }{\boldsymbol {\dot {\hat {\theta }}}}+{\dot {A}}_{\phi }{\boldsymbol {\hat {\phi }}}+A_{\phi }{\boldsymbol {\dot {\hat {\phi }}}}$

The time derivatives of the unit vectors are needed. They are given by:

${\begin{aligned}{\boldsymbol {\dot {\hat {r}}}}&={\dot {\theta }}{\boldsymbol {\hat {\theta }}}+{\dot {\phi }}\sin \theta {\boldsymbol {\hat {\phi }}}\\{\boldsymbol {\dot {\hat {\theta }}}}&=-{\dot {\theta }}{\boldsymbol {\hat {r}}}+{\dot {\phi }}\cos \theta {\boldsymbol {\hat {\phi }}}\\{\boldsymbol {\dot {\hat {\phi }}}}&=-{\dot {\phi }}\sin \theta {\boldsymbol {\hat {r}}}-{\dot {\phi }}\cos \theta {\boldsymbol {\hat {\theta }}}\end{aligned}}$

Thus the time derivative becomes:

$\mathbf {\dot {A}} ={\boldsymbol {\hat {r}}}\left({\dot {A}}_{r}-A_{\theta }{\dot {\theta }}-A_{\phi }{\dot {\phi }}\sin \theta \right)+{\boldsymbol {\hat {\theta }}}\left({\dot {A}}_{\theta }+A_{r}{\dot {\theta }}-A_{\phi }{\dot {\phi }}\cos \theta \right)+{\boldsymbol {\hat {\phi }}}\left({\dot {A}}_{\phi }+A_{r}{\dot {\phi }}\sin \theta +A_{\theta }{\dot {\phi }}\cos \theta \right)$

See also

• Del in cylindrical and spherical coordinates, for the specification of gradient, divergence, curl, and Laplacian in various coordinate systems.

References

1. Wolfram MathWorld, spherical coordinates.
Vector space

In mathematics and physics, a vector space (also called a linear space) is a set whose elements, often called vectors, may be added together and multiplied ("scaled") by numbers called scalars. Scalars are often real numbers, but can be complex numbers or, more generally, elements of any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called vector axioms. The terms real vector space and complex vector space are often used to specify the nature of the scalars: real numbers or complex numbers.

Not to be confused with Vector field. "Linear space" redirects here; for a structure in incidence geometry, see Linear space (geometry).

Vector spaces generalize Euclidean vectors, which allow modeling of physical quantities, such as forces and velocity, that have not only a magnitude, but also a direction. The concept of vector spaces is fundamental for linear algebra, together with the concept of matrices, which allows computing in vector spaces. This provides a concise and synthetic way for manipulating and studying systems of linear equations.

Vector spaces are characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. This means that, for two vector spaces over a given field and with the same dimension, the properties that depend only on the vector-space structure are exactly the same (technically the vector spaces are isomorphic). A vector space is finite-dimensional if its dimension is a natural number. Otherwise, it is infinite-dimensional, and its dimension is an infinite cardinal. Finite-dimensional vector spaces occur naturally in geometry and related areas. Infinite-dimensional vector spaces occur in many areas of mathematics. For example, polynomial rings are countably infinite-dimensional vector spaces, and many function spaces have the cardinality of the continuum as a dimension.

Many vector spaces that are considered in mathematics are also endowed with other structures. This is the case of algebras, which include field extensions, polynomial rings, associative algebras and Lie algebras. This is also the case of topological vector spaces, which include function spaces, inner product spaces, normed spaces, Hilbert spaces and Banach spaces.

Definition and basic properties

In this article, vectors are represented in boldface to distinguish them from scalars.[nb 1]

A vector space over a field F is a non-empty set V together with two binary operations that satisfy the eight axioms listed below. In this context, the elements of V are commonly called vectors, and the elements of F are called scalars.
• The first operation, called vector addition or simply addition, assigns to any two vectors v and w in V a third vector in V which is commonly written as v + w, and called the sum of these two vectors.
• The second operation, called scalar multiplication, assigns to any scalar a in F and any vector v in V another vector in V, which is denoted av.[nb 2]

To have a vector space, the eight following axioms must be satisfied for every u, v and w in V, and a and b in F.[1]
• Associativity of vector addition: u + (v + w) = (u + v) + w.
• Commutativity of vector addition: u + v = v + u.
• Identity element of vector addition: there exists an element 0 ∈ V, called the zero vector, such that v + 0 = v for all v ∈ V.
• Inverse elements of vector addition: for every v ∈ V, there exists an element −v ∈ V, called the additive inverse of v, such that v + (−v) = 0.
• Compatibility of scalar multiplication with field multiplication: a(bv) = (ab)v.[nb 3]
• Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity in F.
• Distributivity of scalar multiplication with respect to vector addition: a(u + v) = au + av.
• Distributivity of scalar multiplication with respect to field addition: (a + b)v = av + bv.

When the scalar field is the real numbers the vector space is called a real vector space. When the scalar field is the complex numbers, the vector space is called a complex vector space. These two cases are the most common ones, but vector spaces with scalars in an arbitrary field F are also commonly considered. Such a vector space is called an F-vector space or a vector space over F.

An equivalent definition of a vector space can be given, which is much more concise but less elementary: the first four axioms (related to vector addition) say that a vector space is an abelian group under addition, and the four remaining axioms (related to scalar multiplication) say that this operation defines a ring homomorphism from the field F into the endomorphism ring of this group.

Subtraction of two vectors can be defined as $\mathbf {v} -\mathbf {w} =\mathbf {v} +(-\mathbf {w} ).$

Direct consequences of the axioms include that, for every $s\in F$ and $\mathbf {v} \in V,$ one has
• $0\mathbf {v} =\mathbf {0} ,$
• $s\mathbf {0} =\mathbf {0} ,$
• $(-1)\mathbf {v} =-\mathbf {v} ,$
• $s\mathbf {v} =\mathbf {0} $ implies $s=0$ or $\mathbf {v} =\mathbf {0} .$

Even more concisely, a vector space is an $F$-module, where $F$ is a field.

Related concepts and properties

Linear combination

Given a set G of elements of an F-vector space V, a linear combination of elements of G is an element of V of the form $a_{1}\mathbf {g} _{1}+a_{2}\mathbf {g} _{2}+\cdots +a_{k}\mathbf {g} _{k},$ where $a_{1},\ldots ,a_{k}\in F$ and $\mathbf {g} _{1},\ldots ,\mathbf {g} _{k}\in G.$ The scalars $a_{1},\ldots ,a_{k}$ are called the coefficients of the linear combination.

Linear independence

The elements of a subset G of an F-vector space V are said to be linearly independent if no element of G can be written as a linear combination of the other elements of G. Equivalently, they are linearly independent if two linear combinations of elements of G define the same element of V if and only if they have the same coefficients. Also equivalently, they are linearly independent if a linear combination results in the zero vector if and only if all its coefficients are zero.
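A concrete check (an illustrative sketch assuming NumPy): vectors are linearly independent exactly when the matrix having them as columns has full column rank.

```python
import numpy as np

g1 = np.array([1.0, 0.0, 2.0])
g2 = np.array([0.0, 1.0, 1.0])
g3 = g1 + 2.0 * g2                 # a linear combination of g1 and g2

G = np.column_stack([g1, g2, g3])
print(np.linalg.matrix_rank(G))        # 2: g1, g2, g3 are linearly dependent
print(np.linalg.matrix_rank(G[:, :2])) # 2 = number of columns: g1, g2 independent
```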
Linear subspace

A linear subspace or vector subspace W of a vector space V is a non-empty subset of V that is closed under vector addition and scalar multiplication; that is, the sum of two elements of W and the product of an element of W by a scalar belong to W. This implies that every linear combination of elements of W belongs to W. A linear subspace is a vector space for the induced addition and scalar multiplication; this means that the closure property implies that the axioms of a vector space are satisfied. The closure property also implies that every intersection of linear subspaces is a linear subspace.

Linear span

Given a subset G of a vector space V, the linear span or simply the span of G is the smallest linear subspace of V that contains G, in the sense that it is the intersection of all linear subspaces that contain G. The span of G is also the set of all linear combinations of elements of G. If W is the span of G, one says that G spans or generates W, and that G is a spanning set or a generating set of W.

Basis and dimension

A subset of a vector space is a basis if its elements are linearly independent and span the vector space. Every vector space has at least one basis, generally many (see Basis (linear algebra) § Proof that every vector space has a basis). Moreover, all bases of a vector space have the same cardinality, which is called the dimension of the vector space (see Dimension theorem for vector spaces). This is a fundamental property of vector spaces, which is detailed in the remainder of the section.

Bases are a fundamental tool for the study of vector spaces, especially when the dimension is finite. In the infinite-dimensional case, the existence of infinite bases, often called Hamel bases, depends on the axiom of choice. It follows that, in general, no basis can be explicitly described. For example, the real numbers form an infinite-dimensional vector space over the rational numbers, for which no specific basis is known.

Consider a basis $(\mathbf {b} _{1},\mathbf {b} _{2},\ldots ,\mathbf {b} _{n})$ of a vector space V of dimension n over a field F. The definition of a basis implies that every $\mathbf {v} \in V$ may be written $\mathbf {v} =a_{1}\mathbf {b} _{1}+\cdots +a_{n}\mathbf {b} _{n},$ with $a_{1},\dots ,a_{n}$ in F, and that this decomposition is unique. The scalars $a_{1},\ldots ,a_{n}$ are called the coordinates of v on the basis. They are also said to be the coefficients of the decomposition of v on the basis. One also says that the n-tuple of the coordinates is the coordinate vector of v on the basis, since the set $F^{n}$ of the n-tuples of elements of F is a vector space for componentwise addition and scalar multiplication, whose dimension is n.

The one-to-one correspondence between vectors and their coordinate vectors maps vector addition to vector addition and scalar multiplication to scalar multiplication. It is thus a vector space isomorphism, which allows translating reasonings and computations on vectors into reasonings and computations on their coordinates (see the sketch after the list below). If, in turn, these coordinates are arranged as matrices, these reasonings and computations on coordinates can be expressed concisely as reasonings and computations on matrices. Moreover, a linear equation relating matrices can be expanded into a system of linear equations, and, conversely, every such system can be compacted into a linear equation on matrices.

In summary, finite-dimensional linear algebra may be expressed in three equivalent languages:
• Vector spaces, which provide concise and coordinate-free statements,
• Matrices, which are convenient for expressing concisely explicit computations,
• Systems of linear equations, which provide more elementary formulations.
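The sketch below (illustrative, assuming NumPy) computes the coordinates of a vector on a basis of R^2 by solving the linear system whose coefficient matrix has the basis vectors as columns, and verifies the unique decomposition:

```python
import numpy as np

b1, b2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])  # a basis of R^2
B = np.column_stack([b1, b2])
v = np.array([3.0, 1.0])

a = np.linalg.solve(B, v)     # coordinate vector of v on the basis (b1, b2)
print(a)                      # [2. 1.]
assert np.allclose(a[0] * b1 + a[1] * b2, v)   # v = 2*b1 + 1*b2, uniquely
```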
Around 1636, French mathematicians René Descartes and Pierre de Fermat founded analytic geometry by identifying solutions to an equation of two variables with points on a plane curve.[2] To achieve geometric solutions without using coordinates, Bolzano introduced, in 1804, certain operations on points, lines and planes, which are predecessors of vectors.[3] Möbius (1827) introduced the notion of barycentric coordinates. Bellavitis (1833) introduced an equivalence relation on directed line segments that share the same length and direction which he called equipollence. A Euclidean vector is then an equivalence class of that relation.[4]
Vectors were reconsidered with the presentation of complex numbers by Argand and Hamilton and the inception of quaternions by the latter.[5] They are elements in R2 and R4; treating them using linear combinations goes back to Laguerre in 1867, who also defined systems of linear equations. In 1857, Cayley introduced the matrix notation which allows for a harmonization and simplification of linear maps. Around the same time, Grassmann studied the barycentric calculus initiated by Möbius. He envisaged sets of abstract objects endowed with operations.[6] In his work, the concepts of linear independence and dimension, as well as scalar products are present. Actually Grassmann's 1844 work exceeds the framework of vector spaces, since his considering multiplication, too, led him to what are today called algebras. Italian mathematician Peano was the first to give the modern definition of vector spaces and linear maps in 1888,[7] although he called them "linear systems".[8]
An important development of vector spaces is due to the construction of function spaces by Henri Lebesgue. This was later formalized by Banach and Hilbert, around 1920.[9] At that time, algebra and the new field of functional analysis began to interact, notably with key concepts such as spaces of p-integrable functions and Hilbert spaces.[10] Also at this time, the first studies concerning infinite-dimensional vector spaces were done.
Examples
Main article: Examples of vector spaces
Arrows in the plane
The first example of a vector space consists of arrows in a fixed plane, starting at one fixed point. This is used in physics to describe forces or velocities. Given any two such arrows, v and w, the parallelogram spanned by these two arrows contains one diagonal arrow that starts at the origin, too. This new arrow is called the sum of the two arrows, and is denoted v + w. In the special case of two arrows on the same line, their sum is the arrow on this line whose length is the sum or the difference of the lengths, depending on whether the arrows have the same direction. Another operation that can be done with arrows is scaling: given any positive real number a, the arrow that has the same direction as v, but is dilated or shrunk by multiplying its length by a, is called multiplication of v by a. It is denoted av. When a is negative, av is defined as the arrow pointing in the opposite direction instead. The following shows a few examples: if a = 2, the resulting vector aw has the same direction as w, but is stretched to the double length of w (right image below). Equivalently, 2w is the sum w + w. Moreover, (−1)v = −v has the opposite direction and the same length as v (blue vector pointing down in the right image).
Second example: ordered pairs of numbers
A second key example of a vector space is provided by pairs of real numbers x and y.
(The order of the components x and y is significant, so such a pair is also called an ordered pair.) Such a pair is written as (x, y). The sum of two such pairs and multiplication of a pair with a number is defined as follows: $(x_{1},y_{1})+(x_{2},y_{2})=(x_{1}+x_{2},y_{1}+y_{2})$ and $a(x,y)=(ax,ay).$ The first example above reduces to this example, if an arrow is represented by a pair of Cartesian coordinates of its endpoint.
Coordinate space
The simplest example of a vector space over a field F is the field F itself (as it is an abelian group for addition, a part of the requirements to be a field), equipped with its addition (which becomes vector addition) and its multiplication (which becomes scalar multiplication). More generally, all n-tuples (sequences of length n) $(a_{1},a_{2},\dots ,a_{n})$ of elements ai of F form a vector space that is usually denoted Fn and called a coordinate space.[11] The case n = 1 is the above-mentioned simplest example, in which the field F is also regarded as a vector space over itself. The case F = R and n = 2 (so R2) was discussed in the introduction above.
Complex numbers and other field extensions
The set of complex numbers C, that is, numbers that can be written in the form x + iy for real numbers x and y where i is the imaginary unit, forms a vector space over the reals with the usual addition and multiplication: (x + iy) + (a + ib) = (x + a) + i(y + b) and c ⋅ (x + iy) = (c ⋅ x) + i(c ⋅ y) for real numbers x, y, a, b and c. The various axioms of a vector space follow from the fact that the same rules hold for complex number arithmetic. In fact, the example of complex numbers is essentially the same as (that is, it is isomorphic to) the vector space of ordered pairs of real numbers mentioned above: if we think of the complex number x + iy as representing the ordered pair (x, y) in the complex plane then we see that the rules for addition and scalar multiplication correspond exactly to those in the earlier example. More generally, field extensions provide another class of examples of vector spaces, particularly in algebra and algebraic number theory: a field F containing a smaller field E is an E-vector space, by the given multiplication and addition operations of F.[12] For example, the complex numbers are a vector space over R, and the field extension $\mathbf {Q} (i{\sqrt {5}})$ is a vector space over Q.
Function spaces
Main article: Function space
Functions from any fixed set Ω to a field F also form vector spaces, by performing addition and scalar multiplication pointwise. That is, the sum of two functions f and g is the function $(f+g)$ given by $(f+g)(w)=f(w)+g(w),$ and similarly for multiplication. Such function spaces occur in many geometric situations, when Ω is the real line or an interval, or other subsets of R. Many notions in topology and analysis, such as continuity, integrability or differentiability, are well-behaved with respect to linearity: sums and scalar multiples of functions possessing such a property still have that property.[13] Therefore, the sets of such functions are vector spaces, whose study belongs to functional analysis.
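The pointwise operations translate directly into code. A short Python sketch of the vector-space structure on functions (sin and cos are arbitrary example elements):

```python
import math

# Pointwise addition and scalar multiplication of real-valued functions,
# mirroring the vector-space structure on functions Omega -> R.
def add(f, g):
    return lambda x: f(x) + g(x)

def scale(a, f):
    return lambda x: a * f(x)

h = add(scale(2.0, math.sin), math.cos)   # h(x) = 2*sin(x) + cos(x)
print(h(0.0))                             # 1.0, since 2*sin(0) + cos(0) = 1
```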
Linear equations
Main articles: Linear equation, Linear differential equation, and Systems of linear equations
Systems of homogeneous linear equations are closely tied to vector spaces.[14] For example, the solutions of ${\begin{aligned}a+3b+c&=0\\4a+2b+2c&=0\end{aligned}}$ are given by triples with arbitrary $a,$ $b=a/2,$ and $c=-5a/2.$ They form a vector space: sums and scalar multiples of such triples still satisfy the same ratios of the three variables; thus they are solutions, too. Matrices can be used to condense multiple linear equations as above into one vector equation, namely $A\mathbf {x} =\mathbf {0} ,$ where $A={\begin{bmatrix}1&3&1\\4&2&2\end{bmatrix}}$ is the matrix containing the coefficients of the given equations, $\mathbf {x} $ is the vector $(a,b,c),$ $A\mathbf {x} $ denotes the matrix product, and $\mathbf {0} =(0,0)$ is the zero vector. In a similar vein, the solutions of homogeneous linear differential equations form vector spaces. For example, $f^{\prime \prime }(x)+2f^{\prime }(x)+f(x)=0$ yields $f(x)=ae^{-x}+bxe^{-x},$ where $a$ and $b$ are arbitrary constants, and $e^{x}$ is the natural exponential function.
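The stated solution set can be checked numerically: the right-singular vectors of A whose singular values are (numerically) zero span the solutions of Ax = 0. A numpy sketch:

```python
import numpy as np

A = np.array([[1.0, 3.0, 1.0],
              [4.0, 2.0, 2.0]])

# Null space of A via the SVD.
_, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s.max()
null_space = Vt[sum(s > tol):]          # here: a single row, since rank(A) = 2

x = null_space[0]
print(np.allclose(A @ x, 0))            # True
print(x[1] / x[0], x[2] / x[0])         # ~0.5 and ~-2.5, i.e. b = a/2, c = -5a/2
```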
Linear maps and matrices
Main article: Linear map
The relation of two vector spaces can be expressed by a linear map or linear transformation. They are functions that reflect the vector space structure, that is, they preserve sums and scalar multiplication: $f(\mathbf {v} +\mathbf {w} )=f(\mathbf {v} )+f(\mathbf {w} ),{\text{ and }}$ $f(a\cdot \mathbf {v} )=a\cdot f(\mathbf {v} )$ for all $\mathbf {v} $ and $\mathbf {w} $ in $V,$ and all $a$ in $F.$[15] An isomorphism is a linear map f : V → W such that there exists an inverse map g : W → V, which is a map such that the two possible compositions f ∘ g : W → W and g ∘ f : V → V are identity maps. Equivalently, f is both one-to-one (injective) and onto (surjective).[16] If there exists an isomorphism between V and W, the two spaces are said to be isomorphic; they are then essentially identical as vector spaces, since all identities holding in V are, via f, transported to similar ones in W, and vice versa via g. For example, the "arrows in the plane" and "ordered pairs of numbers" vector spaces in the introduction are isomorphic: a planar arrow v departing from the origin of some (fixed) coordinate system can be expressed as an ordered pair by considering the x- and y-component of the arrow, as shown in the image at the right. Conversely, given a pair (x, y), the arrow going by x to the right (or to the left, if x is negative), and y up (down, if y is negative) gives back the arrow v.
Linear maps V → W between two vector spaces form a vector space HomF(V, W), also denoted L(V, W), or 𝓛(V, W).[17] The space of linear maps from V to F is called the dual vector space, denoted V∗.[18] Via the injective natural map V → V∗∗, any vector space can be embedded into its bidual; the map is an isomorphism if and only if the space is finite-dimensional.[19] Once a basis of V is chosen, linear maps f : V → W are completely determined by specifying the images of the basis vectors, because any element of V is expressed uniquely as a linear combination of them.[20] If dim V = dim W, a 1-to-1 correspondence between fixed bases of V and W gives rise to a linear map that maps any basis element of V to the corresponding basis element of W. It is an isomorphism, by its very definition.[21] Therefore, two vector spaces over a given field are isomorphic if their dimensions agree and vice versa. Another way to express this is that any vector space over a given field is completely classified (up to isomorphism) by its dimension, a single number. In particular, any n-dimensional F-vector space V is isomorphic to Fn. There is, however, no "canonical" or preferred isomorphism; actually an isomorphism φ : Fn → V is equivalent to the choice of a basis of V, by mapping the standard basis of Fn to V, via φ. The freedom of choosing a convenient basis is particularly useful in the infinite-dimensional context; see below.
Matrices
Main articles: Matrix and Determinant
Matrices are a useful notion to encode linear maps.[22] They are written as a rectangular array of scalars as in the image at the right. Any m-by-n matrix $A$ gives rise to a linear map from Fn to Fm, by the following map: $\mathbf {x} =(x_{1},x_{2},\ldots ,x_{n})\mapsto \left(\sum _{j=1}^{n}a_{1j}x_{j},\sum _{j=1}^{n}a_{2j}x_{j},\ldots ,\sum _{j=1}^{n}a_{mj}x_{j}\right),$ where $\sum $ denotes summation, or, using the matrix multiplication of the matrix $A$ with the coordinate vector $\mathbf {x} $: $\mathbf {x} \mapsto A\mathbf {x} .$ Moreover, after choosing bases of V and W, any linear map f : V → W is uniquely represented by a matrix via this assignment.[23] The determinant det (A) of a square matrix A is a scalar that tells whether the associated map is an isomorphism or not: to be so it is sufficient and necessary that the determinant is nonzero.[24] The linear transformation of Rn corresponding to a real n-by-n matrix is orientation preserving if and only if its determinant is positive.
Eigenvalues and eigenvectors
Main article: Eigenvalues and eigenvectors
Endomorphisms, linear maps f : V → V, are particularly important since in this case vectors v can be compared with their image under f, f(v). Any nonzero vector v satisfying λv = f(v), where λ is a scalar, is called an eigenvector of f with eigenvalue λ.[nb 4][25] Equivalently, v is an element of the kernel of the difference f − λ · Id (where Id is the identity map V → V). If V is finite-dimensional, this can be rephrased using determinants: f having eigenvalue λ is equivalent to $\det(f-\lambda \cdot \operatorname {Id} )=0.$ By spelling out the definition of the determinant, the expression on the left hand side can be seen to be a polynomial function in λ, called the characteristic polynomial of f.[26] If the field F is large enough to contain a zero of this polynomial (which automatically happens for F algebraically closed, such as F = C), any linear map has at least one eigenvector. The vector space V may or may not possess an eigenbasis, a basis consisting of eigenvectors. This phenomenon is governed by the Jordan canonical form of the map.[27][nb 5] The set of all eigenvectors corresponding to a particular eigenvalue of f forms a vector space known as the eigenspace corresponding to the eigenvalue (and f) in question. To achieve the spectral theorem, the corresponding statement in the infinite-dimensional case, the machinery of functional analysis is needed, see below.
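For a concrete matrix, eigenvalues, eigenvectors and the characteristic polynomial can all be computed numerically. A numpy sketch with an arbitrary symmetric 2-by-2 matrix (chosen so the eigenvalues are real and an eigenbasis exists):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)   # eigenvalues 3 and 1 (order may vary)

# Check the defining relation f(v) = lambda * v for each pair.
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))    # True, True

# Coefficients of the characteristic polynomial det(A - lambda*Id):
print(np.poly(A))                     # [ 1. -4.  3.], i.e. x^2 - 4x + 3
```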
Basic constructions
In addition to the above concrete examples, there are a number of standard linear algebraic constructions that yield vector spaces related to given ones. In addition to the definitions given below, they are also characterized by universal properties, which determine an object $X$ by specifying the linear maps from $X$ to any other vector space.
Subspaces and quotient spaces
Main articles: Linear subspace and Quotient vector space
A nonempty subset $W$ of a vector space $V$ that is closed under addition and scalar multiplication (and therefore contains the $\mathbf {0} $-vector of $V$) is called a linear subspace of $V,$ or simply a subspace of $V,$ when the ambient space is unambiguously a vector space.[28][nb 6] Subspaces of $V$ are vector spaces (over the same field) in their own right. The intersection of all subspaces containing a given set $S$ of vectors is called its span, and it is the smallest subspace of $V$ containing the set $S.$ Expressed in terms of elements, the span is the subspace consisting of all the linear combinations of elements of $S.$[29] A linear subspace of dimension 1 is a vector line. A linear subspace of dimension 2 is a vector plane. A linear subspace that contains all elements but one of a basis of the ambient space is a vector hyperplane. In a vector space of finite dimension $n,$ a vector hyperplane is thus a subspace of dimension $n-1.$
The counterpart to subspaces are quotient vector spaces.[30] Given any subspace $W\subseteq V,$ the quotient space $V/W$ ("$V$ modulo $W$") is defined as follows: as a set, it consists of $\mathbf {v} +W=\{\mathbf {v} +\mathbf {w} :\mathbf {w} \in W\},$ where $\mathbf {v} $ is an arbitrary vector in $V.$ The sum of two such elements $\mathbf {v} _{1}+W$ and $\mathbf {v} _{2}+W$ is $\left(\mathbf {v} _{1}+\mathbf {v} _{2}\right)+W,$ and scalar multiplication is given by $a\cdot (\mathbf {v} +W)=(a\cdot \mathbf {v} )+W.$ The key point in this definition is that $\mathbf {v} _{1}+W=\mathbf {v} _{2}+W$ if and only if the difference of $\mathbf {v} _{1}$ and $\mathbf {v} _{2}$ lies in $W.$[nb 7] This way, the quotient space "forgets" information that is contained in the subspace $W.$
The kernel $\ker(f)$ of a linear map $f:V\to W$ consists of vectors $\mathbf {v} $ that are mapped to $\mathbf {0} $ in $W.$[31] The kernel and the image $\operatorname {im} (f)=\{f(\mathbf {v} ):\mathbf {v} \in V\}$ are subspaces of $V$ and $W,$ respectively.[32] The existence of kernels and images is part of the statement that the category of vector spaces (over a fixed field $F$) is an abelian category, that is, a corpus of mathematical objects and structure-preserving maps between them (a category) that behaves much like the category of abelian groups.[33] Because of this, many statements such as the first isomorphism theorem (also called rank–nullity theorem in matrix-related terms) $V/\ker(f)\;\equiv \;\operatorname {im} (f)$ and the second and third isomorphism theorem can be formulated and proven in a way very similar to the corresponding statements for groups. An important example is the kernel of a linear map $\mathbf {x} \mapsto A\mathbf {x} $ for some fixed matrix $A,$ as above. The kernel of this map is the subspace of vectors $\mathbf {x} $ such that $A\mathbf {x} =\mathbf {0} ,$ which is precisely the set of solutions to the system of homogeneous linear equations belonging to $A.$
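The dimension count behind the first isomorphism theorem, dim V = dim ker(f) + dim im(f), is easy to verify for a matrix map. A numpy sketch with an arbitrary rank-deficient matrix:

```python
import numpy as np

# Arbitrary illustrative matrix, viewed as a linear map R^4 -> R^3.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 3.0, 1.0, 2.0]])   # third row = first + second

rank = np.linalg.matrix_rank(A)        # dim im(f)
nullity = A.shape[1] - rank            # dim ker(f), by rank-nullity

# dim V = dim ker(f) + dim im(f), matching V/ker(f) ~ im(f)
print(rank, nullity, rank + nullity == A.shape[1])   # 2 2 True
```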
This concept also extends to linear differential equations $a_{0}f+a_{1}{\frac {df}{dx}}+a_{2}{\frac {d^{2}f}{dx^{2}}}+\cdots +a_{n}{\frac {d^{n}f}{dx^{n}}}=0,$ where the coefficients $a_{i}$ are functions in $x,$ too. In the corresponding map $f\mapsto D(f)=\sum _{i=0}^{n}a_{i}{\frac {d^{i}f}{dx^{i}}},$ the derivatives of the function $f$ appear linearly (as opposed to $f^{\prime \prime }(x)^{2},$ for example). Since differentiation is a linear procedure (that is, $(f+g)^{\prime }=f^{\prime }+g^{\prime }$ and $(c\cdot f)^{\prime }=c\cdot f^{\prime }$ for a constant $c$) this assignment is linear, called a linear differential operator. In particular, the solutions to the differential equation $D(f)=0$ form a vector space (over R or C).
Direct product and direct sum
Main articles: Direct product and Direct sum of modules
The direct product of vector spaces and the direct sum of vector spaces are two ways of combining an indexed family of vector spaces into a new vector space. The direct product $\textstyle {\prod _{i\in I}V_{i}}$ of a family of vector spaces $V_{i}$ consists of the set of all tuples $\left(\mathbf {v} _{i}\right)_{i\in I},$ which specify for each index $i$ in some index set $I$ an element $\mathbf {v} _{i}$ of $V_{i}.$[34] Addition and scalar multiplication are performed componentwise. A variant of this construction is the direct sum $ \bigoplus _{i\in I}V_{i}$ (also called coproduct and denoted $ \coprod _{i\in I}V_{i}$), where only tuples with finitely many nonzero vectors are allowed. If the index set $I$ is finite, the two constructions agree, but in general they are different.
Tensor product
Main article: Tensor product of vector spaces
The tensor product $V\otimes _{F}W,$ or simply $V\otimes W,$ of two vector spaces $V$ and $W$ is one of the central notions of multilinear algebra which deals with extending notions such as linear maps to several variables. A map $g:V\times W\to X$ from the Cartesian product $V\times W$ is called bilinear if $g$ is linear in both variables $\mathbf {v} $ and $\mathbf {w} .$ That is to say, for fixed $\mathbf {w} $ the map $\mathbf {v} \mapsto g(\mathbf {v} ,\mathbf {w} )$ is linear in the sense above and likewise for fixed $\mathbf {v} .$
The tensor product is a particular vector space that is a universal recipient of bilinear maps $g,$ as follows. It is defined as the vector space consisting of finite (formal) sums of symbols called tensors $\mathbf {v} _{1}\otimes \mathbf {w} _{1}+\mathbf {v} _{2}\otimes \mathbf {w} _{2}+\cdots +\mathbf {v} _{n}\otimes \mathbf {w} _{n},$ subject to the rules[35] ${\begin{alignedat}{6}a\cdot (\mathbf {v} \otimes \mathbf {w} )~&=~(a\cdot \mathbf {v} )\otimes \mathbf {w} ~=~\mathbf {v} \otimes (a\cdot \mathbf {w} ),&&~~{\text{ where }}a{\text{ is a scalar}}\\(\mathbf {v} _{1}+\mathbf {v} _{2})\otimes \mathbf {w} ~&=~\mathbf {v} _{1}\otimes \mathbf {w} +\mathbf {v} _{2}\otimes \mathbf {w} &&\\\mathbf {v} \otimes (\mathbf {w} _{1}+\mathbf {w} _{2})~&=~\mathbf {v} \otimes \mathbf {w} _{1}+\mathbf {v} \otimes \mathbf {w} _{2}.&&\\\end{alignedat}}$ These rules ensure that the map $f$ from $V\times W$ to $V\otimes W$ that maps a tuple $(\mathbf {v} ,\mathbf {w} )$ to $\mathbf {v} \otimes \mathbf {w} $ is bilinear. The universality states that given any vector space $X$ and any bilinear map $g:V\times W\to X,$ there exists a unique map $u,$ shown in the diagram with a dotted arrow, whose composition with $f$ equals $g:$ $u(\mathbf {v} \otimes \mathbf {w} )=g(\mathbf {v} ,\mathbf {w} ).$[36] This is called the universal property of the tensor product, an instance of the method, much used in advanced abstract algebra, to indirectly define objects by specifying maps from or to this object.
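In coordinates, elementary tensors of $F^{m}\otimes F^{n}$ can be modeled by the Kronecker product of coordinate vectors, and the defining rules above become identities that can be checked numerically. A numpy sketch with arbitrary vectors:

```python
import numpy as np

v1, v2 = np.array([1.0, 2.0]), np.array([0.0, 1.0])
w = np.array([3.0, -1.0, 2.0])
a = 5.0

# The defining rules of the tensor product hold for np.kron:
print(np.allclose(np.kron(a * v1, w), a * np.kron(v1, w)))     # True
print(np.allclose(np.kron(v1 + v2, w),
                  np.kron(v1, w) + np.kron(v2, w)))            # True
```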
Vector spaces with additional structure
From the point of view of linear algebra, vector spaces are completely understood insofar as any vector space over a given field is characterized, up to isomorphism, by its dimension. However, vector spaces per se do not offer a framework to deal with the question, crucial to analysis, whether a sequence of functions converges to another function. Likewise, linear algebra is not adapted to deal with infinite series, since the addition operation allows only finitely many terms to be added. Therefore, the needs of functional analysis require considering additional structures.
A vector space may be given a partial order $\,\leq ,\,$ under which some vectors can be compared.[37] For example, $n$-dimensional real space $\mathbf {R} ^{n}$ can be ordered by comparing its vectors componentwise. Ordered vector spaces, for example Riesz spaces, are fundamental to Lebesgue integration, which relies on the ability to express a function as a difference of two positive functions $f=f^{+}-f^{-},$ where $f^{+}$ denotes the positive part of $f$ and $f^{-}$ the negative part.[38]
Normed vector spaces and inner product spaces
Main articles: Normed vector space and Inner product space
"Measuring" vectors is done by specifying a norm, a datum which measures lengths of vectors, or by an inner product, which measures angles between vectors. Norms and inner products are denoted $|\mathbf {v} |$ and $\langle \mathbf {v} ,\mathbf {w} \rangle ,$ respectively. The datum of an inner product entails that lengths of vectors can be defined too, by defining the associated norm $ |\mathbf {v} |:={\sqrt {\langle \mathbf {v} ,\mathbf {v} \rangle }}.$ Vector spaces endowed with such data are known as normed vector spaces and inner product spaces, respectively.[39] Coordinate space $F^{n}$ can be equipped with the standard dot product: $\langle \mathbf {x} ,\mathbf {y} \rangle =\mathbf {x} \cdot \mathbf {y} =x_{1}y_{1}+\cdots +x_{n}y_{n}.$ In $\mathbf {R} ^{2},$ this reflects the common notion of the angle between two vectors $\mathbf {x} $ and $\mathbf {y} ,$ by the law of cosines: $\mathbf {x} \cdot \mathbf {y} =\cos \left(\angle (\mathbf {x} ,\mathbf {y} )\right)\cdot |\mathbf {x} |\cdot |\mathbf {y} |.$ Because of this, two vectors satisfying $\langle \mathbf {x} ,\mathbf {y} \rangle =0$ are called orthogonal. An important variant of the standard dot product is used in Minkowski space: $\mathbf {R} ^{4}$ endowed with the Lorentz product[40] $\langle \mathbf {x} |\mathbf {y} \rangle =x_{1}y_{1}+x_{2}y_{2}+x_{3}y_{3}-x_{4}y_{4}.$ In contrast to the standard dot product, it is not positive definite: $\langle \mathbf {x} |\mathbf {x} \rangle $ also takes negative values, for example, for $\mathbf {x} =(0,0,0,1).$ Singling out the fourth coordinate, corresponding to time as opposed to three space-dimensions, makes it useful for the mathematical treatment of special relativity.
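The standard dot product, its induced norm, and the angle it defines are one-liners in numpy. A sketch with two arbitrary vectors of $\mathbf {R} ^{2}$:

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([1.0, 1.0])

dot = x @ y                                # standard dot product
norm = lambda v: np.sqrt(v @ v)            # norm induced by the inner product
angle = np.arccos(dot / (norm(x) * norm(y)))

print(norm(y))                  # sqrt(2) ~ 1.4142
print(np.degrees(angle))        # 45.0, by the law of cosines
print(np.isclose(dot, 0.0))     # False: x and y are not orthogonal
```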
Topological vector spaces
Main article: Topological vector space
Convergence questions are treated by considering vector spaces $V$ carrying a compatible topology, a structure that allows one to talk about elements being close to each other.[41][42] Compatible here means that addition and scalar multiplication have to be continuous maps. Roughly, if $\mathbf {x} $ and $\mathbf {y} $ in $V,$ and $a$ in $F$ vary by a bounded amount, then so do $\mathbf {x} +\mathbf {y} $ and $a\mathbf {x} .$[nb 8] To make sense of specifying the amount a scalar changes, the field $F$ also has to carry a topology in this context; common choices are the reals and the complex numbers.
In such topological vector spaces one can consider series of vectors. The infinite sum $\sum _{i=1}^{\infty }f_{i}~=~\lim _{n\to \infty }f_{1}+\cdots +f_{n}$ denotes the limit of the corresponding finite partial sums of the sequence $f_{1},f_{2},\ldots $ of elements of $V.$ For example, the $f_{i}$ could be (real or complex) functions belonging to some function space $V,$ in which case the series is a function series. The mode of convergence of the series depends on the topology imposed on the function space. In such cases, pointwise convergence and uniform convergence are two prominent examples.
A way to ensure the existence of limits of certain infinite series is to restrict attention to spaces where any Cauchy sequence has a limit; such a vector space is called complete. Roughly, a vector space is complete provided that it contains all necessary limits. For example, the vector space of polynomials on the unit interval $[0,1],$ equipped with the topology of uniform convergence, is not complete because any continuous function on $[0,1]$ can be uniformly approximated by a sequence of polynomials, by the Weierstrass approximation theorem.[43] In contrast, the space of all continuous functions on $[0,1]$ with the same topology is complete.[44]
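The Weierstrass approximation theorem can be watched in action: fitting polynomials of increasing degree to a continuous but non-smooth function drives the maximum error down. A numpy sketch (the function, the degrees, and the least-squares Chebyshev fit are illustrative choices, not the best uniform approximation):

```python
import numpy as np

# Least-squares Chebyshev fits of increasing degree to the continuous,
# non-smooth function |x| on [-1, 1]; the maximum sampled error shrinks.
xs = np.linspace(-1.0, 1.0, 2001)
ys = np.abs(xs)

for deg in (4, 16, 64):
    coef = np.polynomial.chebyshev.chebfit(xs, ys, deg)
    err = np.max(np.abs(np.polynomial.chebyshev.chebval(xs, coef) - ys))
    print(deg, err)   # error decreases roughly like 1/deg for |x|
```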
A norm gives rise to a topology by defining that a sequence of vectors $\mathbf {v} _{n}$ converges to $\mathbf {v} $ if and only if $\lim _{n\to \infty }|\mathbf {v} _{n}-\mathbf {v} |=0.$ Banach and Hilbert spaces are complete topological vector spaces whose topologies are given, respectively, by a norm and an inner product. Their study, a key piece of functional analysis, focuses on infinite-dimensional vector spaces, since all norms on finite-dimensional topological vector spaces give rise to the same notion of convergence.[45] The image at the right shows the equivalence of the $1$-norm and $\infty $-norm on $\mathbf {R} ^{2}:$ as the unit "balls" enclose each other, a sequence converges to zero in one norm if and only if it does so in the other norm. In the infinite-dimensional case, however, there will generally be inequivalent topologies, which makes the study of topological vector spaces richer than that of vector spaces without additional data.
From a conceptual point of view, all notions related to topological vector spaces should match the topology. For example, instead of considering all linear maps (also called functionals) $V\to W,$ maps between topological vector spaces are required to be continuous.[46] In particular, the (topological) dual space $V^{*}$ consists of continuous functionals $V\to \mathbf {R} $ (or to $\mathbf {C} $). The fundamental Hahn–Banach theorem is concerned with separating subspaces of appropriate topological vector spaces by continuous functionals.[47]
Banach spaces
Main article: Banach space
Banach spaces, introduced by Stefan Banach, are complete normed vector spaces.[48] A first example is the vector space $\ell ^{p}$ consisting of infinite vectors with real entries $\mathbf {x} =\left(x_{1},x_{2},\ldots ,x_{n},\ldots \right)$ whose $p$-norm $(1\leq p\leq \infty ),$ given by $\|\mathbf {x} \|_{\infty }:=\sup _{i}|x_{i}|\qquad {\text{ for }}p=\infty ,{\text{ and }}$ $\|\mathbf {x} \|_{p}:=\left(\sum _{i}|x_{i}|^{p}\right)^{\frac {1}{p}}\qquad {\text{ for }}p<\infty ,$ is finite. The topologies on the infinite-dimensional space $\ell ^{p}$ are inequivalent for different $p.$ For example, the sequence of vectors $\mathbf {x} _{n}=\left(2^{-n},2^{-n},\ldots ,2^{-n},0,0,\ldots \right),$ in which the first $2^{n}$ components are $2^{-n}$ and the following ones are $0,$ converges to the zero vector for $p=\infty ,$ but does not for $p=1:$ $\|\mathbf {x} _{n}\|_{\infty }=\sup(2^{-n},0)=2^{-n}\to 0,$ but $\|\mathbf {x} _{n}\|_{1}=\sum _{i=1}^{2^{n}}2^{-n}=2^{n}\cdot 2^{-n}=1.$
More generally than sequences of real numbers, functions $f:\Omega \to \mathbb {R} $ are endowed with a norm that replaces the above sum by the Lebesgue integral $\|f\|_{p}:=\left(\int _{\Omega }|f(x)|^{p}\,{d\mu (x)}\right)^{\frac {1}{p}}.$ The spaces of integrable functions on a given domain $\Omega $ (for example an interval) satisfying $\|f\|_{p}<\infty ,$ and equipped with this norm, are called Lebesgue spaces, denoted $L^{\;\!p}(\Omega ).$[nb 9] These spaces are complete.[49] (If one uses the Riemann integral instead, the space is not complete, which may be seen as a justification for Lebesgue's integration theory.[nb 10]) Concretely this means that for any sequence of Lebesgue-integrable functions $f_{1},f_{2},\ldots ,f_{n},\ldots $ with $\|f_{n}\|_{p}<\infty ,$ satisfying the condition $\lim _{k,\ n\to \infty }\int _{\Omega }\left|f_{k}(x)-f_{n}(x)\right|^{p}\,{d\mu (x)}=0$ there exists a function $f(x)$ belonging to the vector space $L^{\;\!p}(\Omega )$ such that $\lim _{k\to \infty }\int _{\Omega }\left|f(x)-f_{k}(x)\right|^{p}\,{d\mu (x)}=0.$ Imposing boundedness conditions not only on the function, but also on its derivatives leads to Sobolev spaces.[50]
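The computation for the sequence $\mathbf {x} _{n}$ above is easy to reproduce, truncating each vector to its nonzero entries. A numpy sketch:

```python
import numpy as np

# x_n from the text: the first 2^n entries equal 2^(-n), the rest are 0.
for n in (1, 4, 8, 12):
    x = np.full(2**n, 2.0**-n)
    print(n, np.max(np.abs(x)), np.sum(np.abs(x)))
    # the sup-norm 2^(-n) tends to 0, while the 1-norm stays exactly 1
```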
Hilbert spaces
Main article: Hilbert space
Complete inner product spaces are known as Hilbert spaces, in honor of David Hilbert.[51] The Hilbert space $L^{2}(\Omega ),$ with inner product given by $\langle f\ ,\ g\rangle =\int _{\Omega }f(x){\overline {g(x)}}\,dx,$ where ${\overline {g(x)}}$ denotes the complex conjugate of $g(x),$[52][nb 11] is a key case. By definition, in a Hilbert space any Cauchy sequence converges to a limit. Conversely, finding a sequence of functions $f_{n}$ with desirable properties that approximates a given limit function is equally crucial. Early analysis, in the guise of the Taylor approximation, established an approximation of differentiable functions $f$ by polynomials.[53] By the Stone–Weierstrass theorem, every continuous function on $[a,b]$ can be approximated as closely as desired by a polynomial.[54] A similar approximation technique by trigonometric functions is commonly called Fourier expansion, and is much applied in engineering, see below. More generally, and more conceptually, the theorem yields a simple description of what "basic functions", or, in abstract Hilbert spaces, what basic vectors suffice to generate a Hilbert space $H,$ in the sense that the closure of their span (that is, finite linear combinations and limits of those) is the whole space. Such a set of functions is called a basis of $H$; its cardinality is known as the Hilbert space dimension.[nb 12] Not only does the theorem exhibit suitable basis functions as sufficient for approximation purposes, but together with the Gram–Schmidt process, it also enables one to construct a basis of orthogonal vectors.[55] Such orthogonal bases are the Hilbert space generalization of the coordinate axes in finite-dimensional Euclidean space.
The solutions to various differential equations can be interpreted in terms of Hilbert spaces. For example, a great many fields in physics and engineering lead to such equations, and frequently solutions with particular physical properties are used as basis functions, often orthogonal.[56] As an example from physics, the time-dependent Schrödinger equation in quantum mechanics describes the change of physical properties in time by means of a partial differential equation, whose solutions are called wavefunctions.[57] Definite values for physical properties such as energy, or momentum, correspond to eigenvalues of a certain (linear) differential operator and the associated wavefunctions are called eigenstates. The spectral theorem decomposes a linear compact operator acting on functions in terms of these eigenfunctions and their eigenvalues.[58]
Algebras over fields
Main articles: Algebra over a field and Lie algebra
General vector spaces do not possess a multiplication between vectors. A vector space equipped with an additional bilinear operator defining the multiplication of two vectors is an algebra over a field.[59] Many algebras stem from functions on some geometrical object: since functions with values in a given field can be multiplied pointwise, these entities form algebras. The Stone–Weierstrass theorem, for example, relies on Banach algebras which are both Banach spaces and algebras. Commutative algebra makes great use of rings of polynomials in one or several variables, introduced above. Their multiplication is both commutative and associative. These rings and their quotients form the basis of algebraic geometry, because they are rings of functions of algebraic geometric objects.[60] Another crucial example are Lie algebras, which are neither commutative nor associative, but the failure to be so is limited by the constraints ($[x,y]$ denotes the product of $x$ and $y$):
• $[x,y]=-[y,x]$ (anticommutativity), and
• $[x,[y,z]]+[y,[z,x]]+[z,[x,y]]=0$ (Jacobi identity).[61]
Examples include the vector space of $n$-by-$n$ matrices, with $[x,y]=xy-yx,$ the commutator of two matrices, and $\mathbf {R} ^{3},$ endowed with the cross product.
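The Lie algebra axioms for the matrix commutator can be checked numerically. A numpy sketch with arbitrary random 3-by-3 matrices:

```python
import numpy as np
rng = np.random.default_rng(0)

# Three arbitrary 3x3 matrices; the bracket is the commutator.
x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))
bracket = lambda a, b: a @ b - b @ a

# Anticommutativity: [x, y] = -[y, x]
print(np.allclose(bracket(x, y), -bracket(y, x)))        # True

# Jacobi identity: [x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0
jac = (bracket(x, bracket(y, z)) + bracket(y, bracket(z, x))
       + bracket(z, bracket(x, y)))
print(np.allclose(jac, 0))                               # True
```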
The tensor algebra $\operatorname {T} (V)$ is a formal way of adding products to any vector space $V$ to obtain an algebra.[62] As a vector space, it is spanned by symbols, called simple tensors $\mathbf {v} _{1}\otimes \mathbf {v} _{2}\otimes \cdots \otimes \mathbf {v} _{n},$ where the degree $n$ varies. The multiplication is given by concatenating such symbols, imposing the distributive law under addition, and requiring that scalar multiplication commute with the tensor product ⊗, much the same way as with the tensor product of two vector spaces introduced above. In general, there are no relations between $\mathbf {v} _{1}\otimes \mathbf {v} _{2}$ and $\mathbf {v} _{2}\otimes \mathbf {v} _{1}.$ Forcing two such elements to be equal leads to the symmetric algebra, whereas forcing $\mathbf {v} _{1}\otimes \mathbf {v} _{2}=-\mathbf {v} _{2}\otimes \mathbf {v} _{1}$ yields the exterior algebra.[63] When a field $F$ is explicitly stated, a common term used is $F$-algebra.
Related structures
Vector bundles
Main articles: Vector bundle and Tangent bundle
A vector bundle is a family of vector spaces parametrized continuously by a topological space X.[64] More precisely, a vector bundle over X is a topological space E equipped with a continuous map $\pi :E\to X$ such that for every x in X, the fiber π−1(x) is a vector space. The case dim V = 1 is called a line bundle. For any vector space V, the projection X × V → X makes the product X × V into a "trivial" vector bundle. Vector bundles over X are required to be locally a product of X and some (fixed) vector space V: for every x in X, there is a neighborhood U of x such that the restriction of π to π−1(U) is isomorphic[nb 13] to the trivial bundle U × V → U. Despite their locally trivial character, vector bundles may (depending on the shape of the underlying space X) be "twisted" in the large (that is, the bundle need not be (globally isomorphic to) the trivial bundle X × V). For example, the Möbius strip can be seen as a line bundle over the circle S1 (by identifying open intervals with the real line). It is, however, different from the cylinder S1 × R, because the latter is orientable whereas the former is not.[65]
Properties of certain vector bundles provide information about the underlying topological space. For example, the tangent bundle consists of the collection of tangent spaces parametrized by the points of a differentiable manifold. The tangent bundle of the circle S1 is globally isomorphic to S1 × R, since there is a global nonzero vector field on S1.[nb 14] In contrast, by the hairy ball theorem, there is no (tangent) vector field on the 2-sphere S2 which is everywhere nonzero.[66] K-theory studies the isomorphism classes of all vector bundles over some topological space.[67] In addition to deepening topological and geometrical insight, it has purely algebraic consequences, such as the classification of finite-dimensional real division algebras: R, C, the quaternions H and the octonions O. The cotangent bundle of a differentiable manifold consists, at every point of the manifold, of the dual of the tangent space, the cotangent space. Sections of that bundle are known as differential one-forms.
Modules
Main article: Module
Modules are to rings what vector spaces are to fields: the same axioms, applied to a ring R instead of a field F, yield modules.[68] The theory of modules, compared to that of vector spaces, is complicated by the presence of ring elements that do not have multiplicative inverses. For example, modules need not have bases, as the Z-module (that is, abelian group) Z/2Z shows; those modules that do (including all vector spaces) are known as free modules. Nevertheless, a vector space can be compactly defined as a module over a ring which is a field, with the elements being called vectors. Some authors use the term vector space to mean modules over a division ring.[69] The algebro-geometric interpretation of commutative rings via their spectrum allows the development of concepts such as locally free modules, the algebraic counterpart to vector bundles.
Affine and projective spaces
Main articles: Affine space and Projective space
Roughly, affine spaces are vector spaces whose origins are not specified.[70] More precisely, an affine space is a set with a free transitive vector space action. In particular, a vector space is an affine space over itself, by the map $V\times V\to V,\;(\mathbf {v} ,\mathbf {a} )\mapsto \mathbf {a} +\mathbf {v} .$ If W is a vector space, then an affine subspace is a subset of W obtained by translating a linear subspace V by a fixed vector x ∈ W; this space is denoted by x + V (it is a coset of V in W) and consists of all vectors of the form x + v for v ∈ V. An important example is the space of solutions of a system of inhomogeneous linear equations $A\mathbf {v} =\mathbf {b} $ generalizing the homogeneous case above, which can be recovered by setting $\mathbf {b} =\mathbf {0} $ in this equation.[71] The space of solutions is the affine subspace x + V where x is a particular solution of the equation, and V is the space of solutions of the homogeneous equation (the nullspace of A); see the numerical sketch below. The set of one-dimensional subspaces of a fixed finite-dimensional vector space V is known as projective space; it may be used to formalize the idea of parallel lines intersecting at infinity.[72] Grassmannians and flag manifolds generalize this by parametrizing linear subspaces of fixed dimension k and flags of subspaces, respectively.
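The affine structure of the solution set of $A\mathbf {v} =\mathbf {b} $ is visible numerically: adding any null-space vector to a particular solution gives another solution. A numpy sketch reusing the coefficient matrix from the linear-equations example above, with an arbitrary right-hand side b:

```python
import numpy as np

A = np.array([[1.0, 3.0, 1.0],
              [4.0, 2.0, 2.0]])
b = np.array([1.0, 2.0])

x_part, *_ = np.linalg.lstsq(A, b, rcond=None)   # a particular solution

# One basis vector of the null space (via the SVD, as before).
_, s, Vt = np.linalg.svd(A)
v_null = Vt[-1]

# Every x_part + t * v_null solves the inhomogeneous equation:
for t in (0.0, 1.0, -3.5):
    print(np.allclose(A @ (x_part + t * v_null), b))   # True
```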
Related concepts
Specific vectors in a vector space
• Zero vector (sometimes also called null vector and denoted by $\mathbf {0} $), the additive identity in a vector space. In a normed vector space, it is the unique vector of norm zero. In a Euclidean vector space, it is the unique vector of length zero.[73]
• Basis vector, an element of a given basis of a vector space.
• Unit vector, a vector in a normed vector space whose norm is 1, or a Euclidean vector of length one.[73]
• Isotropic vector or null vector, in a vector space with a quadratic form, a non-zero vector for which the form is zero. If a null vector exists, the quadratic form is said to be an isotropic quadratic form.
Vectors in specific vector spaces
• Column vector, a matrix with only one column. The column vectors with a fixed number of rows form a vector space.
• Row vector, a matrix with only one row. The row vectors with a fixed number of columns form a vector space.
• Coordinate vector, the n-tuple of the coordinates of a vector on a basis of n elements. For a vector space over a field F, these n-tuples form the vector space $F^{n}$ (where the operations are componentwise addition and scalar multiplication).
• Displacement vector, a vector that specifies the change in position of a point relative to a previous position. Displacement vectors belong to the vector space of translations.
• Position vector of a point, the displacement vector from a reference point (called the origin) to the point. A position vector represents the position of a point in a Euclidean space or an affine space.
• Velocity vector, the derivative, with respect to time, of the position vector. It does not depend on the choice of the origin and thus belongs to the vector space of translations.
• Pseudovector, also called axial vector.
• Covector, an element of the dual of a vector space. In an inner product space, the inner product defines an isomorphism between the space and its dual, which may make it difficult to distinguish a covector from a vector. The distinction becomes apparent when one changes coordinates (non-orthogonally).
• Tangent vector, an element of the tangent space of a curve, a surface or, more generally, a differential manifold at a given point (these tangent spaces are naturally endowed with a structure of vector space).
• Normal vector or simply normal, in a Euclidean space or, more generally, in an inner product space, a vector that is perpendicular to a tangent space at a point.
• Gradient, the coordinate vector of the partial derivatives of a function of several real variables. In a Euclidean space the gradient gives the magnitude and direction of maximum increase of a scalar field. The gradient is a covector that is normal to a level curve.
• Four-vector, in the theory of relativity, a vector in a four-dimensional real vector space called Minkowski space.
See also
• Vector (mathematics and physics), for a list of various kinds of vectors
• Cartesian coordinate system
• Graded vector space
• Metric space
• P-vector
• Riesz–Fischer theorem
• Space (mathematics)
• Ordered vector space
Notes
1. It is also common, especially in physics, to denote vectors with an arrow on top: ${\vec {v}}.$ It is also common, especially in higher mathematics, to not use any typographical method for distinguishing vectors from other mathematical objects.
2. Scalar multiplication is not to be confused with the scalar product, which is an additional operation on some specific vector spaces, called inner product spaces. Scalar multiplication is a multiplication of a vector by a scalar that produces a vector, while the scalar product is a multiplication of two vectors that produces a scalar.
3. This axiom is not an associative property, since it refers to two different operations, scalar multiplication and field multiplication. So, it is independent of the associativity of field multiplication, which is assumed by field axioms.
4. The nomenclature derives from German "eigen", which means own or proper.
5. See also Jordan–Chevalley decomposition.
6. This is typically the case when a vector space is also considered as an affine space. In this case, a linear subspace contains the zero vector, while an affine subspace does not necessarily contain it.
7. Some authors (such as Roman 2005) choose to start with this equivalence relation and derive the concrete shape of $V/W$ from this.
8. This requirement implies that the topology gives rise to a uniform structure, Bourbaki 1989, ch. II
9. The triangle inequality for $\|f+g\|_{p}\leq \|f\|_{p}+\|g\|_{p}$ is provided by the Minkowski inequality. For technical reasons, in the context of functions one has to identify functions that agree almost everywhere to get a norm, and not only a seminorm.
10. "Many functions in $L^{2}$ of Lebesgue measure, being unbounded, cannot be integrated with the classical Riemann integral. So spaces of Riemann integrable functions would not be complete in the $L^{2}$ norm, and the orthogonal decomposition would not apply to them. This shows one of the advantages of Lebesgue integration.", Dudley 1989, §5.3, p. 125
11. For $p\neq 2,$ $L^{p}(\Omega )$ is not a Hilbert space.
12. A basis of a Hilbert space is not the same thing as a basis in the sense of linear algebra above. For distinction, the latter is then called a Hamel basis.
13. That is, there is a homeomorphism from π−1(U) to V × U which restricts to linear isomorphisms between fibers.
14. A line bundle, such as the tangent bundle of S1 is trivial if and only if there is a section that vanishes nowhere, see Husemoller 1994, Corollary 8.3.
The sections of the tangent bundle are just vector fields. Citations 1. Roman 2005, ch. 1, p. 27 2. Bourbaki 1969, ch. "Algèbre linéaire et algèbre multilinéaire", pp. 78–91. 3. Bolzano 1804. 4. Dorier (1995) 5. Hamilton 1853. 6. Grassmann 2000. 7. Peano 1888, ch. IX. 8. Guo, Hongyu (2021-06-16). What Are Tensors Exactly?. World Scientific. ISBN 978-981-12-4103-1. 9. Banach 1922. 10. Dorier 1995, Moore 1995. 11. Lang 1987, ch. I.1 12. Lang 2002, ch. V.1 13. Lang 1993, ch. XII.3., p. 335 14. Lang 1987, ch. VI.3. 15. Roman 2005, ch. 2, p. 45 16. Lang 1987, ch. IV.4, Corollary, p. 106 17. Lang 1987, Example IV.2.6 18. Lang 1987, ch. VI.6 19. Halmos 1974, p. 28, Ex. 9 20. Lang 1987, Theorem IV.2.1, p. 95 21. Roman 2005, Th. 2.5 and 2.6, p. 49 22. Lang 1987, ch. V.1 23. Lang 1987, ch. V.3., Corollary, p. 106 24. Lang 1987, Theorem VII.9.8, p. 198 25. Roman 2005, ch. 8, p. 135–156 26. Lang 1987, ch. IX.4 27. Roman 2005, ch. 8, p. 140. 28. Roman 2005, ch. 1, p. 29 29. Roman 2005, ch. 1, p. 35 30. Roman 2005, ch. 3, p. 64 31. Lang 1987, ch. IV.3. 32. Roman 2005, ch. 2, p. 48 33. Mac Lane 1998 34. Roman 2005, ch. 1, pp. 31–32 35. Lang 2002, ch. XVI.1 36. Roman 2005, Th. 14.3. See also Yoneda lemma. 37. Schaefer & Wolff 1999, pp. 204–205 38. Bourbaki 2004, ch. 2, p. 48 39. Roman 2005, ch. 9 40. Naber 2003, ch. 1.2 41. Treves 1967 42. Bourbaki 1987 43. Kreyszig 1989, §4.11-5 44. Kreyszig 1989, §1.5-5 45. Choquet 1966, Proposition III.7.2 46. Treves 1967, p. 34–36 47. Lang 1983, Cor. 4.1.2, p. 69 48. Treves 1967, ch. 11 49. Treves 1967, Theorem 11.2, p. 102 50. Evans 1998, ch. 5 51. Treves 1967, ch. 12 52. Dennery & Krzywicki 1996, p.190 53. Lang 1993, Th. XIII.6, p. 349 54. Lang 1993, Th. III.1.1 55. Choquet 1966, Lemma III.16.11 56. Kreyszig 1999, Chapter 11 57. Griffiths 1995, Chapter 1 58. Lang 1993, ch. XVII.3 59. Lang 2002, ch. III.1, p. 121 60. Eisenbud 1995, ch. 1.6 61. Varadarajan 1974 62. Lang 2002, ch. XVI.7 63. Lang 2002, ch. XVI.8 64. Spivak 1999, ch. 3 65. Kreyszig 1991, §34, p. 108 66. Eisenberg & Guy 1979 67. Atiyah 1989 68. Artin 1991, ch. 12 69. Grillet, Pierre Antoine. Abstract algebra. Vol. 242. Springer Science & Business Media, 2007. 70. Meyer 2000, Example 5.13.5, p. 436 71. Meyer 2000, Exercise 5.13.15–17, p. 442 72. Coxeter 1987 73. Weisstein, Eric W. "Vector". mathworld.wolfram.com. Retrieved 2020-08-19. References Algebra • Artin, Michael (1991), Algebra, Prentice Hall, ISBN 978-0-89871-510-1 • Blass, Andreas (1984), "Existence of bases implies the axiom of choice" (PDF), Axiomatic set theory (Boulder, Colorado, 1983), Contemporary Mathematics, vol. 31, Providence, R.I.: American Mathematical Society, pp. 31–33, MR 0763890 • Brown, William A. (1991), Matrices and vector spaces, New York: M. Dekker, ISBN 978-0-8247-8419-5 • Lang, Serge (1987), Linear algebra, Berlin, New York: Springer-Verlag, ISBN 978-0-387-96412-6 • Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, vol. 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR 1878556 • Mac Lane, Saunders (1999), Algebra (3rd ed.), pp. 193–222, ISBN 978-0-8218-1646-2 • Meyer, Carl D. (2000), Matrix Analysis and Applied Linear Algebra, SIAM, ISBN 978-0-89871-454-8 • Roman, Steven (2005), Advanced Linear Algebra, Graduate Texts in Mathematics, vol. 
135 (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-24766-3 • Spindler, Karlheinz (1993), Abstract Algebra with Applications: Volume 1: Vector spaces and groups, CRC, ISBN 978-0-8247-9144-5 • van der Waerden, Bartel Leendert (1993), Algebra (in German) (9th ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-56799-8 Analysis • Bourbaki, Nicolas (1987), Topological vector spaces, Elements of mathematics, Berlin, New York: Springer-Verlag, ISBN 978-3-540-13627-9 • Bourbaki, Nicolas (2004), Integration I, Berlin, New York: Springer-Verlag, ISBN 978-3-540-41129-1 • Braun, Martin (1993), Differential equations and their applications: an introduction to applied mathematics, Berlin, New York: Springer-Verlag, ISBN 978-0-387-97894-9 • BSE-3 (2001) [1994], "Tangent plane", Encyclopedia of Mathematics, EMS Press • Choquet, Gustave (1966), Topology, Boston, MA: Academic Press • Dennery, Philippe; Krzywicki, Andre (1996), Mathematics for Physicists, Courier Dover Publications, ISBN 978-0-486-69193-0 • Dudley, Richard M. (1989), Real analysis and probability, The Wadsworth & Brooks/Cole Mathematics Series, Pacific Grove, CA: Wadsworth & Brooks/Cole Advanced Books & Software, ISBN 978-0-534-10050-6 • Dunham, William (2005), The Calculus Gallery, Princeton University Press, ISBN 978-0-691-09565-3 • Evans, Lawrence C. (1998), Partial differential equations, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0772-9 • Folland, Gerald B. (1992), Fourier Analysis and Its Applications, Brooks-Cole, ISBN 978-0-534-17094-3 • Gasquet, Claude; Witomski, Patrick (1999), Fourier Analysis and Applications: Filtering, Numerical Computation, Wavelets, Texts in Applied Mathematics, New York: Springer-Verlag, ISBN 978-0-387-98485-8 • Ifeachor, Emmanuel C.; Jervis, Barrie W. (2001), Digital Signal Processing: A Practical Approach (2nd ed.), Harlow, Essex, England: Prentice-Hall (published 2002), ISBN 978-0-201-59619-9 • Krantz, Steven G. (1999), A Panorama of Harmonic Analysis, Carus Mathematical Monographs, Washington, DC: Mathematical Association of America, ISBN 978-0-88385-031-2 • Kreyszig, Erwin (1988), Advanced Engineering Mathematics (6th ed.), New York: John Wiley & Sons, ISBN 978-0-471-85824-9 • Kreyszig, Erwin (1989), Introductory functional analysis with applications, Wiley Classics Library, New York: John Wiley & Sons, ISBN 978-0-471-50459-7, MR 0992618 • Lang, Serge (1983), Real analysis, Addison-Wesley, ISBN 978-0-201-14179-5 • Lang, Serge (1993), Real and functional analysis, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94001-4 • Loomis, Lynn H. (1953), An introduction to abstract harmonic analysis, The University series in higher mathematics, Toronto-New York–London: D. Van Nostrand Company, Inc., pp. x+190, hdl:2027/uc1.b4250788 • Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834. • Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135. 
• Treves, François (1967), Topological vector spaces, distributions and kernels, Boston, MA: Academic Press Historical references • Banach, Stefan (1922), "Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales (On operations in abstract sets and their application to integral equations)" (PDF), Fundamenta Mathematicae (in French), 3: 133–181, doi:10.4064/fm-3-1-133-181, ISSN 0016-2736 • Bolzano, Bernard (1804), Betrachtungen über einige Gegenstände der Elementargeometrie (Considerations of some aspects of elementary geometry) (in German) • Bellavitis, Giuso (1833), "Sopra alcune applicazioni di un nuovo metodo di geometria analitica", Il poligrafo giornale di scienze, lettre ed arti, Verona, 13: 53–61. • Bourbaki, Nicolas (1969), Éléments d'histoire des mathématiques (Elements of history of mathematics) (in French), Paris: Hermann • Dorier, Jean-Luc (1995), "A general outline of the genesis of vector space theory", Historia Mathematica, 22 (3): 227–261, doi:10.1006/hmat.1995.1024, MR 1347828 • Fourier, Jean Baptiste Joseph (1822), Théorie analytique de la chaleur (in French), Chez Firmin Didot, père et fils • Grassmann, Hermann (1844), Die Lineale Ausdehnungslehre - Ein neuer Zweig der Mathematik (in German), O. Wigand, reprint: Grassmann, Hermann (2000), Kannenberg, L.C. (ed.), Extension Theory, translated by Kannenberg, Lloyd C., Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2031-5 • Hamilton, William Rowan (1853), Lectures on Quaternions, Royal Irish Academy • Möbius, August Ferdinand (1827), Der Barycentrische Calcul : ein neues Hülfsmittel zur analytischen Behandlung der Geometrie (Barycentric calculus: a new utility for an analytic treatment of geometry) (in German), archived from the original on 2006-11-23 • Moore, Gregory H. (1995), "The axiomatization of linear algebra: 1875–1940", Historia Mathematica, 22 (3): 262–303, doi:10.1006/hmat.1995.1025 • Peano, Giuseppe (1888), Calcolo Geometrico secondo l'Ausdehnungslehre di H. Grassmann preceduto dalle Operazioni della Logica Deduttiva (in Italian), Turin{{citation}}: CS1 maint: location missing publisher (link) • Peano, G. (1901) Formulario mathematico: vct axioms via Internet Archive Further references • Ashcroft, Neil; Mermin, N. David (1976), Solid State Physics, Toronto: Thomson Learning, ISBN 978-0-03-083993-1 • Atiyah, Michael Francis (1989), K-theory, Advanced Book Classics (2nd ed.), Addison-Wesley, ISBN 978-0-201-09394-0, MR 1043170 • Bourbaki, Nicolas (1998), Elements of Mathematics : Algebra I Chapters 1-3, Berlin, New York: Springer-Verlag, ISBN 978-3-540-64243-5 • Bourbaki, Nicolas (1989), General Topology. Chapters 1-4, Berlin, New York: Springer-Verlag, ISBN 978-3-540-64241-1 • Coxeter, Harold Scott MacDonald (1987), Projective Geometry (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-96532-1 • Eisenberg, Murray; Guy, Robert (1979), "A proof of the hairy ball theorem", The American Mathematical Monthly, 86 (7): 572–574, doi:10.2307/2320587, JSTOR 2320587 • Eisenbud, David (1995), Commutative algebra, Graduate Texts in Mathematics, vol. 150, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94269-8, MR 1322960 • Goldrei, Derek (1996), Classic Set Theory: A guided independent study (1st ed.), London: Chapman and Hall, ISBN 978-0-412-60610-6 • Griffiths, David J. (1995), Introduction to Quantum Mechanics, Upper Saddle River, NJ: Prentice Hall, ISBN 978-0-13-124405-4 • Halmos, Paul R. 
(1974), Finite-dimensional vector spaces, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90093-3 • Halpern, James D. (Jun 1966), "Bases in Vector Spaces and the Axiom of Choice", Proceedings of the American Mathematical Society, 17 (3): 670–673, doi:10.2307/2035388, JSTOR 2035388 • Hughes-Hallett, Deborah; McCallum, William G.; Gleason, Andrew M. (2013), Calculus : Single and Multivariable (6 ed.), John Wiley & Sons, ISBN 978-0470-88861-2 • Husemoller, Dale (1994), Fibre Bundles (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-94087-8 • Jost, Jürgen (2005), Riemannian Geometry and Geometric Analysis (4th ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-25907-7 • Kreyszig, Erwin (1991), Differential geometry, New York: Dover Publications, pp. xiv+352, ISBN 978-0-486-66721-8 • Kreyszig, Erwin (1999), Advanced Engineering Mathematics (8th ed.), New York: John Wiley & Sons, ISBN 978-0-471-15496-9 • Luenberger, David (1997), Optimization by vector space methods, New York: John Wiley & Sons, ISBN 978-0-471-18117-0 • Mac Lane, Saunders (1998), Categories for the Working Mathematician (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-98403-2 • Misner, Charles W.; Thorne, Kip; Wheeler, John Archibald (1973), Gravitation, W. H. Freeman, ISBN 978-0-7167-0344-0 • Naber, Gregory L. (2003), The geometry of Minkowski spacetime, New York: Dover Publications, ISBN 978-0-486-43235-9, MR 2044239 • Schönhage, A.; Strassen, Volker (1971), "Schnelle Multiplikation großer Zahlen (Fast multiplication of big numbers)", Computing (in German), 7 (3–4): 281–292, doi:10.1007/bf02242355, ISSN 0010-485X, S2CID 9738629 • Spivak, Michael (1999), A Comprehensive Introduction to Differential Geometry (Volume Two), Houston, TX: Publish or Perish • Stewart, Ian (1975), Galois Theory, Chapman and Hall Mathematics Series, London: Chapman and Hall, ISBN 978-0-412-10800-6 • Varadarajan, V. S. (1974), Lie groups, Lie algebras, and their representations, Prentice Hall, ISBN 978-0-13-535732-3 • Wallace, G.K. (Feb 1992), "The JPEG still picture compression standard" (PDF), IEEE Transactions on Consumer Electronics, 38 (1): xviii–xxxiv, CiteSeerX 10.1.1.318.4292, doi:10.1109/30.125072, ISSN 0098-3063, archived from the original (PDF) on 2007-01-13, retrieved 2017-10-25 • Weibel, Charles A. (1994). An introduction to homological algebra. Cambridge Studies in Advanced Mathematics. Vol. 38. Cambridge University Press. ISBN 978-0-521-55987-4. MR 1269324. OCLC 36131259. 
External links
The Wikibook Linear Algebra has a page on the topic of: Real vector spaces
The Wikibook Linear Algebra has a page on the topic of: Vector spaces
• "Vector space", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Vector logic Vector logic[1][2] is an algebraic model of elementary logic based on matrix algebra. Vector logic assumes that the truth values map onto vectors, and that the monadic and dyadic operations are executed by matrix operators. "Vector logic" has also been used to refer to the representation of classical propositional logic as a vector space,[3][4] in which the unit vectors are propositional variables. Predicate logic can be represented as a vector space of the same type in which the axes represent the predicate letters $S$ and $P$.[5] In the vector space for propositional logic the origin represents the false, F, and the infinite periphery represents the true, T, whereas in the space for predicate logic the origin represents "nothing" and the periphery represents the flight from nothing, or "something". Overview Classic binary logic is represented by a small set of mathematical functions depending on one (monadic) or two (dyadic) variables. In the binary set, the value 1 corresponds to true and the value 0 to false. A two-valued vector logic requires a correspondence between the truth-values true (t) and false (f), and two q-dimensional normalized real-valued column vectors s and n, hence: $t\mapsto s$    and    $f\mapsto n$ (where $q\geq 2$ is an arbitrary natural number, and "normalized" means that the length of the vector is 1; usually s and n are orthogonal vectors). This correspondence generates a space of vector truth-values: V2 = {s,n}. The basic logical operations defined using this set of vectors lead to matrix operators. The operations of vector logic are based on the scalar product between q-dimensional column vectors: $u^{T}v=\langle u,v\rangle $: the orthonormality between vectors s and n implies that $\langle u,v\rangle =1$ if $u=v$, and $\langle u,v\rangle =0$ if $u\neq v$, where $u,v\in \{s,n\}$. Monadic operators The monadic operators result from the application $Mon:V_{2}\to V_{2}$, and the associated matrices have q rows and q columns. The two basic monadic operators for this two-valued vector logic are the identity and the negation: • Identity: A logical identity ID(p) is represented by the matrix $I=ss^{T}+nn^{T}$. This matrix operates as follows: Ip = p, p ∈ V2; due to the orthogonality of s with respect to n, we have $Is=ss^{T}s+nn^{T}s=s\langle s,s\rangle +n\langle n,s\rangle =s$, and similarly $In=n$. It is important to note that this vector logic identity matrix is not generally an identity matrix in the sense of matrix algebra. • Negation: A logical negation ¬p is represented by the matrix $N=ns^{T}+sn^{T}$. Consequently, Ns = n and Nn = s. The involutory behavior of the logical negation, namely that ¬(¬p) equals p, corresponds with the fact that $N^{2}=I$. Dyadic operators The 16 two-valued dyadic operators correspond to functions of the type $Dyad:V_{2}\otimes V_{2}\to V_{2}$; the dyadic matrices have q rows and $q^{2}$ columns. The matrices that execute these dyadic operations are based on the properties of the Kronecker product. Two properties of this product are essential for the formalism of vector logic: 1. The mixed-product property If A, B, C and D are matrices of such size that one can form the matrix products AC and BD, then $(A\otimes B)(C\otimes D)=AC\otimes BD$ 2. Distributive transpose The operation of transposition is distributive over the Kronecker product: $(A\otimes B)^{T}=A^{T}\otimes B^{T}.$ Using these properties, expressions for dyadic logic functions can be obtained: • Conjunction.
The conjunction (p∧q) is executed by a matrix that acts on two vector truth-values: $C(u\otimes v)$. This matrix reproduces the features of the classical conjunction truth-table in its formulation: $C=s(s\otimes s)^{T}+n(s\otimes n)^{T}+n(n\otimes s)^{T}+n(n\otimes n)^{T}$ and verifies $C(s\otimes s)=s,$ and $C(s\otimes n)=C(n\otimes s)=C(n\otimes n)=n.$ • Disjunction. The disjunction (p∨q) is executed by the matrix $D=s(s\otimes s)^{T}+s(s\otimes n)^{T}+s(n\otimes s)^{T}+n(n\otimes n)^{T},$ resulting in $D(s\otimes s)=D(s\otimes n)=D(n\otimes s)=s$ and $D(n\otimes n)=n.$ • Implication. The implication corresponds in classical logic to the expression p → q ≡ ¬p ∨ q. The vector logic version of this equivalence leads to a matrix that represents this implication in vector logic: $L=D(N\otimes I)$. The explicit expression for this implication is: $L=s(s\otimes s)^{T}+n(s\otimes n)^{T}+s(n\otimes s)^{T}+s(n\otimes n)^{T},$ and the properties of classical implication are satisfied: $L(s\otimes s)=L(n\otimes s)=L(n\otimes n)=s$ and $L(s\otimes n)=n.$ • Equivalence and Exclusive or. In vector logic the equivalence p≡q is represented by the following matrix: $E=s(s\otimes s)^{T}+n(s\otimes n)^{T}+n(n\otimes s)^{T}+s(n\otimes n)^{T}$ with $E(s\otimes s)=E(n\otimes n)=s$ and $E(s\otimes n)=E(n\otimes s)=n.$ The Exclusive or is the negation of the equivalence, ¬(p≡q); it corresponds with the matrix $X=NE$ given by $X=n(s\otimes s)^{T}+s(s\otimes n)^{T}+s(n\otimes s)^{T}+n(n\otimes n)^{T},$ with $X(s\otimes s)=X(n\otimes n)=n$ and $X(s\otimes n)=X(n\otimes s)=s.$ • NAND and NOR The matrices S and P correspond to the Sheffer (NAND) and the Peirce (NOR) operations, respectively: $S=NC$ $P=ND$ Numerical examples Here are numerical examples of some basic logical gates implemented as matrices for two different sets of 2-dimensional orthonormal vectors for s and n. Set 1: $s={\begin{bmatrix}1\\0\end{bmatrix}}\quad n={\begin{bmatrix}0\\1\end{bmatrix}}$ In this case the identity and negation operators are the identity and anti-diagonal identity matrices: $I={\begin{bmatrix}1&0\\0&1\end{bmatrix}},\quad N={\begin{bmatrix}0&1\\1&0\end{bmatrix}}$ and the matrices for conjunction, disjunction and implication are $C={\begin{bmatrix}1&0&0&0\\0&1&1&1\end{bmatrix}},\quad D={\begin{bmatrix}1&1&1&0\\0&0&0&1\end{bmatrix}},\quad L={\begin{bmatrix}1&0&1&1\\0&1&0&0\end{bmatrix}}$ respectively. Set 2: $s={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\1\end{bmatrix}}\quad n={\frac {1}{\sqrt {2}}}{\begin{bmatrix}1\\-1\end{bmatrix}}$ Here the identity operator is the identity matrix, but the negation operator is no longer the anti-diagonal identity matrix: $I={\begin{bmatrix}1&0\\0&1\end{bmatrix}},\quad N={\begin{bmatrix}1&0\\0&-1\end{bmatrix}}$ The resulting matrices for conjunction, disjunction and implication are: $C={\frac {1}{\sqrt {2}}}{\begin{bmatrix}2&0&0&0\\-1&1&1&1\end{bmatrix}},\quad D={\frac {1}{\sqrt {2}}}{\begin{bmatrix}2&0&0&0\\1&1&1&-1\end{bmatrix}},\quad L={\frac {1}{\sqrt {2}}}{\begin{bmatrix}2&0&0&0\\1&1&-1&1\end{bmatrix}}$ respectively. De Morgan's law In the two-valued logic, the conjunction and the disjunction operations satisfy De Morgan's law: p∧q≡¬(¬p∨¬q), and its dual: p∨q≡¬(¬p∧¬q). For the two-valued vector logic this law is also verified: $C(u\otimes v)=ND(Nu\otimes Nv)$, where u and v are two logic vectors.
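These matrices, and the vector form of De Morgan's law just stated, can be checked numerically. The following is a sketch (an illustration, not from the source; it assumes NumPy and the Set 1 basis) that builds the operators from s and n exactly as defined above:

```python
import numpy as np

# Set 1 truth-value basis: s encodes "true", n encodes "false"
s = np.array([1.0, 0.0])
n = np.array([0.0, 1.0])

# Monadic operators: I = ss^T + nn^T and N = ns^T + sn^T
I = np.outer(s, s) + np.outer(n, n)
N = np.outer(n, s) + np.outer(s, n)

# Dyadic operators are q x q^2 matrices acting on Kronecker products
C = (np.outer(s, np.kron(s, s)) + np.outer(n, np.kron(s, n))
     + np.outer(n, np.kron(n, s)) + np.outer(n, np.kron(n, n)))
D = (np.outer(s, np.kron(s, s)) + np.outer(s, np.kron(s, n))
     + np.outer(s, np.kron(n, s)) + np.outer(n, np.kron(n, n)))

# Conjunction is true only for (s, s); disjunction is false only for (n, n)
assert np.allclose(C @ np.kron(s, s), s) and np.allclose(C @ np.kron(s, n), n)
assert np.allclose(D @ np.kron(n, n), n) and np.allclose(D @ np.kron(s, n), s)

# De Morgan's law in vector form: C(u x v) = N D (Nu x Nv) for all truth values
for u in (s, n):
    for v in (s, n):
        assert np.allclose(C @ np.kron(u, v), N @ (D @ np.kron(N @ u, N @ v)))
print("truth tables and De Morgan's law verified")
```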
The Kronecker product implies the following factorization: $C(u\otimes v)=ND(N\otimes N)(u\otimes v).$ Then it can be proved that in the two-dimensional vector logic De Morgan's law is a law involving operators, and not only a law concerning operations:[6] $C=ND(N\otimes N)$ Law of contraposition In the classical propositional calculus, the law of contraposition p → q ≡ ¬q → ¬p is proved because the equivalence holds for all the possible combinations of truth-values of p and q.[7] Instead, in vector logic, the law of contraposition emerges from a chain of equalities within the rules of matrix algebra and Kronecker products, as shown in what follows: $L(u\otimes v)=D(N\otimes I)(u\otimes v)=D(Nu\otimes v)=D(Nu\otimes NNv)=$ $D(NNv\otimes Nu)=D(N\otimes I)(Nv\otimes Nu)=L(Nv\otimes Nu)$ This result is based on the fact that D, the disjunction matrix, represents a commutative operation. Many-valued two-dimensional logic Many-valued logic was developed by many researchers, particularly Jan Łukasiewicz, and allows extending logical operations to truth-values that include uncertainties.[8] In the case of two-valued vector logic, uncertainties in the truth values can be introduced using vectors with s and n weighted by probabilities. Let $f=\epsilon s+\delta n$, with $\epsilon ,\delta \in [0,1],\epsilon +\delta =1$ be such a "probabilistic" vector. Here, the many-valued character of the logic is introduced a posteriori via the uncertainties introduced in the inputs.[1] Scalar projections of vector outputs The outputs of this many-valued logic can be projected on scalar functions and generate a particular class of probabilistic logic with similarities to the many-valued logic of Reichenbach.[9][10][11] Given two vectors $u=\alpha s+\beta n$ and $v=\alpha 's+\beta 'n$ and a dyadic logical matrix $G$, a scalar probabilistic logic is provided by the projection over vector s: $Val(\mathrm {scalars} )=s^{T}G(\mathrm {vectors} )$ Here are the main results of these projections: $NOT(\alpha )=s^{T}Nu=1-\alpha $ $OR(\alpha ,\alpha ')=s^{T}D(u\otimes v)=\alpha +\alpha '-\alpha \alpha '$ $AND(\alpha ,\alpha ')=s^{T}C(u\otimes v)=\alpha \alpha '$ $IMPL(\alpha ,\alpha ')=s^{T}L(u\otimes v)=1-\alpha (1-\alpha ')$ $XOR(\alpha ,\alpha ')=s^{T}X(u\otimes v)=\alpha +\alpha '-2\alpha \alpha '$ The associated negations are: $NOR(\alpha ,\alpha ')=1-OR(\alpha ,\alpha ')$ $NAND(\alpha ,\alpha ')=1-AND(\alpha ,\alpha ')$ $EQUI(\alpha ,\alpha ')=1-XOR(\alpha ,\alpha ')$ If the scalar values belong to the set {0, ½, 1}, this many-valued scalar logic is for many of the operators almost identical to the 3-valued logic of Łukasiewicz. Also, it has been proved that when the monadic or dyadic operators act over probabilistic vectors belonging to this set, the output is also an element of this set.[6] Square root of NOT This operator was originally defined for qubits in the framework of quantum computing.[12][13] In vector logic, this operator can be extended for arbitrary orthonormal truth values.[2][14] There are, in fact, two square roots of NOT: $A=({\sqrt {N}})_{1}={\frac {1}{2}}(1+i)I+{\frac {1}{2}}(1-i)N$, and $B=({\sqrt {N}})_{2}={\frac {1}{2}}(1-i)I+{\frac {1}{2}}(1+i)N$, with $i={\sqrt {-1}}$. $A$ and $B$ are complex conjugates: $B=A^{*}$, and note that $A^{2}=B^{2}=N$, and $AB=BA=I$. Another interesting point is the analogy with the two square roots of -1.
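As a quick numerical check of these relations (a sketch, not from the source; the Set 1 basis is assumed, where N is the anti-diagonal identity matrix):

```python
import numpy as np

I = np.eye(2)
N = np.array([[0.0, 1.0], [1.0, 0.0]])  # negation in the Set 1 basis

A = 0.5 * (1 + 1j) * I + 0.5 * (1 - 1j) * N
B = 0.5 * (1 - 1j) * I + 0.5 * (1 + 1j) * N

assert np.allclose(B, A.conj())                         # B = A*
assert np.allclose(A @ A, N) and np.allclose(B @ B, N)  # both square to NOT
assert np.allclose(A @ B, I) and np.allclose(B @ A, I)  # AB = BA = I
print("square-root-of-NOT identities verified")
```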
The positive root $+({\sqrt {-1}})$ corresponds to $({\sqrt {N}})_{1}=IA$, and the negative root $-({\sqrt {-1}})$ corresponds to $({\sqrt {N}})_{2}=NA$; as a consequence, $NA=B$. History Early attempts to use linear algebra to represent logic operations can be traced back to Peirce and Copilowish,[15] particularly in the use of logical matrices to interpret the calculus of relations. The approach has been inspired by neural network models based on the use of high-dimensional matrices and vectors.[16][17] Vector logic is a direct translation into a matrix–vector formalism of the classical Boolean polynomials.[18] This kind of formalism has been applied to develop a fuzzy logic in terms of complex numbers.[19] Other matrix and vector approaches to logical calculus have been developed in the framework of quantum physics, computer science and optics.[20][21] The Indian biophysicist G.N. Ramachandran developed a formalism using algebraic matrices and vectors to represent many operations of classical Jain logic known as Syad and Saptbhangi; see Indian logic.[22] It requires independent affirmative evidence for each assertion in a proposition, and does not make the assumption of binary complementation. Boolean polynomials George Boole established the development of logical operations as polynomials.[18] For the case of monadic operators (such as identity or negation), the Boolean polynomials look as follows: $f(x)=f(1)x+f(0)(1-x)$ The four different monadic operations result from the different binary values for the coefficients. Identity operation requires f(1) = 1 and f(0) = 0, and negation occurs if f(1) = 0 and f(0) = 1. For the 16 dyadic operators, the Boolean polynomials are of the form: $f(x,y)=f(1,1)xy+f(1,0)x(1-y)+f(0,1)(1-x)y+f(0,0)(1-x)(1-y)$ The dyadic operations can be translated to this polynomial format when the coefficients f take the values indicated in the respective truth tables. For instance, the NAND operation requires $f(1,1)=0$ and $f(1,0)=f(0,1)=f(0,0)=1$. These Boolean polynomials can be immediately extended to any number of variables, producing a large potential variety of logical operators. In vector logic, the matrix-vector structure of logical operators is an exact translation to the format of linear algebra of these Boolean polynomials, where the x and 1−x correspond to vectors s and n respectively (the same for y and 1−y). In the example of NAND, f(1,1)=n and f(1,0)=f(0,1)=f(0,0)=s and the matrix version becomes: $S=n(s\otimes s)^{T}+s[(s\otimes n)^{T}+(n\otimes s)^{T}+(n\otimes n)^{T}]$ Extensions • Vector logic can be extended to include many truth values since large-dimensional vector spaces allow the creation of many orthogonal truth values and the corresponding logical matrices.[2] • Logical modalities can be fully represented in this context, with a recursive process inspired by neural models.[2][23] • Some cognitive problems about logical computations can be analyzed using this formalism, in particular recursive decisions. Any logical expression of classical propositional calculus can be naturally represented by a tree structure.[7] This fact is retained by vector logic, and has been partially used in neural models focused on the investigation of the branched structure of natural languages.[24][25][26][27][28][29] • Computation via reversible operations such as the Fredkin gate can be implemented in vector logic.
Such an implementation provides explicit expressions for matrix operators that produce the input format and the output filtering necessary for obtaining computations.[2][6] • Elementary cellular automata can be analyzed using the operator structure of vector logic; this analysis leads to a spectral decomposition of the laws governing their dynamics.[30][31] • In addition, based on this formalism, a discrete differential and integral calculus has been developed.[32] See also • Algebraic logic • Boolean algebra • Propositional calculus • Quantum logic • Jonathan Westphal References 1. Mizraji, E. (1992). Vector logics: the matrix-vector representation of logical calculus. Fuzzy Sets and Systems, 50, 179–185 2. Mizraji, E. (2008) Vector logic: a natural algebraic representation of the fundamental logical gates. Journal of Logic and Computation, 18, 97–121 3. Westphal, J. and Hardy, J. (2005) Logic as a Vector System. Journal of Logic and Computation, 751–765 4. Westphal, J., Caulfield, H.J., Hardy, J. and Qian, L. (2005) Optical Vector Logic Theorem-Proving. Proceedings of the Joint Conference on Information Systems, Photonics, Networking and Computing Division. 5. Westphal, J. (2010) The Application of Vector Theory to Syllogistic Logic. New Perspectives on the Square of Opposition, Bern, Peter Lang. 6. Mizraji, E. (1996) The operators of vector logic. Mathematical Logic Quarterly, 42, 27–39 7. Suppes, P. (1957) Introduction to Logic, Van Nostrand Reinhold, New York. 8. Łukasiewicz, J. (1980) Selected Works. L. Borkowski, ed., pp. 153–178. North-Holland, Amsterdam, 1980 9. Rescher, N. (1969) Many-Valued Logic. McGraw–Hill, New York 10. Blanché, R. (1968) Introduction à la Logique Contemporaine, Armand Colin, Paris 11. Klir, G.J., Yuan, G. (1995) Fuzzy Sets and Fuzzy Logic. Prentice–Hall, New Jersey 12. Hayes, B. (1995) The square root of NOT. American Scientist, 83, 304–308 13. Deutsch, D., Ekert, A. and Lupacchini, R. (2000) Machines, logic and quantum physics. The Bulletin of Symbolic Logic, 6, 265–283. 14. Mizraji, E. (2020). Vector logic allows counterfactual virtualization by the square root of NOT, Logic Journal of the IGPL. Online version (doi:10.1093/jigpal/jzaa026) 15. Copilowish, I.M. (1948) Matrix development of the calculus of relations. Journal of Symbolic Logic, 13, 193–203 16. Kohonen, T. (1977) Associative Memory: A System-Theoretical Approach. Springer-Verlag, New York 17. Mizraji, E. (1989) Context-dependent associations in linear distributed memories. Bulletin of Mathematical Biology, 50, 195–205 18. Boole, G. (1854) An Investigation of the Laws of Thought, on which are Founded the Theories of Logic and Probabilities. Macmillan, London, 1854; Dover, New York Reedition, 1958 19. Dick, S. (2005) Towards complex fuzzy logic. IEEE Transactions on Fuzzy Systems, 15, 405–414 20. Mittelstaedt, P. (1968) Philosophische Probleme der Modernen Physik, Bibliographisches Institut, Mannheim 21. Stern, A. (1988) Matrix Logic: Theory and Applications. North-Holland, Amsterdam 22. Jain, M.K. (2011) Logic of evidence-based inference propositions, Current Science, 1663–1672, 100 23. Mizraji, E. (1994) Modalities in vector logic Archived 2014-08-11 at the Wayback Machine. Notre Dame Journal of Formal Logic, 35, 272–283 24. Mizraji, E., Lin, J. (2002) The dynamics of logical decisions. Physica D, 168–169, 386–396 25. beim Graben, P., Potthast, R. (2009). Inverse problems in dynamic cognitive modeling. Chaos, 19, 015103 26. beim Graben, P., Pinotsis, D., Saddy, D., Potthast, R. (2008).
Language processing with dynamic fields. Cogn. Neurodyn., 2, 79–88 27. beim Graben, P., Gerth, S., Vasishth, S. (2008) Towards dynamical system models of language-related brain potentials. Cogn. Neurodyn., 2, 229–255 28. beim Graben, P., Gerth, S. (2012) Geometric representations for minimalist grammars. Journal of Logic, Language and Information, 21, 393–432. 29. Binazzi, A. (2012) Cognizione logica e modelli mentali. Studi sulla formazione, 1–2012, pp. 69–84 30. Mizraji, E. (2006) The parts and the whole: inquiring how the interaction of simple subsystems generates complexity. International Journal of General Systems, 35, pp. 395–415. 31. Arruti, C., Mizraji, E. (2006) Hidden potentialities. International Journal of General Systems, 35, 461–469. 32. Mizraji, E. (2015) Differential and integral calculus for logical operations. A matrix–vector approach. Journal of Logic and Computation, 25, 613–638
Vector multiplication In mathematics, vector multiplication may refer to one of several operations between two (or more) vectors. It may concern any of the following articles: • Dot product – also known as the "scalar product", a binary operation that takes two vectors and returns a scalar quantity. The dot product of two vectors can be defined as the product of the magnitudes of the two vectors and the cosine of the angle between the two vectors. Alternatively, it is defined as the product of the projection of the first vector onto the second vector and the magnitude of the second vector. Thus, ${\stackrel {\,\longrightarrow }{A}}$ ⋅ ${\stackrel {\,\longrightarrow }{B}}$ = |${\stackrel {\,\longrightarrow }{A}}$| |${\stackrel {\,\longrightarrow }{B}}$| cos θ • More generally, a bilinear product in an algebra over a field. • Cross product – also known as the "vector product", a binary operation on two vectors that results in another vector. The cross product of two vectors in 3-space is defined as the vector perpendicular to the plane determined by the two vectors whose magnitude is the product of the magnitudes of the two vectors and the sine of the angle between the two vectors. So, if n̂ is the unit vector perpendicular to the plane determined by vectors A and B, ${\stackrel {\,\longrightarrow }{A}}$ × ${\stackrel {\,\longrightarrow }{B}}$ = |${\stackrel {\,\longrightarrow }{A}}$| |${\stackrel {\,\longrightarrow }{B}}$| sin θ n̂ • More generally, a Lie bracket in a Lie algebra. • Hadamard product – entrywise or elementwise product of vectors, where $(A\odot B)_{i}=A_{i}B_{i}$. • Outer product – the product $(\mathbf {a} \otimes \mathbf {b} )$ with $\mathbf {a} \in \mathbb {R} ^{d},\mathbf {b} \in \mathbb {R} ^{d}$, which results in a $(d\times d)$ matrix. • Triple products – products involving three vectors. • Quadruple products – products involving four vectors. Applications Vector multiplication has many applications in mathematics, as well as in other fields such as physics and engineering. Physics • The cross product is used to determine the moment of a force, also known as torque. • The dot product is used to determine the work done by a constant force. See also • Scalar multiplication • Matrix multiplication • Vector addition • Vector algebra relations
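As a concrete numerical illustration of the products listed above (a sketch using NumPy; not part of the original article):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

dot = np.dot(a, b)          # scalar product: |a||b| cos(theta)
cross = np.cross(a, b)      # vector product, perpendicular to a and b
hadamard = a * b            # entrywise (Hadamard) product
outer = np.outer(a, b)      # outer product: a 3x3 matrix

# The cross product is orthogonal to both of its factors
assert np.isclose(np.dot(cross, a), 0.0) and np.isclose(np.dot(cross, b), 0.0)

# Recover the angle between a and b from the dot product
theta = np.arccos(dot / (np.linalg.norm(a) * np.linalg.norm(b)))
print(dot, cross, hadamard, theta, sep="\n")
```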
Vector notation In mathematics and physics, vector notation is a commonly used notation for representing vectors,[1][2] which may be Euclidean vectors, or more generally, members of a vector space. For representing a vector, the common[3] typographic convention is lower case, upright boldface type, as in v. The International Organization for Standardization (ISO) recommends either bold italic serif, as in v, or non-bold italic serif accented by a right arrow, as in ${\vec {v}}$.[4] In advanced mathematics, vectors are often represented in a simple italic type, like any variable. History In 1835 Giusto Bellavitis introduced the idea of equipollent directed line segments $AB\bumpeq CD$ which resulted in the concept of a vector as an equivalence class of such segments. The term vector was coined by W. R. Hamilton around 1843, as he introduced quaternions, a system which uses vectors and scalars to span a four-dimensional space. For a quaternion q = a + bi + cj + dk, Hamilton used two projections: S q = a, for the scalar part of q, and V q = bi + cj + dk, the vector part. Using the modern terms cross product (×) and dot product (.), the quaternion product of two vectors p and q can be written pq = –p.q + p×q. In 1878, W. K. Clifford severed the two products to make the quaternion operation useful for students in his textbook Elements of Dynamic. Lecturing at Yale University, Josiah Willard Gibbs supplied notation for the scalar product and vector products, which was introduced in Vector Analysis.[5] In 1891, Oliver Heaviside argued for Clarendon type to distinguish vectors from scalars. He criticized the use of Greek letters by Tait and Gothic letters by Maxwell.[6] In 1912, J.B. Shaw contributed his "Comparative Notation for Vector Expressions" to the Bulletin of the Quaternion Society.[7] Subsequently, Alexander Macfarlane described 15 criteria for clear expression with vectors in the same publication.[8] Vector ideas were advanced by Hermann Grassmann in 1841, and again in 1862 in the German language. But German mathematicians were not taken with quaternions as much as were English-speaking mathematicians. When Felix Klein was organizing the German mathematical encyclopedia, he assigned Arnold Sommerfeld to standardize vector notation.[9] In 1950, when Academic Press published G. Kuerti's translation of the second edition of volume 2 of Lectures on Theoretical Physics by Sommerfeld, vector notation was the subject of a footnote: "In the original German text, vectors and their components are printed in the same Gothic types. The more usual way of making a typographical distinction between the two has been adopted for this translation."[10] Rectangular coordinates See also: Real coordinate space Given a Cartesian coordinate system, a vector may be specified by its Cartesian coordinates, which are a tuple of numbers. Ordered set notation A vector in $\mathbb {R} ^{n}$ can be specified using an ordered set of components, enclosed in either parentheses or angle brackets.
In a general sense, an n-dimensional vector v can be specified in either of the following forms: • $\mathbf {v} =(v_{1},v_{2},\dots ,v_{n-1},v_{n})$ • $\mathbf {v} =\langle v_{1},v_{2},\dots ,v_{n-1},v_{n}\rangle $ [11] where v1, v2, …, vn − 1, vn are the components of v.[12] Matrix notation A vector in $\mathbb {R} ^{n}$ can also be specified as a row or column matrix containing the ordered set of components. A vector specified as a row matrix is known as a row vector; one specified as a column matrix is known as a column vector. Again, an n-dimensional vector $\mathbf {v} $ can be specified in either of the following forms using matrices: • $\mathbf {v} ={\begin{bmatrix}v_{1}&v_{2}&\cdots &v_{n-1}&v_{n}\end{bmatrix}}={\begin{pmatrix}v_{1}&v_{2}&\cdots &v_{n-1}&v_{n}\end{pmatrix}}$ • $\mathbf {v} ={\begin{bmatrix}v_{1}\\v_{2}\\\vdots \\v_{n-1}\\v_{n}\end{bmatrix}}={\begin{pmatrix}v_{1}\\v_{2}\\\vdots \\v_{n-1}\\v_{n}\end{pmatrix}}$ where v1, v2, …, vn − 1, vn are the components of v. In some advanced contexts, a row and a column vector have different meanings; see covariance and contravariance of vectors for more. Unit vector notation A vector in $\mathbb {R} ^{3}$ (or fewer dimensions, such as $\mathbb {R} ^{2}$ where vz below is zero) can be specified as the sum of the scalar multiples of the components of the vector with the members of the standard basis in $\mathbb {R} ^{3}$. The basis is represented with the unit vectors ${\boldsymbol {\hat {\imath }}}=(1,0,0)$, ${\boldsymbol {\hat {\jmath }}}=(0,1,0)$, and ${\boldsymbol {\hat {k}}}=(0,0,1)$. A three-dimensional vector ${\boldsymbol {v}}$ can be specified in the following form, using unit vector notation: $\mathbf {v} =v_{x}{\boldsymbol {\hat {\imath }}}+v_{y}{\boldsymbol {\hat {\jmath }}}+v_{z}{\boldsymbol {\hat {k}}}$ where vx, vy, and vz are the scalar components of v. Scalar components may be positive or negative; the absolute value of a scalar component is its magnitude. Polar coordinates The two polar coordinates of a point in a plane may be considered as a two-dimensional vector. Such a vector consists of a magnitude (or length) and a direction (or angle). The magnitude, typically represented as r, is the distance from a starting point, the origin, to the point which is represented. The angle, typically represented as θ (the Greek letter theta), is the angle, usually measured counterclockwise, between a fixed direction, typically that of the positive x-axis, and the direction from the origin to the point. The angle is typically reduced to lie within the range $0\leq \theta <2\pi $ radians or $0\leq \theta <360^{\circ }$. Ordered set and matrix notations Vectors can be specified using either ordered pair notation (a subset of ordered set notation using only two components), or matrix notation, as with rectangular coordinates. In these forms, the first component of the vector is r (instead of v1), and the second component is θ (instead of v2). To differentiate polar coordinates from rectangular coordinates, the angle may be prefixed with the angle symbol, $\angle $. Two-dimensional polar coordinates for v can be represented as any of the following, using either ordered pair or matrix notation: • $\mathbf {v} =(r,\angle \theta )$ • $\mathbf {v} =\langle r,\angle \theta \rangle $ • $\mathbf {v} ={\begin{bmatrix}r&\angle \theta \end{bmatrix}}$ • $\mathbf {v} ={\begin{bmatrix}r\\\angle \theta \end{bmatrix}}$ where r is the magnitude, θ is the angle, and the angle symbol ($\angle $) is optional.
Direct notation Vectors can also be specified using simplified autonomous equations that define r and θ explicitly. This can be unwieldy, but is useful for avoiding the confusion with two-dimensional rectangular vectors that arises from using ordered pair or matrix notation. A two-dimensional vector whose magnitude is 5 units, and whose direction is π/9 radians (20°), can be specified using either of the following forms: • $r=5,\ \theta ={\pi \over 9}$ • $r=5,\ \theta =20^{\circ }$ Cylindrical vectors A cylindrical vector is an extension of the concept of polar coordinates into three dimensions. It is akin to an arrow in the cylindrical coordinate system. A cylindrical vector is specified by a distance in the xy-plane, an angle, and a distance from the xy-plane (a height). The first distance, usually represented as r or ρ (the Greek letter rho), is the magnitude of the projection of the vector onto the xy-plane. The angle, usually represented as θ or φ (the Greek letter phi), is measured as the offset from the line collinear with the x-axis in the positive direction; the angle is typically reduced to lie within the range $0\leq \theta <2\pi $. The second distance, usually represented as h or z, is the distance from the xy-plane to the endpoint of the vector. Ordered set and matrix notations Cylindrical vectors use polar coordinates, where the second distance component is concatenated as a third component to form ordered triplets (again, a subset of ordered set notation) and matrices. The angle may be prefixed with the angle symbol ($\angle $); the distance-angle-distance combination distinguishes cylindrical vectors in this notation from spherical vectors in similar notation. A three-dimensional cylindrical vector v can be represented as any of the following, using either ordered triplet or matrix notation: • $\mathbf {v} =(r,\angle \theta ,h)$ • $\mathbf {v} =\langle r,\angle \theta ,h\rangle $ • $\mathbf {v} ={\begin{bmatrix}r&\angle \theta &h\end{bmatrix}}$ • $\mathbf {v} ={\begin{bmatrix}r\\\angle \theta \\h\end{bmatrix}}$ where r is the magnitude of the projection of v onto the xy-plane, θ is the angle between the positive x-axis and v, and h is the height from the xy-plane to the endpoint of v. Again, the angle symbol ($\angle $) is optional. Direct notation A cylindrical vector can also be specified directly, using simplified autonomous equations that define r (or ρ), θ (or φ), and h (or z). Consistency should be used when choosing the names to use for the variables; ρ should not be mixed with θ and so on. A three-dimensional vector, the magnitude of whose projection onto the xy-plane is 5 units, whose angle from the positive x-axis is π/9 radians (20°), and whose height from the xy-plane is 3 units can be specified in any of the following forms: • $r=5,\ \theta ={\pi \over 9},\ h=3$ • $r=5,\ \theta =20^{\circ },\ h=3$ • $\rho =5,\ \phi ={\pi \over 9},\ z=3$ • $\rho =5,\ \phi =20^{\circ },\ z=3$ Spherical vectors A spherical vector is another method for extending the concept of polar vectors into three dimensions. It is akin to an arrow in the spherical coordinate system. A spherical vector is specified by a magnitude, an azimuth angle, and a zenith angle. The magnitude is usually represented as ρ. The azimuth angle, usually represented as θ, is the (counterclockwise) offset from the positive x-axis. The zenith angle, usually represented as φ, is the offset from the positive z-axis. Both angles are typically reduced to lie within the range from zero (inclusive) to 2π (exclusive).
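The conversions from these coordinate descriptions to rectangular components follow directly from trigonometry. A brief sketch (hypothetical helper functions, not from the source), using the conventions above, with θ the azimuth measured from the positive x-axis and φ the zenith measured from the positive z-axis:

```python
import math

def polar_to_cartesian(r, theta):
    """(r, angle theta) -> (x, y)."""
    return (r * math.cos(theta), r * math.sin(theta))

def cylindrical_to_cartesian(r, theta, h):
    """(r, angle theta, height h) -> (x, y, z)."""
    x, y = polar_to_cartesian(r, theta)
    return (x, y, h)

def spherical_to_cartesian(rho, theta, phi):
    """(magnitude rho, azimuth theta, zenith phi) -> (x, y, z)."""
    return (rho * math.sin(phi) * math.cos(theta),
            rho * math.sin(phi) * math.sin(theta),
            rho * math.cos(phi))

# The example values used in this article: r = 5, theta = pi/9 (20 degrees)
print(polar_to_cartesian(5, math.pi / 9))
print(cylindrical_to_cartesian(5, math.pi / 9, 3))
print(spherical_to_cartesian(5, math.pi / 9, math.pi / 4))
```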
Ordered set and matrix notations Spherical vectors are specified like polar vectors, where the zenith angle is concatenated as a third component to form ordered triplets and matrices. The azimuth and zenith angles may be both prefixed with the angle symbol ($\angle $); the prefix should be used consistently to produce the distance-angle-angle combination that distinguishes spherical vectors from cylindrical ones. A three-dimensional spherical vector v can be represented as any of the following, using either ordered triplet or matrix notation: • $\mathbf {v} =(\rho ,\angle \theta ,\angle \phi )$ • $\mathbf {v} =\langle \rho ,\angle \theta ,\angle \phi \rangle $ • $\mathbf {v} ={\begin{bmatrix}\rho &\angle \theta &\angle \phi \end{bmatrix}}$ • $\mathbf {v} ={\begin{bmatrix}\rho \\\angle \theta \\\angle \phi \end{bmatrix}}$ where ρ is the magnitude, θ is the azimuth angle, and φ is the zenith angle. Direct notation Like polar and cylindrical vectors, spherical vectors can be specified using simplified autonomous equations, in this case for ρ, θ, and φ. A three-dimensional vector whose magnitude is 5 units, whose azimuth angle is π/9 radians (20°), and whose zenith angle is π/4 radians (45°) can be specified as: • $\rho =5,\ \theta ={\pi \over 9},\ \phi ={\pi \over 4}$ • $\rho =5,\ \theta =20^{\circ },\ \phi =45^{\circ }$ Operations In any given vector space, the operations of vector addition and scalar multiplication are defined. Normed vector spaces also define an operation known as the norm (or determination of magnitude). Inner product spaces also define an operation known as the inner product. In $\mathbb {R} ^{n}$, the inner product is known as the dot product. In $\mathbb {R} ^{3}$ and $\mathbb {R} ^{7}$, an additional operation known as the cross product is also defined. Vector addition Vector addition is represented with the plus sign used as an operator between two vectors. The sum of two vectors u and v would be represented as: $\mathbf {u} +\mathbf {v} $ Scalar multiplication Scalar multiplication is represented in the same manner as algebraic multiplication. A scalar beside a vector (either or both of which may be in parentheses) implies scalar multiplication. The two common operators, a dot and a rotated cross, are also acceptable (although the rotated cross is almost never used), but they risk confusion with dot products and cross products, which operate on two vectors. The product of a scalar k with a vector v can be represented in any of the following fashions: • $k\mathbf {v} $ • $k\cdot \mathbf {v} $ Vector subtraction and scalar division Using the algebraic properties of subtraction and division, along with scalar multiplication, it is also possible to "subtract" two vectors and "divide" a vector by a scalar. Vector subtraction is performed by adding the scalar multiple −1 of the second vector operand to the first vector operand. This can be represented by the use of the minus sign as an operator. The difference between two vectors u and v can be represented in either of the following fashions: • $\mathbf {u} +-\mathbf {v} $ • $\mathbf {u} -\mathbf {v} $ Scalar division is performed by multiplying the vector operand by the reciprocal of the scalar operand. This can be represented by the use of the fraction bar or division signs as operators.
The quotient of a vector v and a scalar c can be represented in any of the following forms: • ${1 \over c}\mathbf {v} $ • ${\mathbf {v} \over c}$ • ${\mathbf {v} \div c}$ Norm The norm of a vector is represented with double bars on both sides of the vector. The norm of a vector v can be represented as: $\|\mathbf {v} \|$ The norm is also sometimes represented with single bars, like $|\mathbf {v} |$, but this can be confused with absolute value (which is a type of norm). Inner product The inner product of two vectors (also known as the scalar product, not to be confused with scalar multiplication) is represented as an ordered pair enclosed in angle brackets. The inner product of two vectors u and v would be represented as: $\langle \mathbf {u} ,\mathbf {v} \rangle $ Dot product In $\mathbb {R} ^{n}$, the inner product is also known as the dot product. In addition to the standard inner product notation, the dot product notation (using the dot as an operator) can also be used (and is more common). The dot product of two vectors u and v can be represented as: $\mathbf {u} \cdot \mathbf {v} $ In some older literature, the dot product is implied between two vectors written side-by-side. This notation can be confused with the dyadic product between two vectors. Cross product The cross product of two vectors (in $\mathbb {R} ^{3}$) is represented using the rotated cross as an operator. The cross product of two vectors u and v would be represented as: $\mathbf {u} \times \mathbf {v} $ By some conventions (e.g. in France and in some areas of higher mathematics), this is also denoted by a wedge,[13] which avoids confusion with the wedge product since the two are functionally equivalent in three dimensions: $\mathbf {u} \wedge \mathbf {v} $ In some older literature, the following notation is used for the cross product between u and v: $[\mathbf {u} ,\mathbf {v} ]$ Nabla Main articles: Del and Nabla symbol Vector notation is used with calculus through the Nabla operator: $\mathbf {i} {\frac {\partial }{\partial x}}+\mathbf {j} {\frac {\partial }{\partial y}}+\mathbf {k} {\frac {\partial }{\partial z}}$ With a scalar function f, the gradient is written as $\nabla f\,,$ with a vector field F, the divergence is written as $\nabla \cdot F,$ and with a vector field F, the curl is written as $\nabla \times F.$ See also • Euclidean vector • ISO 31-11 § Vectors and tensors • Phasor References 1. Principles and Applications of Mathematics for Communications-electronics. 1992. p. 123. 2. Coffin, Joseph George (1911). Vector Analysis. J. Wiley & sons. 3. "Vector Introduction | MIT - KeepNotes". keepnotes.com. Retrieved 2023-07-18. 4. "ISO 80000-2:2019 Quantities and units — Part 2: Mathematics". International Organization for Standardization. August 2019. 5. Edwin Bidwell Wilson (1901) Vector Analysis, based on the Lectures of J. W. Gibbs at Internet Archive 6. Oliver Heaviside, The Electrical Journal, Volume 28. James Gray, 1891. 109 (alt) 7. J.B. Shaw (1912) Comparative Notation for Vector Expressions, Bulletin of the Quaternion Society via Hathi Trust. 8. Alexander Macfarlane (1912) A System of Notation for Vector-Analysis; with a Discussion of the Underlying Principles from Bulletin of the Quaternion Society 9. Karin Reich (1995) Die Rolle Arnold Sommerfeld bei der Diskussion um die Vektorrechnung 10. Mechanics of Deformable Bodies, p. 10, at Google Books 11. Wright, Richard. "Precalculus 6-03 Vectors". www.andrews.edu. Retrieved 2023-07-25. 12. Weisstein, Eric W. "Vector". mathworld.wolfram.com.
Retrieved 2020-08-19. 13. Cajori, Florian (2011). A History of Mathematical Notations. Dover Publications. p. 134 (Vol. 2). ISBN 9780486161167.
Vector operator A vector operator is a differential operator used in vector calculus. Vector operators include the gradient, divergence, and curl: • Gradient is a vector operator that operates on a scalar field, producing a vector field. • Divergence is a vector operator that operates on a vector field, producing a scalar field. • Curl is a vector operator that operates on a vector field, producing a vector field. Defined in terms of del: ${\begin{aligned}\operatorname {grad} &\equiv \nabla \\\operatorname {div} &\equiv \nabla \cdot \\\operatorname {curl} &\equiv \nabla \times \end{aligned}}$ The Laplacian operates on a scalar field, producing a scalar field: $\nabla ^{2}\equiv \operatorname {div} \ \operatorname {grad} \equiv \nabla \cdot \nabla $ Vector operators must always come right before the scalar field or vector field on which they operate, in order to produce a result. E.g. $\nabla f$ yields the gradient of f, but $f\nabla $ is just another vector operator, which is not operating on anything. A vector operator can operate on another vector operator, to produce a compound vector operator, as seen above in the case of the Laplacian. See also • del • d'Alembertian operator Further reading • H. M. Schey (1996) Div, Grad, Curl, and All That: An Informal Text on Vector Calculus, ISBN 0-393-96997-5.
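These operators can also be explored symbolically. A minimal sketch using SymPy's vector module (an illustration, not part of the original article):

```python
from sympy.vector import CoordSys3D, curl, divergence, gradient

R = CoordSys3D('R')  # Cartesian coordinates R.x, R.y, R.z with unit vectors i, j, k

f = R.x**2 * R.y             # a scalar field
F = R.x*R.i + R.y*R.j        # a vector field

print(gradient(f))               # grad: scalar field -> vector field
print(divergence(F))             # div: vector field -> scalar field
print(curl(F))                   # curl: vector field -> vector field
print(divergence(gradient(f)))   # Laplacian of f, i.e. div grad f
```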
Vector optimization Vector optimization is a subarea of mathematical optimization where optimization problems with vector-valued objective functions are optimized with respect to a given partial ordering and subject to certain constraints. A multi-objective optimization problem is a special case of a vector optimization problem: The objective space is the finite dimensional Euclidean space partially ordered by the component-wise "less than or equal to" ordering. Problem formulation In mathematical terms, a vector optimization problem can be written as: $C\operatorname {-} \min _{x\in S}f(x)$ where $f:X\to Z$ for a partially ordered vector space $Z$. The partial ordering is induced by a cone $C\subseteq Z$. $X$ is an arbitrary set and $S\subseteq X$ is called the feasible set. Solution concepts There are different minimality notions, among them: • ${\bar {x}}\in S$ is a weakly efficient point (weak minimizer) if for every $x\in S$ one has $f(x)-f({\bar {x}})\not \in -\operatorname {int} C$. • ${\bar {x}}\in S$ is an efficient point (minimizer) if for every $x\in S$ one has $f(x)-f({\bar {x}})\not \in -C\backslash \{0\}$. • ${\bar {x}}\in S$ is a properly efficient point (proper minimizer) if ${\bar {x}}$ is a weakly efficient point with respect to a closed pointed convex cone ${\tilde {C}}$ where $C\backslash \{0\}\subseteq \operatorname {int} {\tilde {C}}$. Every proper minimizer is a minimizer, and every minimizer is a weak minimizer.[1] Modern solution concepts not only consist of minimality notions but also take into account infimum attainment.[2] Solution methods • Benson's algorithm for linear vector optimization problems.[2] Relation to multi-objective optimization Any multi-objective optimization problem can be written as $\mathbb {R} _{+}^{d}\operatorname {-} \min _{x\in M}f(x)$ where $f:X\to \mathbb {R} ^{d}$ and $\mathbb {R} _{+}^{d}$ is the non-negative orthant of $\mathbb {R} ^{d}$. Thus the minimizers of this vector optimization problem are the Pareto efficient points. References 1. Ginchev, I.; Guerraggio, A.; Rocca, M. (2006). "From Scalar to Vector Optimization" (PDF). Applications of Mathematics. 51: 5. doi:10.1007/s10492-006-0002-1. hdl:10338.dmlcz/134627. 2. Andreas Löhne (2011). Vector Optimization with Infimum and Supremum. Springer. ISBN 9783642183508.
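For the multi-objective special case, the minimizers over a finite feasible set can be found with a direct Pareto filter. A small sketch (hypothetical example data, not from the source), using the component-wise ordering:

```python
import numpy as np

def pareto_efficient(points):
    """Return the Pareto-efficient (minimal) points under the
    component-wise "less than or equal to" ordering."""
    points = np.asarray(points, dtype=float)
    efficient = []
    for i, p in enumerate(points):
        # p is dominated if some other point is <= p in every
        # component and strictly < in at least one component
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            efficient.append(p)
    return np.array(efficient)

# Two objectives evaluated on a finite feasible set
objective_values = [(1, 5), (2, 3), (3, 4), (4, 1), (2, 2)]
print(pareto_efficient(objective_values))
# (1, 5), (4, 1) and (2, 2) are efficient; (2, 3) and (3, 4) are dominated by (2, 2)
```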
Vector potential In vector calculus, a vector potential is a vector field whose curl is a given vector field. This is analogous to a scalar potential, which is a scalar field whose gradient is a given vector field. Formally, given a vector field v, a vector potential is a $C^{2}$ vector field A such that $\mathbf {v} =\nabla \times \mathbf {A} .$ Consequence If a vector field v admits a vector potential A, then from the equality $\nabla \cdot (\nabla \times \mathbf {A} )=0$ (divergence of the curl is zero) one obtains $\nabla \cdot \mathbf {v} =\nabla \cdot (\nabla \times \mathbf {A} )=0,$ which implies that v must be a solenoidal vector field. Theorem Let $\mathbf {v} :\mathbb {R} ^{3}\to \mathbb {R} ^{3}$ be a solenoidal vector field which is twice continuously differentiable. Assume that v(x) decreases at least as fast as $1/\|\mathbf {x} \|$ for $\|\mathbf {x} \|\to \infty $. Define $\mathbf {A} (\mathbf {x} )={\frac {1}{4\pi }}\int _{\mathbb {R} ^{3}}{\frac {\nabla _{y}\times \mathbf {v} (\mathbf {y} )}{\left\|\mathbf {x} -\mathbf {y} \right\|}}\,d^{3}\mathbf {y} .$ Then, A is a vector potential for v, that is,  $\nabla \times \mathbf {A} =\mathbf {v} .$ Here, $\nabla _{y}\times $ denotes the curl with respect to the variable y. This expression is analogous to the retarded potential, with $\nabla \times \mathbf {v} $ in place of the current density j; in this analogy, v corresponds to the H-field. The integration domain can be restricted to any simply connected region Ω. That is, A' below is also a vector potential of v: $\mathbf {A'} (\mathbf {x} )={\frac {1}{4\pi }}\int _{\Omega }{\frac {\nabla _{y}\times \mathbf {v} (\mathbf {y} )}{\left\|\mathbf {x} -\mathbf {y} \right\|}}\,d^{3}\mathbf {y} .$ A generalization of this theorem is the Helmholtz decomposition, which states that any vector field can be decomposed as a sum of a solenoidal vector field and an irrotational vector field. By analogy with the Biot–Savart law, the following ${\boldsymbol {A''}}({\textbf {x}})$ also qualifies as a vector potential for v: ${\boldsymbol {A''}}({\textbf {x}})=\int _{\Omega }{\frac {{\boldsymbol {v}}({\boldsymbol {y}})\times ({\boldsymbol {x}}-{\boldsymbol {y}})}{4\pi |{\boldsymbol {x}}-{\boldsymbol {y}}|^{3}}}d^{3}{\boldsymbol {y}}$ Substituting the current density j for v and the H-field H for A yields the Biot–Savart law. Let ${\textbf {p}}\in \mathbb {R} ^{3}$ and let Ω be a star domain centered at p. Then, translating Poincaré's lemma for differential forms into the language of vector fields, the following ${\boldsymbol {A'''}}({\boldsymbol {x}})$ is also a vector potential for ${\boldsymbol {v}}$: ${\boldsymbol {A'''}}({\boldsymbol {x}})=\int _{0}^{1}s\,({\boldsymbol {x}}-{\boldsymbol {p}})\times {\boldsymbol {v}}(s{\boldsymbol {x}}+(1-s){\boldsymbol {p}})\,ds$ Nonuniqueness The vector potential admitted by a solenoidal field is not unique. If A is a vector potential for v, then so is $\mathbf {A} +\nabla f,$ where $f$ is any continuously differentiable scalar function. This follows from the fact that the curl of the gradient is zero, as the sketch below illustrates. This nonuniqueness leads to a degree of freedom in the formulation of electrodynamics, or gauge freedom, and requires choosing a gauge. See also • Fundamental theorem of vector calculus • Magnetic vector potential • Solenoid • Closed and Exact Differential Forms References • Fundamentals of Engineering Electromagnetics by David K. Cheng, Addison-Wesley, 1993.
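The nonuniqueness statement is easy to verify symbolically: adding a gradient to A leaves its curl unchanged because the curl of a gradient vanishes. A brief SymPy sketch (example fields assumed for illustration; not from the source):

```python
from sympy.vector import CoordSys3D, Vector, curl, gradient

R = CoordSys3D('R')

A = -R.y*R.i + R.x*R.j    # an example vector potential; curl A = 2 k
f = R.x**2 * R.y * R.z    # an arbitrary smooth scalar function

assert curl(gradient(f)) == Vector.zero  # the curl of a gradient vanishes
assert curl(A + gradient(f)) == curl(A)  # so A + grad f is also a vector potential
print(curl(A))
```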
Quadruple product In mathematics, the quadruple product is a product of four vectors in three-dimensional Euclidean space. The name "quadruple product" is used for two different products,[1] the scalar-valued scalar quadruple product and the vector-valued vector quadruple product or vector product of four vectors. See also: Vector algebra relations Scalar quadruple product The scalar quadruple product is defined as the dot product of two cross products: $(\mathbf {a\times b} )\cdot (\mathbf {c} \times \mathbf {d} )\ ,$ where a, b, c, d are vectors in three-dimensional Euclidean space.[2] It can be evaluated using the identity:[2] $(\mathbf {a\times b} )\cdot (\mathbf {c} \times \mathbf {d} )=(\mathbf {a\cdot c} )(\mathbf {b\cdot d} )-(\mathbf {a\cdot d} )(\mathbf {b\cdot c} )\ .$ or using the determinant: $(\mathbf {a\times b} )\cdot (\mathbf {c} \times \mathbf {d} )={\begin{vmatrix}\mathbf {a\cdot c} &\mathbf {a\cdot d} \\\mathbf {b\cdot c} &\mathbf {b\cdot d} \end{vmatrix}}\ .$ Proof We first prove that ${\begin{aligned}\mathbf {c} \times (\mathbf {b} \times \mathbf {a} )\cdot \mathbf {d} =(\mathbf {a} \times \mathbf {b} )\cdot (\mathbf {c} \times \mathbf {d} ).\end{aligned}}$ This can be shown by straightforward matrix algebra using the correspondence between elements of $\mathbb {R} ^{3}$ and ${\mathfrak {so}}(3)$, given by $\mathbb {R} ^{3}\ni \mathbf {a} ={\begin{bmatrix}a_{1}&a_{2}&a_{3}\end{bmatrix}}^{\mathrm {T} }\mapsto \mathbf {\hat {a}} \in {\mathfrak {so}}(3)$, where ${\begin{aligned}\mathbf {\hat {a}} ={\begin{bmatrix}0&-a_{3}&a_{2}\\a_{3}&0&-a_{1}\\-a_{2}&a_{1}&0\end{bmatrix}}.\end{aligned}}$ It then follows from the properties of skew-symmetric matrices that ${\begin{aligned}\mathbf {c} \times (\mathbf {b} \times \mathbf {a} )\cdot \mathbf {d} =(\mathbf {\hat {c}} \mathbf {\hat {b}} \mathbf {a} )^{\mathrm {T} }\mathbf {d} =\mathbf {a} ^{\mathrm {T} }\mathbf {\hat {b}} \mathbf {\hat {c}} \mathbf {d} =(-\mathbf {\hat {b}} \mathbf {a} )^{\mathrm {T} }\mathbf {\hat {c}} \mathbf {d} =(\mathbf {\hat {a}} \mathbf {b} )^{\mathrm {T} }\mathbf {\hat {c}} \mathbf {d} =(\mathbf {a} \times \mathbf {b} )\cdot (\mathbf {c} \times \mathbf {d} ).\end{aligned}}$ We also know from vector triple products that ${\begin{aligned}\mathbf {c} \times (\mathbf {b} \times \mathbf {a} )=(\mathbf {c} \cdot \mathbf {a} )\mathbf {b} -(\mathbf {c} \cdot \mathbf {b} )\mathbf {a} .\end{aligned}}$ Using this identity along with the one we have just derived, we obtain the desired identity: ${\begin{aligned}(\mathbf {a} \times \mathbf {b} )\cdot (\mathbf {c} \times \mathbf {d} )=\mathbf {c} \times (\mathbf {b} \times \mathbf {a} )\cdot \mathbf {d} =\left[(\mathbf {c} \cdot \mathbf {a} )\mathbf {b} -(\mathbf {c} \cdot \mathbf {b} )\mathbf {a} \right]\cdot \mathbf {d} =(\mathbf {a} \cdot \mathbf {c} )(\mathbf {b} \cdot \mathbf {d} )-(\mathbf {a} \cdot \mathbf {d} )(\mathbf {b} \cdot \mathbf {c} ).\end{aligned}}$ Vector quadruple product The vector quadruple product is defined as the cross product of two cross products: $(\mathbf {a\times b} )\mathbf {\times } (\mathbf {c} \times \mathbf {d} )\ ,$ where a, b, c, d are vectors in three-dimensional Euclidean space.[3] It can be evaluated using the identity:[4] $(\mathbf {a\times b} )\mathbf {\times } (\mathbf {c} \times \mathbf {d} )=[\mathbf {a,\ b,\ d} ]\mathbf {c} -[\mathbf {a,\ b,\ c} ]\mathbf {d} \ ,$ using the notation for the triple product: $[\mathbf {a,\ b,\ c} ]=\mathbf {a} \cdot (\mathbf {b} \times \mathbf {c} )\ .$ Equivalent forms can be obtained using 
the identity:[5] $[\mathbf {b,\ c,\ d} ]\mathbf {a} -[\mathbf {c,\ d,\ a} ]\mathbf {b} +[\mathbf {d,\ a,\ b} ]\mathbf {c} -[\mathbf {a,\ b,\ c} ]\mathbf {d} =0\ .$ This identity can also be written using tensor notation and the Einstein summation convention as follows: $(\mathbf {a\times b} )\mathbf {\times } (\mathbf {c} \times \mathbf {d} )=\varepsilon _{ijk}a^{i}c^{j}d^{k}b^{l}-\varepsilon _{ijk}b^{i}c^{j}d^{k}a^{l}=\varepsilon _{ijk}a^{i}b^{j}d^{k}c^{l}-\varepsilon _{ijk}a^{i}b^{j}c^{k}d^{l}$ where εijk is the Levi-Civita symbol. Application The quadruple products are useful for deriving various formulas in spherical and plane geometry.[3] For example, if four points are chosen on the unit sphere, A, B, C, D, and unit vectors drawn from the center of the sphere to the four points, a, b, c, d respectively, the identity: $(\mathbf {a\times b} )\mathbf {\cdot } (\mathbf {c\times d} )=(\mathbf {a\cdot c} )(\mathbf {b\cdot d} )-(\mathbf {a\cdot d} )(\mathbf {b\cdot c} )\ ,$ in conjunction with the relation for the magnitude of the cross product: $\|\mathbf {a\times b} \|=ab\sin \theta _{ab}\ ,$ and the dot product: $\mathbf {a\cdot b} =ab\cos \theta _{ab}\ ,$ where a = b = 1 for the unit sphere, results in the identity among the angles attributed to Gauss: $\sin \theta _{ab}\sin \theta _{cd}\cos x=\cos \theta _{ac}\cos \theta _{bd}-\cos \theta _{ad}\cos \theta _{bc}\ ,$ where x is the angle between a × b and c × d, or equivalently, between the planes defined by these vectors. Josiah Willard Gibbs's pioneering work on vector calculus provides several other examples.[3] See also • Binet–Cauchy identity • Lagrange's identity Notes 1. Gibbs & Wilson 1901, §42 of section "Direct and skew products of vectors", p.77 2. Gibbs & Wilson 1901, p. 76 3. Gibbs & Wilson 1901, pp. 77 ff 4. Gibbs & Wilson 1901, p. 77 5. Gibbs & Wilson 1901, Equation 27, p. 77 References • Gibbs, Josiah Willard; Wilson, Edwin Bidwell (1901). Vector analysis: a text-book for the use of students of mathematics. Scribner.
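As a numerical sanity check of the scalar and vector quadruple product identities above (a sketch with random vectors; not part of the source):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c, d = rng.standard_normal((4, 3))

def triple(u, v, w):
    """Scalar triple product [u, v, w] = u . (v x w)."""
    return np.dot(u, np.cross(v, w))

# Scalar quadruple product: (a x b).(c x d) = (a.c)(b.d) - (a.d)(b.c)
lhs = np.dot(np.cross(a, b), np.cross(c, d))
rhs = np.dot(a, c) * np.dot(b, d) - np.dot(a, d) * np.dot(b, c)
assert np.isclose(lhs, rhs)

# Vector quadruple product: (a x b) x (c x d) = [a,b,d] c - [a,b,c] d
lhs_v = np.cross(np.cross(a, b), np.cross(c, d))
rhs_v = triple(a, b, d) * c - triple(a, b, c) * d
assert np.allclose(lhs_v, rhs_v)
print("quadruple product identities verified")
```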
Vector space model Vector space model or term vector model is an algebraic model for representing text documents (and any objects, in general) as vectors of identifiers (such as index terms). It is used in information filtering, information retrieval, indexing and relevancy rankings. Its first use was in the SMART Information Retrieval System. Definitions Documents and queries are represented as vectors. $d_{j}=(w_{1,j},w_{2,j},\dotsc ,w_{n,j})$ $q=(w_{1,q},w_{2,q},\dotsc ,w_{n,q})$ Each dimension corresponds to a separate term. If a term occurs in the document, its value in the vector is non-zero. Several different ways of computing these values, also known as (term) weights, have been developed. One of the best known schemes is tf-idf weighting (see the example below). The definition of term depends on the application. Typically terms are single words, keywords, or longer phrases. If words are chosen to be the terms, the dimensionality of the vector is the number of words in the vocabulary (the number of distinct words occurring in the corpus). Vector operations can be used to compare documents with queries. Applications Relevance rankings of documents in a keyword search can be calculated, using the assumptions of document similarities theory, by comparing the deviation of angles between each document vector and the original query vector, where the query is represented as a vector with the same dimension as the vectors that represent the documents. In practice, it is easier to calculate the cosine of the angle between the vectors, instead of the angle itself: $\cos {\theta }={\frac {\mathbf {d_{2}} \cdot \mathbf {q} }{\left\|\mathbf {d_{2}} \right\|\left\|\mathbf {q} \right\|}}$ where $\mathbf {d_{2}} \cdot \mathbf {q} $ is the intersection (i.e. the dot product) of the document vector d2 and the query vector q, $\left\|\mathbf {d_{2}} \right\|$ is the norm of vector d2, and $\left\|\mathbf {q} \right\|$ is the norm of vector q. The norm of a vector is calculated as follows: $\left\|\mathbf {q} \right\|={\sqrt {\sum _{i=1}^{n}q_{i}^{2}}}$ Using the cosine the similarity between document dj and query q can be calculated as: $\mathrm {cos} (d_{j},q)={\frac {\mathbf {d_{j}} \cdot \mathbf {q} }{\left\|\mathbf {d_{j}} \right\|\left\|\mathbf {q} \right\|}}={\frac {\sum _{i=1}^{N}w_{i,j}w_{i,q}}{{\sqrt {\sum _{i=1}^{N}w_{i,j}^{2}}}{\sqrt {\sum _{i=1}^{N}w_{i,q}^{2}}}}}$ As all vectors under consideration by this model are element-wise nonnegative, a cosine value of zero means that the query and document vector are orthogonal and have no match (i.e. the query term does not exist in the document being considered). See cosine similarity for further information. Term frequency-inverse document frequency weights In the classic vector space model proposed by Salton, Wong and Yang,[1] the term-specific weights in the document vectors are products of local and global parameters. The model is known as term frequency-inverse document frequency model. The weight vector for document d is $\mathbf {v} _{d}=[w_{1,d},w_{2,d},\ldots ,w_{N,d}]^{T}$, where $w_{t,d}=\mathrm {tf} _{t,d}\cdot \log {\frac {|D|}{|\{d'\in D\,|\,t\in d'\}|}}$ and • $\mathrm {tf} _{t,d}$ is term frequency of term t in document d (a local parameter) • $\log {\frac {|D|}{|\{d'\in D\,|\,t\in d'\}|}}$ is inverse document frequency (a global parameter). $|D|$ is the total number of documents in the document set; $|\{d'\in D\,|\,t\in d'\}|$ is the number of documents containing the term t.
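The weighting and similarity formulas above translate directly into code. A compact sketch (toy corpus assumed; not from the source) that builds tf-idf vectors and ranks documents against a query by cosine similarity:

```python
import math
from collections import Counter

docs = ["the cat sat on the mat",
        "the dog sat on the log",
        "cats and dogs are pets"]
query = "cat sat"

vocab = sorted({t for d in docs for t in d.split()})

def tf_idf_vector(text):
    """Weight w[t] = tf(t, text) * log(|D| / df(t)), per the formula above."""
    tf = Counter(text.split())
    vec = []
    for t in vocab:
        df = sum(1 for d in docs if t in d.split())  # document frequency
        idf = math.log(len(docs) / df) if df else 0.0
        vec.append(tf[t] * idf)
    return vec

def cosine(u, v):
    num = sum(x * y for x, y in zip(u, v))
    den = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return num / den if den else 0.0

q = tf_idf_vector(query)
for d in sorted(docs, key=lambda d: cosine(tf_idf_vector(d), q), reverse=True):
    print(round(cosine(tf_idf_vector(d), q), 3), d)
```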
Advantages The vector space model has the following advantages over the Standard Boolean model: 1. Simple model based on linear algebra 2. Term weights not binary 3. Allows computing a continuous degree of similarity between queries and documents 4. Allows ranking documents according to their possible relevance 5. Allows partial matching Most of these advantages are a consequence of the difference in the density of the document collection representation between Boolean and term frequency-inverse document frequency approaches. When using Boolean weights, any document lies at a vertex of an n-dimensional hypercube. Therefore, the possible document representations are $2^{n}$ and the maximum Euclidean distance between pairs is ${\sqrt {n}}$. As documents are added to the document collection, the region defined by the hypercube's vertices becomes more populated and hence denser. Unlike Boolean, when a document is added using term frequency-inverse document frequency weights, the inverse document frequencies of the terms in the new document decrease while those of the remaining terms increase. On average, as documents are added, the region where documents lie expands, regulating the density of the entire collection representation. This behavior matches the original motivation of Salton and his colleagues, that a document collection represented in a low-density region could yield better retrieval results. Limitations The vector space model has the following limitations: 1. Long documents are poorly represented because they have poor similarity values (a small scalar product and a large dimensionality) 2. Search keywords must precisely match document terms; word substrings might result in a "false positive match" 3. Semantic sensitivity; documents with similar context but different term vocabulary won't be associated, resulting in a "false negative match". 4. The order in which the terms appear in the document is lost in the vector space representation. 5. Theoretically assumes terms are statistically independent. 6. Weighting is intuitive but not very formal. Many of these difficulties can, however, be overcome by the integration of various tools, including mathematical techniques such as singular value decomposition and lexical databases such as WordNet. Models based on and extending the vector space model Models based on and extending the vector space model include: • Generalized vector space model • Latent semantic analysis • Term • Rocchio Classification • Random indexing Software that implements the vector space model The following software packages may be of interest to those wishing to experiment with vector models and implement search services based upon them. Free open source software • Apache Lucene. Apache Lucene is a high-performance, open source, full-featured text search engine library written entirely in Java. • OpenSearch (software) and Solr: the two best-known search engines (many smaller ones exist) based on Lucene. • Gensim is a Python+NumPy framework for Vector Space modelling. It contains incremental (memory-efficient) algorithms for term frequency-inverse document frequency, Latent Semantic Indexing, Random Projections and Latent Dirichlet Allocation. • Weka. Weka is a popular data mining package for Java including WordVectors and Bag Of Words models. • Word2vec. Word2vec uses vector spaces for word embeddings. Further reading • G.
Salton (1962), "Some experiments in the generation of word and document associations", Proceedings of AFIPS '62 (Fall), the December 4–6, 1962, fall joint computer conference, pages 234–250. (Early paper of Salton using the term-document matrix formalization) • G. Salton, A. Wong, and C. S. Yang (1975), "A Vector Space Model for Automatic Indexing", Communications of the ACM, vol. 18, nr. 11, pages 613–620. (Article in which a vector space model was presented) • David Dubin (2004), The Most Influential Paper Gerard Salton Never Wrote (Explains the history of the vector space model and the non-existence of a frequently cited publication) • Description of the vector space model • Description of the classic vector space model by Dr E. Garcia • Relationship of vector space search to the "k-Nearest Neighbor" search See also • Bag-of-words model • Champion list • Compound term processing • Conceptual space • Eigenvalues and eigenvectors • Inverted index • Nearest neighbor search • Sparse distributed memory • w-shingling References 1. G. Salton, A. Wong, C. S. Yang, "A vector space model for automatic indexing", Communications of the ACM, v.18 n.11, p.613–620, Nov. 1975
Vector spherical harmonics In mathematics, vector spherical harmonics (VSH) are an extension of the scalar spherical harmonics for use with vector fields. The components of the VSH are complex-valued functions expressed in the spherical coordinate basis vectors. Definition Several conventions have been used to define the VSH.[1][2][3][4][5] We follow that of Barrera et al. Given a scalar spherical harmonic Yℓm(θ, φ), we define three VSH: • $\mathbf {Y} _{\ell m}=Y_{\ell m}{\hat {\mathbf {r} }},$ • $\mathbf {\Psi } _{\ell m}=r\nabla Y_{\ell m},$ • $\mathbf {\Phi } _{\ell m}=\mathbf {r} \times \nabla Y_{\ell m},$ with ${\hat {\mathbf {r} }}$ being the unit vector along the radial direction in spherical coordinates and $\mathbf {r} $ the vector along the radial direction with the same norm as the radius, i.e., $\mathbf {r} =r{\hat {\mathbf {r} }}$. The radial factors are included to guarantee that the dimensions of the VSH are the same as those of the ordinary spherical harmonics and that the VSH do not depend on the radial spherical coordinate. The utility of these vector fields is that they separate the radial dependence from the angular one when spherical coordinates are used, so that a vector field admits a multipole expansion $\mathbf {E} =\sum _{\ell =0}^{\infty }\sum _{m=-\ell }^{\ell }\left(E_{\ell m}^{r}(r)\mathbf {Y} _{\ell m}+E_{\ell m}^{(1)}(r)\mathbf {\Psi } _{\ell m}+E_{\ell m}^{(2)}(r)\mathbf {\Phi } _{\ell m}\right).$ The labels on the components reflect that $E_{\ell m}^{r}$ is the radial component of the vector field, while $E_{\ell m}^{(1)}$ and $E_{\ell m}^{(2)}$ are transverse components (with respect to the radius vector $\mathbf {r} $). Main properties Symmetry Like the scalar spherical harmonics, the VSH satisfy ${\begin{aligned}\mathbf {Y} _{\ell ,-m}&=(-1)^{m}\mathbf {Y} _{\ell m}^{*},\\\mathbf {\Psi } _{\ell ,-m}&=(-1)^{m}\mathbf {\Psi } _{\ell m}^{*},\\\mathbf {\Phi } _{\ell ,-m}&=(-1)^{m}\mathbf {\Phi } _{\ell m}^{*},\end{aligned}}$ which cuts the number of independent functions roughly in half. The star indicates complex conjugation.
Orthogonality The VSH are orthogonal in the usual three-dimensional way at each point $\mathbf {r} $: ${\begin{aligned}\mathbf {Y} _{\ell m}(\mathbf {r} )\cdot \mathbf {\Psi } _{\ell m}(\mathbf {r} )&=0,\\\mathbf {Y} _{\ell m}(\mathbf {r} )\cdot \mathbf {\Phi } _{\ell m}(\mathbf {r} )&=0,\\\mathbf {\Psi } _{\ell m}(\mathbf {r} )\cdot \mathbf {\Phi } _{\ell m}(\mathbf {r} )&=0.\end{aligned}}$ They are also orthogonal in Hilbert space: ${\begin{aligned}\int \mathbf {Y} _{\ell m}\cdot \mathbf {Y} _{\ell 'm'}^{*}\,d\Omega &=\delta _{\ell \ell '}\delta _{mm'},\\\int \mathbf {\Psi } _{\ell m}\cdot \mathbf {\Psi } _{\ell 'm'}^{*}\,d\Omega &=\ell (\ell +1)\delta _{\ell \ell '}\delta _{mm'},\\\int \mathbf {\Phi } _{\ell m}\cdot \mathbf {\Phi } _{\ell 'm'}^{*}\,d\Omega &=\ell (\ell +1)\delta _{\ell \ell '}\delta _{mm'},\\\int \mathbf {Y} _{\ell m}\cdot \mathbf {\Psi } _{\ell 'm'}^{*}\,d\Omega &=0,\\\int \mathbf {Y} _{\ell m}\cdot \mathbf {\Phi } _{\ell 'm'}^{*}\,d\Omega &=0,\\\int \mathbf {\Psi } _{\ell m}\cdot \mathbf {\Phi } _{\ell 'm'}^{*}\,d\Omega &=0.\end{aligned}}$ An additional result at a single point $\mathbf {r} $ (not reported in Barrera et al., 1985) is, for all $\ell ,m,\ell ',m'$, ${\begin{aligned}\mathbf {Y} _{\ell m}(\mathbf {r} )\cdot \mathbf {\Psi } _{\ell 'm'}(\mathbf {r} )&=0,\\\mathbf {Y} _{\ell m}(\mathbf {r} )\cdot \mathbf {\Phi } _{\ell 'm'}(\mathbf {r} )&=0.\end{aligned}}$ Vector multipole moments The orthogonality relations allow one to compute the spherical multipole moments of a vector field as ${\begin{aligned}E_{\ell m}^{r}&=\int \mathbf {E} \cdot \mathbf {Y} _{\ell m}^{*}\,d\Omega ,\\E_{\ell m}^{(1)}&={\frac {1}{\ell (\ell +1)}}\int \mathbf {E} \cdot \mathbf {\Psi } _{\ell m}^{*}\,d\Omega ,\\E_{\ell m}^{(2)}&={\frac {1}{\ell (\ell +1)}}\int \mathbf {E} \cdot \mathbf {\Phi } _{\ell m}^{*}\,d\Omega .\end{aligned}}$ The gradient of a scalar field Given the multipole expansion of a scalar field $\phi =\sum _{\ell =0}^{\infty }\sum _{m=-\ell }^{\ell }\phi _{\ell m}(r)Y_{\ell m}(\theta ,\phi ),$ we can express its gradient in terms of the VSH as $\nabla \phi =\sum _{\ell =0}^{\infty }\sum _{m=-\ell }^{\ell }\left({\frac {d\phi _{\ell m}}{dr}}\mathbf {Y} _{\ell m}+{\frac {\phi _{\ell m}}{r}}\mathbf {\Psi } _{\ell m}\right).$ Divergence For any multipole field we have ${\begin{aligned}\nabla \cdot \left(f(r)\mathbf {Y} _{\ell m}\right)&=\left({\frac {df}{dr}}+{\frac {2}{r}}f\right)Y_{\ell m},\\\nabla \cdot \left(f(r)\mathbf {\Psi } _{\ell m}\right)&=-{\frac {\ell (\ell +1)}{r}}fY_{\ell m},\\\nabla \cdot \left(f(r)\mathbf {\Phi } _{\ell m}\right)&=0.\end{aligned}}$ By superposition we obtain the divergence of any vector field: $\nabla \cdot \mathbf {E} =\sum _{\ell =0}^{\infty }\sum _{m=-\ell }^{\ell }\left({\frac {dE_{\ell m}^{r}}{dr}}+{\frac {2}{r}}E_{\ell m}^{r}-{\frac {\ell (\ell +1)}{r}}E_{\ell m}^{(1)}\right)Y_{\ell m}.$ We see that the component on Φℓm is always solenoidal.
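The pointwise orthogonality relations above are easy to verify numerically. The following is a minimal sketch, assuming SciPy's scipy.special.sph_harm (which, unlike the convention used here, takes the azimuthal angle before the polar one); the gradient entering Ψℓm and Φℓm is approximated by central differences, and all function names are illustrative.

```python
import numpy as np
from scipy.special import sph_harm  # SciPy: sph_harm(m, l, azimuth, polar)

def Y(l, m, theta, phi):
    # theta = polar angle, phi = azimuth (this article's convention);
    # SciPy's sph_harm names and orders the angles the opposite way.
    return sph_harm(m, l, phi, theta)

def vsh(l, m, theta, phi, h=1e-6):
    """Y_lm, Psi_lm, Phi_lm as (r, theta, phi) spherical components at one point."""
    Ylm = Y(l, m, theta, phi)
    dY_dth = (Y(l, m, theta + h, phi) - Y(l, m, theta - h, phi)) / (2 * h)
    dY_dph = (Y(l, m, theta, phi + h) - Y(l, m, theta, phi - h)) / (2 * h)
    Yv  = np.array([Ylm, 0, 0])                          # Y_lm = Y rhat
    Psi = np.array([0, dY_dth, dY_dph / np.sin(theta)])  # r grad Y
    Phi = np.array([0, -dY_dph / np.sin(theta), dY_dth]) # r x grad Y
    return Yv, Psi, Phi

# Pointwise orthogonality (plain dot product, no conjugation, as stated above):
Yv, Psi, Phi = vsh(2, 1, theta=0.7, phi=1.9)
print(abs(np.dot(Yv, Psi)), abs(np.dot(Yv, Phi)), abs(np.dot(Psi, Phi)))  # all ~0
```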
Curl For any multipole field we have ${\begin{aligned}\nabla \times \left(f(r)\mathbf {Y} _{\ell m}\right)&=-{\frac {1}{r}}f\mathbf {\Phi } _{\ell m},\\\nabla \times \left(f(r)\mathbf {\Psi } _{\ell m}\right)&=\left({\frac {df}{dr}}+{\frac {1}{r}}f\right)\mathbf {\Phi } _{\ell m},\\\nabla \times \left(f(r)\mathbf {\Phi } _{\ell m}\right)&=-{\frac {\ell (\ell +1)}{r}}f\mathbf {Y} _{\ell m}-\left({\frac {df}{dr}}+{\frac {1}{r}}f\right)\mathbf {\Psi } _{\ell m}.\end{aligned}}$ By superposition we obtain the curl of any vector field: $\nabla \times \mathbf {E} =\sum _{\ell =0}^{\infty }\sum _{m=-\ell }^{\ell }\left(-{\frac {\ell (\ell +1)}{r}}E_{\ell m}^{(2)}\mathbf {Y} _{\ell m}-\left({\frac {dE_{\ell m}^{(2)}}{dr}}+{\frac {1}{r}}E_{\ell m}^{(2)}\right)\mathbf {\Psi } _{\ell m}+\left(-{\frac {1}{r}}E_{\ell m}^{r}+{\frac {dE_{\ell m}^{(1)}}{dr}}+{\frac {1}{r}}E_{\ell m}^{(1)}\right)\mathbf {\Phi } _{\ell m}\right).$ Laplacian The action of the Laplace operator $\Delta =\nabla \cdot \nabla $ separates as follows: $\Delta \left(f(r)\mathbf {Z} _{\ell m}\right)=\left({\frac {1}{r^{2}}}{\frac {\partial }{\partial r}}r^{2}{\frac {\partial f}{\partial r}}\right)\mathbf {Z} _{\ell m}+f(r)\Delta \mathbf {Z} _{\ell m},$ where $\mathbf {Z} _{\ell m}=\mathbf {Y} _{\ell m},\mathbf {\Psi } _{\ell m},\mathbf {\Phi } _{\ell m}$ and ${\begin{aligned}\Delta \mathbf {Y} _{\ell m}&=-{\frac {1}{r^{2}}}(2+\ell (\ell +1))\mathbf {Y} _{\ell m}+{\frac {2}{r^{2}}}\mathbf {\Psi } _{\ell m},\\\Delta \mathbf {\Psi } _{\ell m}&={\frac {2}{r^{2}}}\ell (\ell +1)\mathbf {Y} _{\ell m}-{\frac {1}{r^{2}}}\ell (\ell +1)\mathbf {\Psi } _{\ell m},\\\Delta \mathbf {\Phi } _{\ell m}&=-{\frac {1}{r^{2}}}\ell (\ell +1)\mathbf {\Phi } _{\ell m}.\end{aligned}}$ Also note that this action becomes symmetric, i.e. the off-diagonal coefficients are equal to $ {\frac {2}{r^{2}}}{\sqrt {\ell (\ell +1)}}$, for properly normalized VSH. Examples (Figure: visualizations of the real parts of the $\ell =1,2,3$ VSHs $\mathbf {\Psi } _{\ell m}$ and $\mathbf {\Phi } _{\ell m}$.) First vector spherical harmonics • $\ell =0$. ${\begin{aligned}\mathbf {Y} _{00}&={\sqrt {\frac {1}{4\pi }}}{\hat {\mathbf {r} }},\\\mathbf {\Psi } _{00}&=\mathbf {0} ,\\\mathbf {\Phi } _{00}&=\mathbf {0} .\end{aligned}}$ • $\ell =1$. ${\begin{aligned}\mathbf {Y} _{10}&={\sqrt {\frac {3}{4\pi }}}\cos \theta \,{\hat {\mathbf {r} }},\\\mathbf {Y} _{11}&=-{\sqrt {\frac {3}{8\pi }}}e^{i\varphi }\sin \theta \,{\hat {\mathbf {r} }},\end{aligned}}$ ${\begin{aligned}\mathbf {\Psi } _{10}&=-{\sqrt {\frac {3}{4\pi }}}\sin \theta \,{\hat {\mathbf {\theta } }},\\\mathbf {\Psi } _{11}&=-{\sqrt {\frac {3}{8\pi }}}e^{i\varphi }\left(\cos \theta \,{\hat {\mathbf {\theta } }}+i\,{\hat {\mathbf {\varphi } }}\right),\end{aligned}}$ ${\begin{aligned}\mathbf {\Phi } _{10}&=-{\sqrt {\frac {3}{4\pi }}}\sin \theta \,{\hat {\mathbf {\varphi } }},\\\mathbf {\Phi } _{11}&={\sqrt {\frac {3}{8\pi }}}e^{i\varphi }\left(i\,{\hat {\mathbf {\theta } }}-\cos \theta \,{\hat {\mathbf {\varphi } }}\right).\end{aligned}}$ • $\ell =2$.
${\begin{aligned}\mathbf {Y} _{20}&={\frac {1}{4}}{\sqrt {\frac {5}{\pi }}}\,(3\cos ^{2}\theta -1)\,{\hat {\mathbf {r} }},\\\mathbf {Y} _{21}&=-{\sqrt {\frac {15}{8\pi }}}\,\sin \theta \,\cos \theta \,e^{i\varphi }\,{\hat {\mathbf {r} }},\\\mathbf {Y} _{22}&={\frac {1}{4}}{\sqrt {\frac {15}{2\pi }}}\,\sin ^{2}\theta \,e^{2i\varphi }\,{\hat {\mathbf {r} }}.\end{aligned}}$ ${\begin{aligned}\mathbf {\Psi } _{20}&=-{\frac {3}{2}}{\sqrt {\frac {5}{\pi }}}\,\sin \theta \,\cos \theta \,{\hat {\mathbf {\theta } }},\\\mathbf {\Psi } _{21}&=-{\sqrt {\frac {15}{8\pi }}}\,e^{i\varphi }\,\left(\cos 2\theta \,{\hat {\mathbf {\theta } }}+i\cos \theta \,{\hat {\mathbf {\varphi } }}\right),\\\mathbf {\Psi } _{22}&={\sqrt {\frac {15}{8\pi }}}\,\sin \theta \,e^{2i\varphi }\,\left(\cos \theta \,{\hat {\mathbf {\theta } }}+i\,{\hat {\mathbf {\varphi } }}\right).\end{aligned}}$ ${\begin{aligned}\mathbf {\Phi } _{20}&=-{\frac {3}{2}}{\sqrt {\frac {5}{\pi }}}\sin \theta \,\cos \theta \,{\hat {\mathbf {\varphi } }},\\\mathbf {\Phi } _{21}&={\sqrt {\frac {15}{8\pi }}}\,e^{i\varphi }\,\left(i\cos \theta \,{\hat {\mathbf {\theta } }}-\cos 2\theta \,{\hat {\mathbf {\varphi } }}\right),\\\mathbf {\Phi } _{22}&={\sqrt {\frac {15}{8\pi }}}\,\sin \theta \,e^{2i\varphi }\,\left(-i\,{\hat {\mathbf {\theta } }}+\cos \theta \,{\hat {\mathbf {\varphi } }}\right).\end{aligned}}$ Expressions for negative values of m are obtained by applying the symmetry relations. Applications Electrodynamics The VSH are especially useful in the study of multipole radiation fields. For instance, a magnetic multipole is due to an oscillating current with angular frequency $\omega $ and complex amplitude ${\hat {\mathbf {J} }}=J(r)\mathbf {\Phi } _{\ell m},$ and the corresponding electric and magnetic fields can be written as ${\begin{aligned}{\hat {\mathbf {E} }}&=E(r)\mathbf {\Phi } _{\ell m},\\{\hat {\mathbf {B} }}&=B^{r}(r)\mathbf {Y} _{\ell m}+B^{(1)}(r)\mathbf {\Psi } _{\ell m}.\end{aligned}}$ Substituting into Maxwell's equations, Gauss's law is automatically satisfied $\nabla \cdot {\hat {\mathbf {E} }}=0,$ while Faraday's law decouples as $\nabla \times {\hat {\mathbf {E} }}=-i\omega {\hat {\mathbf {B} }}\quad \Rightarrow \quad {\begin{cases}{\dfrac {\ell (\ell +1)}{r}}E=i\omega B^{r},\\{\dfrac {dE}{dr}}+{\dfrac {E}{r}}=i\omega B^{(1)}.\end{cases}}$ Gauss's law for the magnetic field implies $\nabla \cdot {\hat {\mathbf {B} }}=0\quad \Rightarrow \quad {\frac {dB^{r}}{dr}}+{\frac {2}{r}}B^{r}-{\frac {\ell (\ell +1)}{r}}B^{(1)}=0,$ and the Ampère–Maxwell equation gives $\nabla \times {\hat {\mathbf {B} }}=\mu _{0}{\hat {\mathbf {J} }}+i\mu _{0}\varepsilon _{0}\omega {\hat {\mathbf {E} }}\quad \Rightarrow \quad -{\frac {B^{r}}{r}}+{\frac {dB^{(1)}}{dr}}+{\frac {B^{(1)}}{r}}=\mu _{0}J+i\omega \mu _{0}\varepsilon _{0}E.$ In this way, the partial differential equations have been transformed into a set of ordinary differential equations. Alternative definition In many applications, vector spherical harmonics are defined as a fundamental set of solutions of the vector Helmholtz equation in spherical coordinates.[6][7] In this case, vector spherical harmonics are generated by scalar functions, which are solutions of the scalar Helmholtz equation with the wavevector $\mathbf {k} $.
${\begin{array}{l}{\psi _{emn}=\cos m\varphi P_{n}^{m}(\cos \vartheta )z_{n}({k}r)}\\{\psi _{omn}=\sin m\varphi P_{n}^{m}(\cos \vartheta )z_{n}({k}r)}\end{array}}$ Here $P_{n}^{m}(\cos \theta )$ are the associated Legendre polynomials, and $z_{n}({k}r)$ are any of the spherical Bessel functions. Vector spherical harmonics are defined as: longitudinal harmonics $\mathbf {L} _{^{e}_{o}mn}=\mathbf {\nabla } \psi _{^{e}_{o}mn}$ magnetic harmonics $\mathbf {M} _{^{e}_{o}mn}=\nabla \times \left(\mathbf {r} \psi _{^{e}_{o}mn}\right)$ electric harmonics $\mathbf {N} _{^{e}_{o}mn}={\frac {\nabla \times \mathbf {M} _{^{e}_{o}mn}}{k}}$ Here we use harmonics with a real-valued angular part, where $m\geq 0$, but complex functions can be introduced in the same way. Let us introduce the notation $\rho =kr$. In component form the vector spherical harmonics are written as: ${\begin{aligned}{\mathbf {M} _{emn}(k,\mathbf {r} )=\qquad {{\frac {-m}{\sin(\theta )}}\sin(m\varphi )P_{n}^{m}(\cos(\theta ))}z_{n}(\rho )\mathbf {e} _{\theta }}\\{{}-\cos(m\varphi ){\frac {dP_{n}^{m}(\cos(\theta ))}{d\theta }}}z_{n}(\rho )\mathbf {e} _{\varphi }\end{aligned}}$ ${\begin{aligned}{\mathbf {M} _{omn}(k,\mathbf {r} )=\qquad {{\frac {m}{\sin(\theta )}}\cos(m\varphi )P_{n}^{m}(\cos(\theta ))}}z_{n}(\rho )\mathbf {e} _{\theta }\\{{}-\sin(m\varphi ){\frac {dP_{n}^{m}(\cos(\theta ))}{d\theta }}z_{n}(\rho )\mathbf {e} _{\varphi }}\end{aligned}}$ ${\begin{aligned}{\mathbf {N} _{emn}(k,\mathbf {r} )=\qquad {\frac {z_{n}(\rho )}{\rho }}\cos(m\varphi )n(n+1)P_{n}^{m}(\cos(\theta ))\mathbf {e} _{\mathbf {r} }}\\{{}+\cos(m\varphi ){\frac {dP_{n}^{m}(\cos(\theta ))}{d\theta }}}{\frac {1}{\rho }}{\frac {d}{d\rho }}\left[\rho z_{n}(\rho )\right]\mathbf {e} _{\theta }\\{{}-m\sin(m\varphi ){\frac {P_{n}^{m}(\cos(\theta ))}{\sin(\theta )}}}{\frac {1}{\rho }}{\frac {d}{d\rho }}\left[\rho z_{n}(\rho )\right]\mathbf {e} _{\varphi }\end{aligned}}$ ${\begin{aligned}\mathbf {N} _{omn}(k,\mathbf {r} )=\qquad {\frac {z_{n}(\rho )}{\rho }}\sin(m\varphi )n(n+1)P_{n}^{m}(\cos(\theta ))\mathbf {e} _{\mathbf {r} }\\{}+\sin(m\varphi ){\frac {dP_{n}^{m}(\cos(\theta ))}{d\theta }}{\frac {1}{\rho }}{\frac {d}{d\rho }}\left[\rho z_{n}(\rho )\right]\mathbf {e} _{\theta }\\{}+{m\cos(m\varphi ){\frac {P_{n}^{m}(\cos(\theta ))}{\sin(\theta )}}}{\frac {1}{\rho }}{\frac {d}{d\rho }}\left[\rho z_{n}(\rho )\right]\mathbf {e} _{\varphi }\end{aligned}}$ There is no radial part for magnetic harmonics. For electric harmonics, the radial part decreases faster than the angular part, and for large $\rho $ it can be neglected. We can also see that for electric and magnetic harmonics the angular parts are the same up to a permutation of the polar and azimuthal unit vectors, so for large $\rho $ the electric and magnetic harmonic vectors are equal in magnitude and perpendicular to each other.
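For a concrete check of these component formulas, the even magnetic harmonic can be evaluated numerically. The following is a minimal sketch assuming SciPy, with $z_{n}$ taken to be the spherical Bessel function $j_{n}$ (scipy.special.spherical_jn) and $P_{n}^{m}$ from scipy.special.lpmv; note that lpmv includes the Condon–Shortley phase, a sign convention that varies between references, and the derivative $dP_{n}^{m}(\cos \theta )/d\theta $ is approximated here by a central difference.

```python
import numpy as np
from scipy.special import lpmv, spherical_jn

def M_even(m, n, k, r, theta, phi, h=1e-6):
    """Components (e_r, e_theta, e_phi) of M_{emn} per the formulas above."""
    zn = spherical_jn(n, k * r)           # z_n taken as the spherical Bessel j_n
    P  = lpmv(m, n, np.cos(theta))        # associated Legendre P_n^m(cos(theta))
    # dP_n^m(cos(theta))/dtheta by a central difference in theta
    dP = (lpmv(m, n, np.cos(theta + h)) - lpmv(m, n, np.cos(theta - h))) / (2 * h)
    comp_theta = -(m / np.sin(theta)) * np.sin(m * phi) * P * zn
    comp_phi   = -np.cos(m * phi) * dP * zn
    return np.array([0.0, comp_theta, comp_phi])  # no radial part, as noted above

print(M_even(1, 2, k=1.0, r=3.0, theta=0.8, phi=0.3))
```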
Longitudinal harmonics: ${\begin{aligned}\mathbf {L} _{^{e}_{o}{mn}}(k,\mathbf {r} ){}=\qquad &{\frac {\partial }{\partial r}}z_{n}(kr)P_{n}^{m}(\cos \theta ){^{\cos }_{\sin }}{m\varphi }\mathbf {e} _{r}\\{}+{}&{\frac {1}{r}}z_{n}(kr){\frac {\partial }{\partial \theta }}P_{n}^{m}(\cos \theta ){^{\cos }_{\sin }}m\varphi \mathbf {e} _{\theta }\\{}\mp {}&{\frac {m}{r\sin \theta }}z_{n}(kr)P_{n}^{m}(\cos \theta ){^{\sin }_{\cos }}m\varphi \mathbf {e} _{\varphi }\end{aligned}}$ Orthogonality The solutions of the vector Helmholtz equation obey the following orthogonality relations:[7] ${\begin{aligned}\int _{0}^{2\pi }\int _{0}^{\pi }\mathbf {L} _{^{e}_{o}mn}\cdot \mathbf {L} _{^{e}_{o}mn}\sin \vartheta d\vartheta d\varphi &=(1+\delta _{m,0}){\frac {2\pi }{(2n+1)^{2}}}{\frac {(n+m)!}{(n-m)!}}k^{2}\left\{n\left[z_{n-1}(kr)\right]^{2}+(n+1)\left[z_{n+1}(kr)\right]^{2}\right\}\\[3pt]\int _{0}^{2\pi }\int _{0}^{\pi }\mathbf {M} _{^{e}_{o}mn}\cdot \mathbf {M} _{^{e}_{o}mn}\sin \vartheta d\vartheta d\varphi &=(1+\delta _{m,0}){\frac {2\pi }{2n+1}}{\frac {(n+m)!}{(n-m)!}}n(n+1)\left[z_{n}(kr)\right]^{2}\\[3pt]\int _{0}^{2\pi }\int _{0}^{\pi }\mathbf {N} _{^{e}_{o}mn}\cdot \mathbf {N} _{^{e}_{o}mn}\sin \vartheta d\vartheta d\varphi &=(1+\delta _{m,0}){\frac {2\pi }{(2n+1)^{2}}}{\frac {(n+m)!}{(n-m)!}}n(n+1)\left\{(n+1)\left[z_{n-1}(kr)\right]^{2}+n\left[z_{n+1}(kr)\right]^{2}\right\}\\[3pt]\int _{0}^{\pi }\int _{0}^{2\pi }\mathbf {L} _{^{e}_{o}mn}\cdot \mathbf {N} _{^{e}_{o}mn}\sin \vartheta d\vartheta d\varphi &=(1+\delta _{m,0}){\frac {2\pi }{(2n+1)^{2}}}{\frac {(n+m)!}{(n-m)!}}n(n+1)k\left\{\left[z_{n-1}(kr)\right]^{2}-\left[z_{n+1}(kr)\right]^{2}\right\}\end{aligned}}$ All other integrals over the angles between different functions or functions with different indices are equal to zero. Rotation and inversion Under rotation, vector spherical harmonics transform into one another in the same way as the corresponding scalar spherical functions from which the given type of vector harmonics is generated. For example, if the generating functions are the usual spherical harmonics, then the vector harmonics will also be transformed through the Wigner D-matrices[8][9][10] ${\hat {D}}(\alpha ,\beta ,\gamma )\mathbf {Y} _{JM}^{(s)}(\theta ,\varphi )=\sum _{M'=-J}^{J}[D_{MM'}^{(J)}(\alpha ,\beta ,\gamma )]^{*}\mathbf {Y} _{JM'}^{(s)}(\theta ,\varphi ).$ The behavior under rotations is the same for electric, magnetic and longitudinal harmonics. Under inversion, electric and longitudinal spherical harmonics behave in the same way as scalar spherical functions, i.e. ${\hat {I}}\mathbf {N} _{JM}(\theta ,\varphi )=(-1)^{J}\mathbf {N} _{JM}(\theta ,\varphi ),$ and magnetic ones have the opposite parity: ${\hat {I}}\mathbf {M} _{JM}(\theta ,\varphi )=(-1)^{J+1}\mathbf {M} _{JM}(\theta ,\varphi ).$ Fluid dynamics In the calculation of Stokes' law for the drag that a viscous fluid exerts on a small spherical particle, the velocity distribution obeys the Navier–Stokes equations with inertia neglected, i.e., ${\begin{aligned}0&=\nabla \cdot \mathbf {v} ,\\\mathbf {0} &=-\nabla p+\eta \nabla ^{2}\mathbf {v} ,\end{aligned}}$ with the boundary conditions $\mathbf {v} ={\begin{cases}\mathbf {0} &r=a,\\-\mathbf {U} _{0}&r\to \infty .\end{cases}}$ where U0 is the relative velocity of the particle with respect to the fluid far from the particle.
In spherical coordinates this velocity at infinity can be written as $\mathbf {U} _{0}=U_{0}\left(\cos \theta \,{\hat {\mathbf {r} }}-\sin \theta \,{\hat {\mathbf {\theta } }}\right)=U_{0}\left(\mathbf {Y} _{10}+\mathbf {\Psi } _{10}\right).$ The last expression suggests an expansion in spherical harmonics for the liquid velocity and the pressure ${\begin{aligned}p&=p(r)Y_{10},\\\mathbf {v} &=v^{r}(r)\mathbf {Y} _{10}+v^{(1)}(r)\mathbf {\Psi } _{10}.\end{aligned}}$ Substitution into the Navier–Stokes equations produces a set of ordinary differential equations for the coefficients. Integral relations Here the following definitions are used: ${\begin{aligned}Y_{emn}&=\cos m\varphi P_{n}^{m}(\cos \theta )\\Y_{omn}&=\sin m\varphi P_{n}^{m}(\cos \theta )\end{aligned}}$ $\mathbf {X} _{^{e}_{o}mn}\left({\frac {\mathbf {k} }{k}}\right)=\nabla \times \left(\mathbf {k} Y_{^{o}_{e}mn}\left({\frac {\mathbf {k} }{k}}\right)\right)$ $\mathbf {Z} _{^{o}_{e}mn}\left({\frac {\mathbf {k} }{k}}\right)=i{\frac {\mathbf {k} }{k}}\times \mathbf {X} _{^{e}_{o}mn}\left({\frac {\mathbf {k} }{k}}\right)$ In the case when the $z_{n}$ are spherical Bessel functions, the plane wave expansion yields the following integral relations:[11] $\mathbf {N} _{pmn}(k,\mathbf {r} )={\frac {i^{-n}}{4\pi }}\int \mathbf {Z} _{pmn}\left({\frac {\mathbf {k} }{k}}\right)e^{i\mathbf {k} \mathbf {r} }d\Omega _{k}$ $\mathbf {M} _{pmn}(k,\mathbf {r} )={\frac {i^{-n}}{4\pi }}\int \mathbf {X} _{pmn}\left({\frac {\mathbf {k} }{k}}\right)e^{i\mathbf {k} \mathbf {r} }d\Omega _{k}$ In the case when the $z_{n}$ are spherical Hankel functions, different formulae must be used.[12][11] For vector spherical harmonics the following relations are obtained: $\mathbf {M} _{pmn}^{(3)}(k,\mathbf {r} )={\frac {i^{-n}}{2\pi k}}\iint _{-\infty }^{\infty }dk_{\|}{\frac {e^{i\left(k_{x}x+k_{y}y\pm k_{z}z\right)}}{k_{z}}}\mathbf {X} _{pmn}\left({\frac {\mathbf {k} }{k}}\right)$ $\mathbf {N} _{pmn}^{(3)}(k,\mathbf {r} )={\frac {i^{-n}}{2\pi k}}\iint _{-\infty }^{\infty }dk_{\|}{\frac {e^{i\left(k_{x}x+k_{y}y\pm k_{z}z\right)}}{k_{z}}}\mathbf {Z} _{pmn}\left({\frac {\mathbf {k} }{k}}\right)$ where $ k_{z}={\sqrt {k^{2}-k_{x}^{2}-k_{y}^{2}}}$, and the index $(3)$ means that spherical Hankel functions are used. See also • Spherical harmonics • Spinor spherical harmonics • Spin-weighted spherical harmonics • Electromagnetic radiation • Spherical basis References 1. Barrera, R G; Estevez, G A; Giraldo, J (1985-10-01). "Vector spherical harmonics and their application to magnetostatics". European Journal of Physics. IOP Publishing. 6 (4): 287–294. Bibcode:1985EJPh....6..287B. CiteSeerX 10.1.1.718.2001. doi:10.1088/0143-0807/6/4/014. ISSN 0143-0807. S2CID 250894245. 2. Carrascal, B; Estevez, G A; Lee, Peilian; Lorenzo, V (1991-07-01). "Vector spherical harmonics and their application to classical electrodynamics". European Journal of Physics. IOP Publishing. 12 (4): 184–191. Bibcode:1991EJPh...12..184C. doi:10.1088/0143-0807/12/4/007. ISSN 0143-0807. S2CID 250886412. 3. Hill, E. L. (1954). "The Theory of Vector Spherical Harmonics" (PDF). American Journal of Physics. American Association of Physics Teachers (AAPT). 22 (4): 211–214. Bibcode:1954AmJPh..22..211H. doi:10.1119/1.1933682. ISSN 0002-9505. S2CID 124182424. Archived from the original (PDF) on 2020-04-12. 4. Weinberg, Erick J. (1994-01-15). "Monopole vector spherical harmonics". Physical Review D. American Physical Society (APS). 49 (2): 1086–1092. arXiv:hep-th/9308054. Bibcode:1994PhRvD..49.1086W.
doi:10.1103/physrevd.49.1086. ISSN 0556-2821. PMID 10017069. S2CID 6429605. 5. P. M. Morse and H. Feshbach, Methods of Theoretical Physics, Part II, New York: McGraw-Hill, pp. 1898–1901 (1953) 6. Bohren, Craig F. and Donald R. Huffman, Absorption and Scattering of Light by Small Particles, New York: Wiley, 1998, 530 p., ISBN 0-471-29340-7, ISBN 978-0-471-29340-8 (second edition) 7. Stratton, J. A. (1941). Electromagnetic Theory. New York: McGraw-Hill. 8. D. A. Varshalovich, A. N. Moskalev, and V. K. Khersonskii, Quantum Theory of Angular Momentum [in Russian], Nauka, Leningrad (1975) 9. H. Zhang, Y. Han, Addition theorem for the spherical vector wave functions and its application to the beam shape coefficients. J. Opt. Soc. Am. B, 25(2):255–260, Feb 2008. 10. S. Stein, Addition theorems for spherical wave functions, Quarterly of Applied Mathematics, 19(1):15–24, 1961. 11. B. Stout, Spherical harmonic lattice sums for gratings. In: Popov E, editor. Gratings: theory and numeric applications. Institut Fresnel, Universite d'Aix-Marseille 6 (2012). 12. R. C. Wittmann, Spherical wave operators and the translation formulas, IEEE Transactions on Antennas and Propagation 36, 1078–1087 (1988) External links • Vector Spherical Harmonics at Eric Weisstein's Mathworld
Surface integral In mathematics, particularly multivariable calculus, a surface integral is a generalization of multiple integrals to integration over surfaces. It can be thought of as the double integral analogue of the line integral. Given a surface, one may integrate a scalar field (that is, a function of position which returns a scalar as a value) over the surface, or a vector field (that is, a function which returns a vector as value). If a region R is not flat, then it is called a surface. Surface integrals have applications in physics, particularly with the theories of classical electromagnetism. Surface integrals of scalar fields Assume that f is a scalar, vector, or tensor field defined on a surface S. To find an explicit formula for the surface integral of f over S, we need to parameterize S by defining a system of curvilinear coordinates on S, like the latitude and longitude on a sphere. Let such a parameterization be r(s, t), where (s, t) varies in some region T in the plane. Then, the surface integral is given by $\iint _{S}f\,\mathrm {d} S=\iint _{T}f(\mathbf {r} (s,t))\left\|{\partial \mathbf {r} \over \partial s}\times {\partial \mathbf {r} \over \partial t}\right\|\mathrm {d} s\,\mathrm {d} t$ where the expression between bars on the right-hand side is the magnitude of the cross product of the partial derivatives of r(s, t), and is known as the surface element (which would, for example, yield a smaller value near the poles of a sphere, where the lines of longitude converge more dramatically and latitudinal coordinates are more compactly spaced).
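As a sanity check of this formula, the integral can be approximated numerically from the parameterization alone. The following is a minimal NumPy sketch (midpoint rule in the parameters, central differences for the partial derivatives; all names are illustrative), which recovers the area $4\pi a^{2}$ of a sphere of radius a by integrating f = 1:

```python
import numpy as np

def surface_integral(f, r, s_lim, t_lim, ns=400, nt=400):
    """Midpoint-rule approximation of  iint_T f(r(s,t)) ||r_s x r_t|| ds dt."""
    ds = (s_lim[1] - s_lim[0]) / ns
    dt = (t_lim[1] - t_lim[0]) / nt
    s = s_lim[0] + (np.arange(ns) + 0.5) * ds
    t = t_lim[0] + (np.arange(nt) + 0.5) * dt
    S, T = np.meshgrid(s, t, indexing="ij")
    h = 1e-6                                  # central differences for r_s and r_t
    rs = (r(S + h, T) - r(S - h, T)) / (2 * h)
    rt = (r(S, T + h) - r(S, T - h)) / (2 * h)
    dA = np.linalg.norm(np.cross(rs, rt, axis=0), axis=0) * ds * dt
    return float(np.sum(f(r(S, T)) * dA))

# Sphere of radius a, parameterized by polar angle s and azimuth t.
a = 2.0
sphere = lambda s, t: np.array([a * np.sin(s) * np.cos(t),
                                a * np.sin(s) * np.sin(t),
                                a * np.cos(s)])
print(surface_integral(lambda p: 1.0, sphere, (0.0, np.pi), (0.0, 2 * np.pi)),
      4 * np.pi * a**2)   # both ~50.27
```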
The surface integral can also be expressed in the equivalent form $\iint _{S}f\,\mathrm {d} S=\iint _{T}f(\mathbf {r} (s,t)){\sqrt {g}}\,\mathrm {d} s\,\mathrm {d} t$ where g is the determinant of the first fundamental form of the surface mapping r(s, t).[1][2] For example, if we want to find the surface area of the graph of some scalar function, say z = f(x, y), we have $A=\iint _{S}\,\mathrm {d} S=\iint _{T}\left\|{\partial \mathbf {r} \over \partial x}\times {\partial \mathbf {r} \over \partial y}\right\|\mathrm {d} x\,\mathrm {d} y$ where r = (x, y, z) = (x, y, f(x, y)), so that ${\partial \mathbf {r} \over \partial x}=(1,0,f_{x}(x,y))$ and ${\partial \mathbf {r} \over \partial y}=(0,1,f_{y}(x,y))$. Hence ${\begin{aligned}A&{}=\iint _{T}\left\|\left(1,0,{\partial f \over \partial x}\right)\times \left(0,1,{\partial f \over \partial y}\right)\right\|\mathrm {d} x\,\mathrm {d} y\\&{}=\iint _{T}\left\|\left(-{\partial f \over \partial x},-{\partial f \over \partial y},1\right)\right\|\mathrm {d} x\,\mathrm {d} y\\&{}=\iint _{T}{\sqrt {\left({\partial f \over \partial x}\right)^{2}+\left({\partial f \over \partial y}\right)^{2}+1}}\,\,\mathrm {d} x\,\mathrm {d} y\end{aligned}}$ which is the standard formula for the area of a surface described this way. One can recognize the vector in the second-last line above as the normal vector to the surface. Because of the presence of the cross product, the above formulas only work for surfaces embedded in three-dimensional space. This can be seen as integrating a Riemannian volume form on the parameterized surface, where the metric tensor is given by the first fundamental form of the surface. Surface integrals of vector fields (Figure: a curved surface S with a vector field F passing through it, divided into small patches $dS=du\,dv$ by a parameterization of the surface; the flux through each patch is the normal component of the field, $F_{n}(\mathbf {x} )=F(\mathbf {x} )\cos \theta =\mathbf {F} (\mathbf {x} )\cdot \mathbf {n} (\mathbf {x} )$, multiplied by the patch area dS, and in the limit as the patches become infinitesimally small the sum of these contributions becomes the surface integral $ \int _{S}\mathbf {F\cdot n} \;dS$.) Consider a vector field v on a surface S, that is, for each r = (x, y, z) in S, v(r) is a vector. The integral of v on S was defined in the previous section. Suppose now that it is desired to integrate only the normal component of the vector field over the surface, the result being a scalar, usually called the flux passing through the surface. For example, imagine that we have a fluid flowing through S, such that v(r) determines the velocity of the fluid at r. The flux is defined as the quantity of fluid flowing through S per unit time. This illustration implies that if the vector field is tangent to S at each point, then the flux is zero because the fluid just flows in parallel to S, and neither in nor out. This also implies that if v does not just flow along S, that is, if v has both a tangential and a normal component, then only the normal component contributes to the flux.
Based on this reasoning, to find the flux, we need to take the dot product of v with the unit surface normal n to S at each point, which will give us a scalar field, and integrate the obtained field as above. In other words, we have to integrate v with respect to the vector surface element $\mathrm {d} \mathbf {s} ={\mathbf {n} }\mathrm {d} s$, which is the vector normal to S at the given point, whose magnitude is $\mathrm {d} s=\|\mathrm {d} {\mathbf {s} }\|.$ We find the formula ${\begin{aligned}\iint _{S}{\mathbf {v} }\cdot \mathrm {d} {\mathbf {s} }&=\iint _{S}\left({\mathbf {v} }\cdot {\mathbf {n} }\right)\,\mathrm {d} s\\&{}=\iint _{T}\left({\mathbf {v} }(\mathbf {r} (s,t))\cdot {{\frac {\partial \mathbf {r} }{\partial s}}\times {\frac {\partial \mathbf {r} }{\partial t}} \over \left\|{\frac {\partial \mathbf {r} }{\partial s}}\times {\frac {\partial \mathbf {r} }{\partial t}}\right\|}\right)\left\|{\frac {\partial \mathbf {r} }{\partial s}}\times {\frac {\partial \mathbf {r} }{\partial t}}\right\|\mathrm {d} s\,\mathrm {d} t\\&{}=\iint _{T}{\mathbf {v} }(\mathbf {r} (s,t))\cdot \left({\frac {\partial \mathbf {r} }{\partial s}}\times {\frac {\partial \mathbf {r} }{\partial t}}\right)\mathrm {d} s\,\mathrm {d} t.\end{aligned}}$ The cross product on the right-hand side of this expression is a (not necessarily unit) surface normal determined by the parametrisation. This formula defines the integral on the left (note the dot and the vector notation for the surface element). We may also interpret this as a special case of integrating 2-forms, where we identify the vector field with a 1-form, and then integrate its Hodge dual over the surface. This is equivalent to integrating $\left\langle \mathbf {v} ,\mathbf {n} \right\rangle \mathrm {d} S$ over the immersed surface, where $\mathrm {d} S$ is the induced volume form on the surface, obtained by interior multiplication of the Riemannian metric of the ambient space with the outward normal of the surface. Surface integrals of differential 2-forms Let $f=f_{z}\,\mathrm {d} x\wedge \mathrm {d} y+f_{x}\,\mathrm {d} y\wedge \mathrm {d} z+f_{y}\,\mathrm {d} z\wedge \mathrm {d} x$ be a differential 2-form defined on a surface S, and let $\mathbf {r} (s,t)=(x(s,t),y(s,t),z(s,t))$ be an orientation preserving parametrization of S with $(s,t)$ in D. Changing coordinates from $(x,y)$ to $(s,t)$, the differential forms transform as $\mathrm {d} x={\frac {\partial x}{\partial s}}\mathrm {d} s+{\frac {\partial x}{\partial t}}\mathrm {d} t$ $\mathrm {d} y={\frac {\partial y}{\partial s}}\mathrm {d} s+{\frac {\partial y}{\partial t}}\mathrm {d} t$ So $\mathrm {d} x\wedge \mathrm {d} y$ transforms to ${\frac {\partial (x,y)}{\partial (s,t)}}\mathrm {d} s\wedge \mathrm {d} t$, where ${\frac {\partial (x,y)}{\partial (s,t)}}$ denotes the determinant of the Jacobian of the transition function from $(s,t)$ to $(x,y)$. The transformations of the other forms are similar. Then, the surface integral of f on S is given by $\iint _{D}\left[f_{z}(\mathbf {r} (s,t)){\frac {\partial (x,y)}{\partial (s,t)}}+f_{x}(\mathbf {r} (s,t)){\frac {\partial (y,z)}{\partial (s,t)}}+f_{y}(\mathbf {r} (s,t)){\frac {\partial (z,x)}{\partial (s,t)}}\right]\,\mathrm {d} s\,\mathrm {d} t$ where ${\partial \mathbf {r} \over \partial s}\times {\partial \mathbf {r} \over \partial t}=\left({\frac {\partial (y,z)}{\partial (s,t)}},{\frac {\partial (z,x)}{\partial (s,t)}},{\frac {\partial (x,y)}{\partial (s,t)}}\right)$ is the surface element normal to S.
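The flux formula derived above lends itself to the same numerical treatment as the scalar case: replacing the magnitude of ${\partial \mathbf {r} \over \partial s}\times {\partial \mathbf {r} \over \partial t}$ by the vector itself gives a midpoint-rule flux estimate. A minimal sketch (names illustrative; the chosen parameterization of the unit sphere happens to yield the outward normal) computing the flux of v(r) = r, which equals the sphere's area 4π since v · n = 1 there:

```python
import numpy as np

def flux(v, r, s_lim, t_lim, ns=400, nt=400):
    """Midpoint-rule approximation of  iint_T v(r(s,t)) . (r_s x r_t) ds dt."""
    ds = (s_lim[1] - s_lim[0]) / ns
    dt = (t_lim[1] - t_lim[0]) / nt
    s = s_lim[0] + (np.arange(ns) + 0.5) * ds
    t = t_lim[0] + (np.arange(nt) + 0.5) * dt
    S, T = np.meshgrid(s, t, indexing="ij")
    h = 1e-6
    rs = (r(S + h, T) - r(S - h, T)) / (2 * h)
    rt = (r(S, T + h) - r(S, T - h)) / (2 * h)
    n = np.cross(rs, rt, axis=0)          # unnormalized normal r_s x r_t
    return float(np.sum(np.sum(v(r(S, T)) * n, axis=0)) * ds * dt)

unit_sphere = lambda s, t: np.array([np.sin(s) * np.cos(t),
                                     np.sin(s) * np.sin(t),
                                     np.cos(s)])
print(flux(lambda p: p, unit_sphere, (0.0, np.pi), (0.0, 2 * np.pi)), 4 * np.pi)
```

Reversing the roles of s and t would flip the sign of the computed flux, which previews the orientation issue discussed below.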
The surface integral of the 2-form f above is the same as the surface integral of the vector field which has as components $f_{x}$, $f_{y}$ and $f_{z}$. Theorems involving surface integrals Various useful results for surface integrals can be derived using differential geometry and vector calculus, such as the divergence theorem, and its generalization, Stokes' theorem. Dependence on parametrization Notice that we defined the surface integral by using a parametrization of the surface S. We know that a given surface might have several parametrizations. For example, if we move the locations of the North Pole and the South Pole on a sphere, the latitude and longitude change for all the points on the sphere. A natural question is then whether the definition of the surface integral depends on the chosen parametrization. For integrals of scalar fields, the answer to this question is simple; the value of the surface integral will be the same no matter what parametrization one uses. For integrals of vector fields, things are more complicated because the surface normal is involved. It can be proven that given two parametrizations of the same surface, whose surface normals point in the same direction, one obtains the same value for the surface integral with both parametrizations. If, however, the normals for these parametrizations point in opposite directions, the value of the surface integral obtained using one parametrization is the negative of the one obtained via the other parametrization. It follows that given a surface, we do not need to stick to any unique parametrization, but, when integrating vector fields, we do need to decide in advance in which direction the normal will point and then choose any parametrization consistent with that direction. Another issue is that sometimes surfaces do not have parametrizations which cover the whole surface. The obvious solution is then to split that surface into several pieces, calculate the surface integral on each piece, and then add them all up. This is indeed how things work, but when integrating vector fields, one again needs to be careful in choosing the normal-pointing vector for each piece of the surface, so that when the pieces are put back together, the results are consistent. For a cylinder split into a side region and two circular caps, this means that if we decide that for the side region the normal will point out of the body, then for the top and bottom circular parts, the normal must point out of the body too. Last, there are surfaces which do not admit a surface normal at each point with consistent results (for example, the Möbius strip). If such a surface is split into pieces, on each piece a parametrization and corresponding surface normal is chosen, and the pieces are put back together, we will find that the normal vectors coming from different pieces cannot be reconciled. This means that at some junction between two pieces we will have normal vectors pointing in opposite directions. Such a surface is called non-orientable, and on this kind of surface, one cannot talk about integrating vector fields. See also • Divergence theorem • Stokes' theorem • Line integral • Volume element • Volume integral • Cartesian coordinate system • Volume and surface area elements in spherical coordinate systems • Volume and surface area elements in cylindrical coordinate systems • Holstein–Herring method References 1. Edwards, C. H. (1994). Advanced Calculus of Several Variables. Mineola, NY: Dover. p. 335. ISBN 0-486-68336-2. 2. Hazewinkel, Michiel (2001).
"Surface Integral". Encyclopedia of Mathematics. Springer. ISBN 978-1-55608-010-4. External links • Surface Integral — from MathWorld • Surface Integral — Theory and exercises
Vector-valued function A vector-valued function, also referred to as a vector function, is a mathematical function of one or more variables whose range is a set of multidimensional vectors or infinite-dimensional vectors. The input of a vector-valued function could be a scalar or a vector (that is, the dimension of the domain could be 1 or greater than 1); the dimension of the function's domain has no relation to the dimension of its range. Example: Helix Further information: Parametric curve A common example of a vector-valued function is one that depends on a single real parameter t, often representing time, producing a vector v(t) as the result. In terms of the standard unit vectors i, j, k of Cartesian 3-space, these specific types of vector-valued functions are given by expressions such as $\mathbf {r} (t)=f(t)\mathbf {i} +g(t)\mathbf {j} +h(t)\mathbf {k} $ where f(t), g(t) and h(t) are the coordinate functions of the parameter t, and the domain of this vector-valued function is the intersection of the domains of the functions f, g, and h. It can also be referred to in a different notation: $\mathbf {r} (t)=\langle f(t),g(t),h(t)\rangle $ The vector r(t) has its tail at the origin and its head at the coordinates evaluated by the function. For example, the function $\langle 2\cos t,\,4\sin t,\,t\rangle $ evaluated near t = 19.5 (between 6π and 6.5π; i.e., somewhat more than 3 rotations) gives one such vector; the corresponding helix is the path traced by the tip of the vector as t increases from zero through 8π. In 2D, we can analogously speak about vector-valued functions as $\mathbf {r} (t)=f(t)\mathbf {i} +g(t)\mathbf {j} $ or $\mathbf {r} (t)=\langle f(t),g(t)\rangle $ Linear case In the linear case the function can be expressed in terms of matrices: $y=Ax,$ where y is an n × 1 output vector, x is a k × 1 vector of inputs, and A is an n × k matrix of parameters. Closely related is the affine case (linear up to a translation) where the function takes the form $y=Ax+b,$ where in addition b is an n × 1 vector of parameters. The linear case arises often, for example in multiple regression, where for instance the n × 1 vector ${\hat {y}}$ of predicted values of a dependent variable is expressed linearly in terms of a k × 1 vector ${\hat {\beta }}$ (k < n) of estimated values of model parameters: ${\hat {y}}=X{\hat {\beta }},$ in which X (playing the role of A in the previous generic form) is an n × k matrix of fixed (empirically based) numbers. Parametric representation of a surface A surface is a 2-dimensional set of points embedded in (most commonly) 3-dimensional space. One way to represent a surface is with parametric equations, in which two parameters s and t determine the three Cartesian coordinates of any point on the surface: $(x,y,z)=(f(s,t),g(s,t),h(s,t))\equiv F(s,t).$ Here F is a vector-valued function. For a surface embedded in n-dimensional space, one similarly has the representation $(x_{1},x_{2},...,x_{n})=(f_{1}(s,t),f_{2}(s,t),...,f_{n}(s,t))\equiv F(s,t).$ Derivative of a three-dimensional vector function See also: Gradient Many vector-valued functions, like scalar-valued functions, can be differentiated by simply differentiating the components in the Cartesian coordinate system.
Thus, if $\mathbf {r} (t)=f(t)\mathbf {i} +g(t)\mathbf {j} +h(t)\mathbf {k} $ is a vector-valued function, then ${\frac {d\mathbf {r} }{dt}}=f'(t)\mathbf {i} +g'(t)\mathbf {j} +h'(t)\mathbf {k} .$ The vector derivative admits the following physical interpretation: if r(t) represents the position of a particle, then the derivative is the velocity of the particle $\mathbf {v} (t)={\frac {d\mathbf {r} }{dt}}.$ Likewise, the derivative of the velocity is the acceleration ${\frac {d\mathbf {v} }{dt}}=\mathbf {a} (t).$ Partial derivative The partial derivative of a vector function a with respect to a scalar variable q is defined as[1] ${\frac {\partial \mathbf {a} }{\partial q}}=\sum _{i=1}^{n}{\frac {\partial a_{i}}{\partial q}}\mathbf {e} _{i}$ where ai is the scalar component of a in the direction of ei. It is also called the direction cosine of a and ei or their dot product. The vectors e1, e2, e3 form an orthonormal basis fixed in the reference frame in which the derivative is being taken. Ordinary derivative If a is regarded as a vector function of a single scalar variable, such as time t, then the equation above reduces to the first ordinary time derivative of a with respect to t,[1] ${\frac {d\mathbf {a} }{dt}}=\sum _{i=1}^{n}{\frac {da_{i}}{dt}}\mathbf {e} _{i}.$ Total derivative If the vector a is a function of a number n of scalar variables qr (r = 1, ..., n), and each qr is only a function of time t, then the ordinary derivative of a with respect to t can be expressed, in a form known as the total derivative, as[1] ${\frac {d\mathbf {a} }{dt}}=\sum _{r=1}^{n}{\frac {\partial \mathbf {a} }{\partial q_{r}}}{\frac {dq_{r}}{dt}}+{\frac {\partial \mathbf {a} }{\partial t}}.$ Some authors prefer to use capital D to indicate the total derivative operator, as in D/Dt. The total derivative differs from the partial time derivative in that the total derivative accounts for changes in a due to the time variance of the variables qr . Reference frames Whereas for scalar-valued functions there is only a single possible reference frame, to take the derivative of a vector-valued function requires the choice of a reference frame (at least when a fixed Cartesian coordinate system is not implied as such). Once a reference frame has been chosen, the derivative of a vector-valued function can be computed using techniques similar to those for computing derivatives of scalar-valued functions. A different choice of reference frame will, in general, produce a different derivative function. The derivative functions in different reference frames have a specific kinematical relationship. Derivative of a vector function with nonfixed bases The above formulas for the derivative of a vector function rely on the assumption that the basis vectors e1, e2, e3 are constant, that is, fixed in the reference frame in which the derivative of a is being taken, and therefore the e1, e2, e3 each has a derivative of identically zero. This often holds true for problems dealing with vector fields in a fixed coordinate system, or for simple problems in physics. However, many complex problems involve the derivative of a vector function in multiple moving reference frames, which means that the basis vectors will not necessarily be constant. 
In such a case where the basis vectors e1, e2, e3 are fixed in reference frame E, but not in reference frame N, the more general formula for the ordinary time derivative of a vector in reference frame N is[1] ${\frac {{}^{\mathrm {N} }d\mathbf {a} }{dt}}=\sum _{i=1}^{3}{\frac {da_{i}}{dt}}\mathbf {e} _{i}+\sum _{i=1}^{3}a_{i}{\frac {{}^{\mathrm {N} }d\mathbf {e} _{i}}{dt}}$ where the superscript N to the left of the derivative operator indicates the reference frame in which the derivative is taken. As shown previously, the first term on the right hand side is equal to the derivative of a in the reference frame where e1, e2, e3 are constant, reference frame E. It also can be shown that the second term on the right hand side is equal to the relative angular velocity of the two reference frames cross multiplied with the vector a itself.[1] Thus, after substitution, the formula relating the derivative of a vector function in two reference frames is[1] ${\frac {{}^{\mathrm {N} }d\mathbf {a} }{dt}}={\frac {{}^{\mathrm {E} }d\mathbf {a} }{dt}}+{}^{\mathrm {N} }\mathbf {\omega } ^{\mathrm {E} }\times \mathbf {a} $ where NωE is the angular velocity of the reference frame E relative to the reference frame N. One common example where this formula is used is to find the velocity of a space-borne object, such as a rocket, in the inertial reference frame using measurements of the rocket's velocity relative to the ground. The velocity NvR in inertial reference frame N of a rocket R located at position rR can be found using the formula ${\frac {{}^{\mathrm {N} }d}{dt}}(\mathbf {r} ^{\mathrm {R} })={\frac {{}^{\mathrm {E} }d}{dt}}(\mathbf {r} ^{\mathrm {R} })+{}^{\mathrm {N} }\mathbf {\omega } ^{\mathrm {E} }\times \mathbf {r} ^{\mathrm {R} }.$ where NωE is the angular velocity of the Earth relative to the inertial frame N. Since velocity is the derivative of position, NvR and EvR are the derivatives of rR in reference frames N and E, respectively. By substitution, ${}^{\mathrm {N} }\mathbf {v} ^{\mathrm {R} }={}^{\mathrm {E} }\mathbf {v} ^{\mathrm {R} }+{}^{\mathrm {N} }\mathbf {\omega } ^{\mathrm {E} }\times \mathbf {r} ^{\mathrm {R} }$ where EvR is the velocity vector of the rocket as measured from a reference frame E that is fixed to the Earth. Derivative and vector multiplication The derivative of a product of vector functions behaves similarly to the derivative of a product of scalar functions.[2] Specifically, in the case of scalar multiplication of a vector, if p is a scalar variable function of q,[1] ${\frac {\partial }{\partial q}}(p\mathbf {a} )={\frac {\partial p}{\partial q}}\mathbf {a} +p{\frac {\partial \mathbf {a} }{\partial q}}.$ In the case of dot multiplication, for two vectors a and b that are both functions of q,[1] ${\frac {\partial }{\partial q}}(\mathbf {a} \cdot \mathbf {b} )={\frac {\partial \mathbf {a} }{\partial q}}\cdot \mathbf {b} +\mathbf {a} \cdot {\frac {\partial \mathbf {b} }{\partial q}}.$ Similarly, the derivative of the cross product of two vector functions is[1] ${\frac {\partial }{\partial q}}(\mathbf {a} \times \mathbf {b} )={\frac {\partial \mathbf {a} }{\partial q}}\times \mathbf {b} +\mathbf {a} \times {\frac {\partial \mathbf {b} }{\partial q}}.$ Derivative of an n-dimensional vector function A function f of a real number t with values in the space $\mathbb {R} ^{n}$ can be written as $f(t)=(f_{1}(t),f_{2}(t),\ldots ,f_{n}(t))$. Its derivative equals $f'(t)=(f_{1}'(t),f_{2}'(t),\ldots ,f_{n}'(t))$. 
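The componentwise differentiation and the product rules above are easy to check numerically. A minimal sketch (illustrative names; central differences serve as the reference), using the helix from the earlier example and the cross-product rule:

```python
import numpy as np

def r(t):  # the helix from the example above: <2 cos t, 4 sin t, t>
    return np.array([2 * np.cos(t), 4 * np.sin(t), t])

def r_prime(t):  # componentwise derivative
    return np.array([-2 * np.sin(t), 4 * np.cos(t), 1.0])

def num_deriv(f, t, h=1e-6):  # central-difference derivative
    return (f(t + h) - f(t - h)) / (2 * h)

t0 = 1.3
print(np.max(np.abs(num_deriv(r, t0) - r_prime(t0))))   # ~1e-10

# Cross-product rule: d/dt (a x b) = a' x b + a x b'
a  = lambda t: np.array([np.cos(t), t, t**2])
ap = lambda t: np.array([-np.sin(t), 1.0, 2 * t])
b  = lambda t: np.array([t, np.sin(t), 1.0])
bp = lambda t: np.array([1.0, np.cos(t), 0.0])
lhs = num_deriv(lambda t: np.cross(a(t), b(t)), t0)
rhs = np.cross(ap(t0), b(t0)) + np.cross(a(t0), bp(t0))
print(np.max(np.abs(lhs - rhs)))                        # ~1e-9
```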
If f is a function of several variables, say of $t\in \mathbb {R} ^{m}$, then the partial derivatives of the components of f form an $n\times m$ matrix called the Jacobian matrix of f. Infinite-dimensional vector functions Main article: Infinite-dimensional-vector function If the values of a function f lie in an infinite-dimensional vector space X, such as a Hilbert space, then f may be called an infinite-dimensional vector function. Functions with values in a Hilbert space If the argument of f is a real number and X is a Hilbert space, then the derivative of f at a point t can be defined as in the finite-dimensional case: $f'(t)=\lim _{h\rightarrow 0}{\frac {f(t+h)-f(t)}{h}}.$ Most results of the finite-dimensional case also hold in the infinite-dimensional case, mutatis mutandis. Differentiation can also be defined for functions of several variables (e.g., $t\in \mathbb {R} ^{n}$ or even $t\in Y$, where Y is an infinite-dimensional vector space). N.B. If X is a Hilbert space, then one can easily show that any derivative (and any other limit) can be computed componentwise: if $f=(f_{1},f_{2},f_{3},\ldots )$ (i.e., $f=f_{1}e_{1}+f_{2}e_{2}+f_{3}e_{3}+\cdots $, where $e_{1},e_{2},e_{3},\ldots $ is an orthonormal basis of the space X ), and $f'(t)$ exists, then $f'(t)=(f_{1}'(t),f_{2}'(t),f_{3}'(t),\ldots )$. However, the existence of a componentwise derivative does not guarantee the existence of a derivative, as componentwise convergence in a Hilbert space does not guarantee convergence with respect to the actual topology of the Hilbert space. Other infinite-dimensional vector spaces Most of the above holds for other topological vector spaces X too. However, not as many classical results hold in the Banach space setting, e.g., an absolutely continuous function with values in a suitable Banach space need not have a derivative anywhere. Moreover, in most Banach-space settings there are no orthonormal bases. Vector field This section is an excerpt from Vector field. In vector calculus and physics, a vector field is an assignment of a vector to each point in a space, most commonly Euclidean space $\mathbb {R} ^{n}$.[3] A vector field on a plane can be visualized as a collection of arrows with given magnitudes and directions, each attached to a point on the plane. Vector fields are often used to model, for example, the speed and direction of a moving fluid throughout three dimensional space, such as the wind, or the strength and direction of some force, such as the magnetic or gravitational force, as it changes from one point to another point. The elements of differential and integral calculus extend naturally to vector fields. When a vector field represents force, the line integral of a vector field represents the work done by a force moving along a path, and under this interpretation conservation of energy is exhibited as a special case of the fundamental theorem of calculus. Vector fields can usefully be thought of as representing the velocity of a moving flow in space, and this physical intuition leads to notions such as the divergence (which represents the rate of change of volume of a flow) and curl (which represents the rotation of a flow). A vector field is a special case of a vector-valued function, whose domain's dimension has no relation to the dimension of its range; for example, the position vector of a space curve is defined only for a smaller subset of the ambient space.
In coordinates, a vector field on a domain in n-dimensional Euclidean space $\mathbb {R} ^{n}$ can be represented as a vector-valued function that associates an n-tuple of real numbers to each point of the domain. This representation of a vector field depends on the coordinate system, and there is a well-defined transformation law (covariance and contravariance of vectors) in passing from one coordinate system to the other. Vector fields are often discussed on open subsets of Euclidean space, but also make sense on other subsets such as surfaces, where they associate an arrow tangent to the surface at each point (a tangent vector). More generally, vector fields are defined on differentiable manifolds, which are spaces that look like Euclidean space on small scales, but may have more complicated structure on larger scales. In this setting, a vector field gives a tangent vector at each point of the manifold (that is, a section of the tangent bundle to the manifold). Vector fields are one kind of tensor field. See also • Coordinate vector • Curve • Multivalued function • Parametric surface • Position vector • Parametrization Notes 1. Kane & Levinson 1996, pp. 29–37 2. In fact, these relations are derived by applying the product rule componentwise. 3. Galbis, Antonio; Maestre, Manuel (2012). Vector Analysis Versus Vector Calculus. Springer. p. 12. ISBN 978-1-4614-2199-3. References • Kane, Thomas R.; Levinson, David A. (1996), "1–9 Differentiation of Vector Functions", Dynamics Online, Sunnyvale, California: OnLine Dynamics, Inc., pp. 29–37 • Hu, Chuang-Gan; Yang, Chung-Chun (2013), Vector-Valued Functions and their Applications, Springer Science & Business Media, ISBN 978-94-015-8030-4 External links • Vector-valued functions and their properties (from Lake Tahoe Community College) • Weisstein, Eric W. "Vector Function". MathWorld. • Everything2 article • 3 Dimensional vector-valued functions (from East Tennessee State University) • "Position Vector Valued Functions" Khan Academy module
Vectorial Mechanics

Vectorial Mechanics (1948) is a book on vector manipulation (i.e., vector methods) by Edward Arthur Milne, a highly decorated (e.g., James Scott Prize Lectureship) British astrophysicist and mathematician. Milne states that the text was due to conversations (circa 1924) with his then-colleague and erstwhile teacher Sydney Chapman, who viewed vectors not merely as a pretty toy but as a powerful weapon of applied mathematics. Milne states that he did not at first believe Chapman, holding on to the idea that "vectors were like a pocket-rule, which needs to be unfolded before it can be applied and used." In time, however, Milne convinced himself that Chapman was right.[1]

Summary

Vectorial Mechanics has 18 chapters grouped into 3 parts. Part I is on vector algebra, including chapters on a definition of a vector, products of vectors, elementary tensor analysis, and integral theorems. Part II is on systems of line vectors, including chapters on line co-ordinates, systems of line vectors, statics of rigid bodies, the displacement of a rigid body, and the work of a system of line vectors. Part III is on dynamics, including kinematics, particle dynamics, types of particle motion, dynamics of systems of particles, rigid bodies in motion, dynamics of rigid bodies, motion of a rigid body about its center of mass, gyrostatic problems, and impulsive motion.

Summary of reviews

There were significant reviews given near the time of original publication. G. J. Whitrow: Although many books have been published in recent years in which vector and tensor methods are used for solving problems in geometry and mathematical physics, there has been a lack of first-class treatises which explain the methods in full detail and are nevertheless suitable for the undergraduate student. In applied mathematics no book has appeared till now which is comparable with Hardy's Pure Mathematics. ... Just as in Hardy's classic, a new note is struck at the very start: a precise definition is given of the concept "free vector", analogous to the Frege-Russell definition of "cardinal number." According to Milne, a free vector is the class of all its representations, a typical representation being defined in the customary manner. From a pedagogic point of view, however, the reviewer wonders whether it might have been better to draw attention at this early stage to a concrete instance of a free vector. The student familiar with physical concepts which have magnitude and position, but not direction, should be made to realise from the very beginning that the free vector is not merely "fundamental in discussing systems of position vectors and systems of line-vectors", but occurs naturally in its own right, as there are physical concepts which have magnitude and direction but not position, e.g. the couple in statics, and the angular velocity of a rigid body. Although the necessary existence theorems must be established at a later stage, and Milne's rigorous proofs are particularly welcome, there is no reason why some instances of free vectors should not be mentioned at this point. Daniel C. Lewis: The reviewer has long felt that the role of vector analysis in mechanics has been much overemphasized.
It is true that the fundamental equations of motion in their various forms, especially in the case of rigid bodies, can be derived with greatest economy of thought by use of vectors (assuming that the requisite technique has already been developed); but once the equations have been set up, the usual procedure is to drop vector methods in their solution. If this position can be successfully refuted, this has been done in the present work, the most novel feature of which is to solve the vector differential equations by vector methods without ever writing down the corresponding scalar differential equations obtained by taking components. The author has certainly been successful in showing that this can be done in fairly simple, though nontrivial, cases. To give an example of a definitely nontrivial problem solved in this way, one might mention the nonholonomic problem afforded by the motion of a sphere rolling on a rough inclined plane or on a rough spherical surface. The author's methods are interesting and aesthetically satisfying and therefore deserve the widest publication even if they partake of the nature of a tour de force.

References

• E. A. Milne, Vectorial Mechanics (New York: Interscience Publishers Inc., 1948). pp. xiii, 382. ASIN B0000EGLGX • G. J. Whitrow, Review of Vectorial Mechanics, The Mathematical Gazette, Vol. 33, No. 304 (May 1949), pp. 136–139. • D. C. Lewis, Review of Vectorial Mechanics, Mathematical Reviews, Volume 10, abstract index 420w, p. 488, 1949.

Notes

1. Vectorial Mechanics, Preface, page vii
Vectorial addition chain

In mathematics, for positive integers k and s, a vectorial addition chain is a sequence V of k-dimensional vectors of nonnegative integers vi for −k + 1 ≤ i ≤ s, together with a sequence w, such that

v−k+1 = [1,0,0,...,0,0]
v−k+2 = [0,1,0,...,0,0]
⋮
v0 = [0,0,0,...,0,1]
vi = vj + vr for all 1 ≤ i ≤ s, with −k+1 ≤ j, r ≤ i−1
vs = [n0,...,nk−1]
w = (w1,...,ws), where wi = (j,r).

For example, a vectorial addition chain for [22,18,3] is

V = ([1,0,0],[0,1,0],[0,0,1],[1,1,0],[2,2,0],[4,4,0],[5,4,0],[10,8,0],[11,9,0],[11,9,1],[22,18,2],[22,18,3])
w = ((−2,−1),(1,1),(2,2),(−2,3),(4,4),(1,5),(0,6),(7,7),(0,8))

Vectorial addition chains are well suited to performing multi-exponentiation:

Input: elements x0,...,xk−1 of an abelian group G and a vectorial addition chain of dimension k computing [n0,...,nk−1]
Output: the element x0^n0 · ... · xk−1^nk−1
1. for i = −k+1 to 0 do: yi ← xi+k−1
2. for i = 1 to s do: yi ← yj × yr, where wi = (j,r)
3. return ys

Addition sequence

An addition sequence for the set of integers S = {n0, ..., nr−1} is an addition chain v that contains every element of S. For example, an addition sequence computing {47,117,343,499} is (1,2,4,8,10,11,18,36,47,55,91,109,117,226,343,434,489,499). It is possible to obtain an addition sequence from a vectorial addition chain and vice versa, so they are in a sense dual.[1]

See also

• Addition chain • Addition-chain exponentiation • Exponentiation by squaring • Non-adjacent form

References

1. Cohen, H.; Frey, G. (eds.): Handbook of Elliptic and Hyperelliptic Curve Cryptography. Discrete Math. Appl., Chapman & Hall/CRC (2006)
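To make the multi-exponentiation procedure above concrete, here is a minimal Python sketch; the function name multi_exp, the modulus, and the test values are illustrative assumptions, not taken from the article or its references:

import math

def multi_exp(xs, w, modulus):
    # Compute x0^n0 * ... * x_{k-1}^n_{k-1} (mod modulus), where the exponent
    # vector [n0, ..., n_{k-1}] is the one computed by the chain w.
    # w is the list of index pairs (j, r); as in the article, the initial
    # vectors occupy indices -k+1, ..., 0.
    k = len(xs)
    y = {}
    # Step 1: y_i <- x_{i+k-1} for i = -k+1, ..., 0
    for i in range(-k + 1, 1):
        y[i] = xs[i + k - 1]
    # Step 2: y_i <- y_j * y_r, following the pairs w_i = (j, r)
    for i, (j, r) in enumerate(w, start=1):
        y[i] = (y[j] * y[r]) % modulus
    # Step 3: return y_s
    return y[len(w)]

# The chain for [22, 18, 3] given above:
w = [(-2, -1), (1, 1), (2, 2), (-2, 3), (4, 4), (1, 5), (0, 6), (7, 7), (0, 8)]
x = [2, 3, 5]
assert multi_exp(x, w, 10**9 + 7) == (2**22 * 3**18 * 5**3) % (10**9 + 7)

Only nine group multiplications are performed, versus the dozens a naive square-and-multiply of each factor would need.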
Vectorization (mathematics)

In mathematics, especially in linear algebra and matrix theory, the vectorization of a matrix is a linear transformation which converts the matrix into a vector. Specifically, the vectorization of an m × n matrix A, denoted vec(A), is the mn × 1 column vector obtained by stacking the columns of the matrix A on top of one another: $\operatorname {vec} (A)=[a_{1,1},\ldots ,a_{m,1},a_{1,2},\ldots ,a_{m,2},\ldots ,a_{1,n},\ldots ,a_{m,n}]^{\mathrm {T} }$ Here, $a_{i,j}$ represents the element in the i-th row and j-th column of A, and the superscript ${}^{\mathrm {T} }$ denotes the transpose. Vectorization expresses, through coordinates, the isomorphism $\mathbf {R} ^{m\times n}:=\mathbf {R} ^{m}\otimes \mathbf {R} ^{n}\cong \mathbf {R} ^{mn}$ between these spaces (of matrices and of vectors). For example, for the 2×2 matrix $A={\begin{bmatrix}a&b\\c&d\end{bmatrix}}$, the vectorization is $\operatorname {vec} (A)={\begin{bmatrix}a\\c\\b\\d\end{bmatrix}}$. The connection between the vectorization of A and the vectorization of its transpose is given by the commutation matrix.

Compatibility with Kronecker products

The vectorization is frequently used together with the Kronecker product to express matrix multiplication as a linear transformation on matrices. In particular, $\operatorname {vec} (ABC)=(C^{\mathrm {T} }\otimes A)\operatorname {vec} (B)$ for matrices A, B, and C of dimensions k×l, l×m, and m×n.[note 1] For example, if $\operatorname {ad} _{A}(X)=AX-XA$ (the adjoint endomorphism of the Lie algebra gl(n, C) of all n×n matrices with complex entries), then $\operatorname {vec} (\operatorname {ad} _{A}(X))=(I_{n}\otimes A-A^{\mathrm {T} }\otimes I_{n})\operatorname {vec} (X)$, where $I_{n}$ is the n×n identity matrix. There are two other useful formulations: ${\begin{aligned}\operatorname {vec} (ABC)&=(I_{n}\otimes AB)\operatorname {vec} (C)=(C^{\mathrm {T} }B^{\mathrm {T} }\otimes I_{k})\operatorname {vec} (A)\\\operatorname {vec} (AB)&=(I_{m}\otimes A)\operatorname {vec} (B)=(B^{\mathrm {T} }\otimes I_{k})\operatorname {vec} (A)\end{aligned}}$ More generally, it has been shown that vectorization is a self-adjunction in the monoidal closed structure of any category of matrices.[1]

Compatibility with Hadamard products

Vectorization is an algebra homomorphism from the space of n × n matrices with the Hadamard (entrywise) product to $\mathbb {C} ^{n^{2}}$ with its Hadamard product: $\operatorname {vec} (A\circ B)=\operatorname {vec} (A)\circ \operatorname {vec} (B).$

Compatibility with inner products

Vectorization is a unitary transformation from the space of n×n matrices with the Frobenius (or Hilbert–Schmidt) inner product to $\mathbb {C} ^{n^{2}}$: $\operatorname {tr} (A^{\dagger }B)=\operatorname {vec} (A)^{\dagger }\operatorname {vec} (B),$ where the superscript † denotes the conjugate transpose.

Vectorization as a linear sum

The matrix vectorization operation can be written in terms of a linear sum. Let X be an m × n matrix that we want to vectorize, and let ei be the i-th canonical basis vector for the n-dimensional space, that is $\mathbf {e} _{i}=\left[0,\dots ,0,1,0,\dots ,0\right]^{\mathrm {T} }$.
Let Bi be an (mn) × m block matrix defined as follows: $\mathbf {B} _{i}={\begin{bmatrix}\mathbf {0} \\\vdots \\\mathbf {0} \\\mathbf {I} _{m}\\\mathbf {0} \\\vdots \\\mathbf {0} \end{bmatrix}}=\mathbf {e} _{i}\otimes \mathbf {I} _{m}$ Bi consists of n block matrices of size m × m, stacked column-wise, all of which are all-zero except for the i-th one, which is an m × m identity matrix Im. Then the vectorized version of X can be expressed as follows: $\operatorname {vec} (\mathbf {X} )=\sum _{i=1}^{n}\mathbf {B} _{i}\mathbf {X} \mathbf {e} _{i}$ Multiplication of X by ei extracts the i-th column, while multiplication by Bi puts it into the desired position in the final vector. Alternatively, the linear sum can be expressed using the Kronecker product: $\operatorname {vec} (\mathbf {X} )=\sum _{i=1}^{n}\mathbf {e} _{i}\otimes \mathbf {X} \mathbf {e} _{i}$

Half-vectorization

For a symmetric matrix A, the vector vec(A) contains more information than is strictly necessary, since the matrix is completely determined by the symmetry together with the lower triangular portion, that is, the n(n + 1)/2 entries on and below the main diagonal. For such matrices, the half-vectorization is sometimes more useful than the vectorization. The half-vectorization, vech(A), of a symmetric n × n matrix A is the n(n + 1)/2 × 1 column vector obtained by vectorizing only the lower triangular part of A: $\operatorname {vech} (A)=[A_{1,1},\ldots ,A_{n,1},A_{2,2},\ldots ,A_{n,2},\ldots ,A_{n-1,n-1},A_{n,n-1},A_{n,n}]^{\mathrm {T} }.$ For example, for the 2×2 matrix $A={\begin{bmatrix}a&b\\b&d\end{bmatrix}}$, the half-vectorization is $\operatorname {vech} (A)={\begin{bmatrix}a\\b\\d\end{bmatrix}}$. There exist unique matrices transforming the half-vectorization of a matrix to its vectorization and vice versa, called, respectively, the duplication matrix and the elimination matrix.

Programming languages

Programming languages that implement matrices may have easy means for vectorization. In Matlab/GNU Octave a matrix A can be vectorized by A(:). GNU Octave also allows vectorization and half-vectorization with vec(A) and vech(A) respectively. Julia has the vec(A) function as well. In Python, NumPy arrays implement the flatten method,[note 1] while in R the desired effect can be achieved via the c() or as.vector() functions. In R, function vec() of package 'ks' allows vectorization and function vech() implemented in both packages 'ks' and 'sn' allows half-vectorization.[2][3][4]

Notes

1. The identity for row-major vectorization is $\operatorname {vec} (ABC)=(A\otimes C^{\mathrm {T} })\operatorname {vec} (B)$.

See also

• Duplication and elimination matrices • Voigt notation • Packed storage matrix • Column-major order • Matricization

References

1. Macedo, H. D.; Oliveira, J. N. (2013). "Typing Linear Algebra: A Biproduct-oriented Approach". Science of Computer Programming. 78 (11): 2160–2191. arXiv:1312.4818. doi:10.1016/j.scico.2012.07.012. S2CID 9846072. 2. Duong, Tarn (2018). "ks: Kernel Smoothing". R package version 1.11.0. 3. Azzalini, Adelchi (2017). "The R package 'sn': The Skew-Normal and Related Distributions such as the Skew-t". R package version 1.5.1. 4. Vinod, Hrishikesh D. (2011). "Simultaneous Reduction and Vec Stacking". Hands-on Matrix Algebra Using R: Active and Motivated Learning with Applications. Singapore: World Scientific. pp. 233–248. ISBN 978-981-4313-69-8 – via Google Books. • Jan R. Magnus and Heinz Neudecker (1999), Matrix Differential Calculus with Applications in Statistics and Econometrics, 2nd Ed., Wiley. ISBN 0-471-98633-X.
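As a quick numerical check of the definitions and identities above, here is a minimal NumPy sketch (an illustration, not part of the article; order='F' requests the column-major stacking used in the definition of vec, and the vech helper is one possible way to get the column-stacked lower triangle):

import numpy as np

def vec(A):
    # Stack the columns of A on top of one another (column-major order).
    return A.reshape(-1, order='F')

def vech(S):
    # Column-stacked lower triangle: A11..An1, A22..An2, ..., Ann.
    return S.T[np.triu_indices(S.shape[0])]

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
C = np.array([[9., 1.], [2., 3.]])

# vec of [[a, b], [c, d]] is [a, c, b, d], as in the 2x2 example above.
assert np.allclose(vec(A), [1., 3., 2., 4.])

# The Kronecker-product identity: vec(ABC) == (C^T kron A) vec(B).
assert np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B))

# Half-vectorization of the symmetric 2x2 example [[a, b], [b, d]] is [a, b, d].
S = np.array([[1., 2.], [2., 4.]])
assert np.allclose(vech(S), [1., 2., 4.])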
Vector (mathematics and physics)

In mathematics and physics, vector is a term that refers colloquially to some quantities that cannot be expressed by a single number (a scalar), or to elements of some vector spaces. Historically, vectors were introduced in geometry and physics (typically in mechanics) for quantities that have both a magnitude and a direction, such as displacements, forces and velocity. Such quantities are represented by geometric vectors in the same way as distances, masses and time are represented by real numbers. The term vector is also used, in some contexts, for tuples, which are finite sequences of numbers of a fixed length. Both geometric vectors and tuples can be added and scaled, and these vector operations led to the concept of a vector space, which is a set equipped with a vector addition and a scalar multiplication that satisfy some axioms generalizing the main properties of operations on the above sorts of vectors. A vector space formed by geometric vectors is called a Euclidean vector space, and a vector space formed by tuples is called a coordinate vector space. Many vector spaces are considered in mathematics, such as field extensions, polynomial rings, algebras and function spaces. The term vector is generally not used for elements of these vector spaces, and is generally reserved for geometric vectors, tuples, and elements of unspecified vector spaces (for example, when discussing general properties of vector spaces).

Vectors in Euclidean geometry

Main article: Euclidean vector

This section is an excerpt from Euclidean vector. In mathematics, physics, and engineering, a Euclidean vector or simply a vector (sometimes called a geometric vector[1] or spatial vector[2]) is a geometric object that has magnitude (or length) and direction. Vectors can be added to other vectors according to vector algebra. A Euclidean vector is frequently represented by a directed line segment, or graphically as an arrow connecting an initial point A with a terminal point B,[3] and denoted by ${\overrightarrow {AB}}$. A vector is what is needed to "carry" the point A to the point B; the Latin word vector means "carrier".[4] It was first used by 18th-century astronomers investigating planetary revolution around the Sun.[5] The magnitude of the vector is the distance between the two points, and the direction refers to the direction of displacement from A to B. Many algebraic operations on real numbers such as addition, subtraction, multiplication, and negation have close analogues for vectors,[6] operations which obey the familiar algebraic laws of commutativity, associativity, and distributivity. These operations and associated laws qualify Euclidean vectors as an example of the more generalized concept of vectors defined simply as elements of a vector space. Vectors play an important role in physics: the velocity and acceleration of a moving object and the forces acting on it can all be described with vectors.[7] Many other physical quantities can be usefully thought of as vectors. Although most of them do not represent distances (except, for example, position or displacement), their magnitude and direction can still be represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it.
Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors.[8]

Vector spaces

Main article: Vector space

This section is an excerpt from Vector space. In mathematics and physics, a vector space (also called a linear space) is a set whose elements, often called vectors, may be added together and multiplied ("scaled") by numbers called scalars. Scalars are often real numbers, but can be complex numbers or, more generally, elements of any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called vector axioms. The terms real vector space and complex vector space are often used to specify the nature of the scalars: real coordinate space or complex coordinate space. Vector spaces generalize Euclidean vectors, which allow modeling of physical quantities, such as forces and velocity, that have not only a magnitude, but also a direction. The concept of vector spaces is fundamental for linear algebra, together with the concept of matrices, which allows computing in vector spaces. This provides a concise and synthetic way for manipulating and studying systems of linear equations. Vector spaces are characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. This means that, for two vector spaces over a given field and with the same dimension, the properties that depend only on the vector-space structure are exactly the same (technically the vector spaces are isomorphic). A vector space is finite-dimensional if its dimension is a natural number. Otherwise, it is infinite-dimensional, and its dimension is an infinite cardinal. Finite-dimensional vector spaces occur naturally in geometry and related areas. Infinite-dimensional vector spaces occur in many areas of mathematics. For example, polynomial rings are countably infinite-dimensional vector spaces, and many function spaces have the cardinality of the continuum as a dimension. Many vector spaces that are considered in mathematics are also endowed with other structures. This is the case of algebras, which include field extensions, polynomial rings, associative algebras and Lie algebras. This is also the case of topological vector spaces, which include function spaces, inner product spaces, normed spaces, Hilbert spaces and Banach spaces.

Vectors in algebra

Every algebra over a field is a vector space, but elements of an algebra are generally not called vectors. However, in some cases, they are called vectors, mainly due to historical reasons. • Vector quaternion, a quaternion with a zero real part • Multivector or p-vector, an element of the exterior algebra of a vector space. • Spinors, also called spin vectors, have been introduced for extending the notion of rotation vector. In fact, rotation vectors represent rotations well locally, but not globally, because a closed loop in the space of rotation vectors may induce a curve in the space of rotations that is not a loop. Also, the manifold of rotation vectors is simply connected, while the manifold of rotations is not. Spinors are elements of a vector subspace of some Clifford algebra. • Witt vector, an infinite sequence of elements of a commutative ring, which belongs to an algebra over this ring, and has been introduced for handling carry propagation in the operations on p-adic numbers.
Data represented by vectors

The set $\mathbb {R} ^{n}$ of tuples of n real numbers has a natural structure of vector space defined by component-wise addition and scalar multiplication. It is common to call these tuples vectors, even in contexts where vector-space operations do not apply. More generally, when some data can be represented naturally by vectors, they are often called vectors even when addition and scalar multiplication of vectors are not valid operations on these data. Here are some examples. • Rotation vector, a Euclidean vector whose direction is that of the axis of a rotation and magnitude is the angle of the rotation. • Burgers vector, a vector that represents the magnitude and direction of the lattice distortion of dislocation in a crystal lattice • Interval vector, in musical set theory, an array that expresses the intervallic content of a pitch-class set • Probability vector, in statistics, a vector with non-negative entries that sum to one. • Random vector or multivariate random variable, in statistics, a set of real-valued random variables that may be correlated. However, a random vector may also refer to a random variable that takes its values in a vector space. • Logical vector, a vector of 0s and 1s (Booleans).

See also

• Vector (disambiguation)

Vector spaces with more structure

• Graded vector space, a type of vector space that includes the extra structure of gradation • Normed vector space, a vector space on which a norm is defined • Hilbert space • Ordered vector space, a vector space equipped with a partial order • Super vector space, name for a Z2-graded vector space • Symplectic vector space, a vector space V equipped with a non-degenerate, skew-symmetric, bilinear form • Topological vector space, a blend of topological structure with the algebraic concept of a vector space

Vector fields

A vector field is a vector-valued function that, generally, has a domain of the same dimension (as a manifold) as its codomain. • Conservative vector field, a vector field that is the gradient of a scalar potential field • Hamiltonian vector field, a vector field defined for any energy function or Hamiltonian • Killing vector field, a vector field on a Riemannian manifold • Solenoidal vector field, a vector field with zero divergence • Vector potential, a vector field whose curl is a given vector field • Vector flow, a set of closely related concepts of the flow determined by a vector field

Miscellaneous

• Ricci calculus • Vector Analysis, a textbook on vector calculus by Wilson, first published in 1901, which did much to standardize the notation and vocabulary of three-dimensional linear algebra and vector calculus • Vector bundle, a topological construction that makes precise the idea of a family of vector spaces parameterized by another space • Vector calculus, a branch of mathematics concerned with differentiation and integration of vector fields • Vector differential, or del, a vector differential operator represented by the nabla symbol $\nabla $ • Vector Laplacian, the vector Laplace operator, denoted by $\nabla ^{2}$, is a differential operator defined over a vector field • Vector notation, common notation used when working with vectors • Vector operator, a type of differential operator used in vector calculus • Vector product, or cross product, an operation on two vectors in a three-dimensional Euclidean space, producing a third three-dimensional Euclidean vector • Vector projection, also known as vector resolute or vector component, a linear mapping producing a vector parallel to a second vector • Vector-valued function, a function that has a vector space as a codomain • Vectorization (mathematics), a linear transformation that converts a matrix into a column vector • Vector autoregression, an econometric model used to capture the evolution and the interdependencies between multiple time series • Vector boson, a boson with the spin quantum number equal to 1 • Vector measure, a function defined on a family of sets and taking vector values satisfying certain properties • Vector meson, a meson with total spin 1 and odd parity • Vector quantization, a quantization technique used in signal processing • Vector soliton, a solitary wave with multiple components coupled together that maintains its shape during propagation • Vector synthesis, a type of audio synthesis • Phase vector

Notes

1. Ivanov 2001 2. Heinbockel 2001 3. Itô 1993, p. 1678; Pedoe 1988 4. Latin: vectus, perfect participle of vehere, "to carry"; veho = "I carry". For historical development of the word vector, see "vector n.". Oxford English Dictionary (Online ed.). Oxford University Press. And Jeff Miller, "Earliest Known Uses of Some of the Words of Mathematics". Retrieved 2007-05-25. 5. The Oxford English Dictionary (2nd ed.). London: Clarendon Press. 2001. ISBN 9780195219425. 6. "vector | Definition & Facts". Encyclopedia Britannica. Retrieved 2020-08-19. 7. "Vectors". www.mathsisfun.com. Retrieved 2020-08-19. 8. Weisstein, Eric W. "Vector". mathworld.wolfram.com. Retrieved 2020-08-19.

References

• Heinbockel, J. H. (2001). Introduction to Tensor Calculus and Continuum Mechanics. Trafford Publishing. ISBN 1-55369-133-4. • Itô, Kiyosi (1993). Encyclopedic Dictionary of Mathematics (2nd ed.). MIT Press. ISBN 978-0-262-59020-4. • Ivanov, A.B. (2001) [1994], "Vector", Encyclopedia of Mathematics, EMS Press • Pedoe, Daniel (1988). Geometry: A comprehensive course. Dover. ISBN 0-486-65812-0.
Vectors in Three-dimensional Space

Vectors in Three-dimensional Space (1978) is a book concerned with physical quantities defined in "ordinary" 3-space. It was written by J. S. R. Chisholm, an English mathematical physicist, and published by Cambridge University Press. According to the author, such physical quantities are studied in Newtonian mechanics, fluid mechanics, theories of elasticity and plasticity, non-relativistic quantum mechanics, and many parts of solid state physics. The author further states that "the vector concept developed in two different ways: in a wide variety of physical applications, vector notation and techniques became, by the middle of this century, almost universal; on the other hand, pure mathematicians reduced vector algebra to an axiomatic system, and introduced wide generalisations of the concept of a three-dimensional 'vector space'." Chisholm explains that since these two developments proceeded largely independently, there is a need to show how one can be applied to the other.[1]

Summary

Vectors in Three-Dimensional Space has six chapters, each divided into five or more subsections. The first, on linear spaces and displacements, includes these sections: Introduction; Scalar multiplication of vectors; Addition and subtraction of vectors; Displacements in Euclidean space; Geometrical applications. The second, on scalar products and components, includes these sections: Scalar products; Linear dependence and dimension; Components of a vector; Geometrical applications; Coordinate systems. The third is on other products of vectors. The last three chapters round out Chisholm's integration of these two largely independent developments.

References

Footnotes

1. Chisholm, J. S. R. (1978), pp. vii–viii

Bibliography

• Vectors in Three-dimensional Space has been cited by the 2002 Encyclopedia Americana article on Vector Analysis • Chisholm, J. S. R. Vectors in Three-dimensional Space, Cambridge University Press, 1978, ISBN 0-521-29289-1
Vedic square

In Indian mathematics, a Vedic square is a variation on a typical 9 × 9 multiplication table where the entry in each cell is the digital root of the product of the column and row headings, i.e. the remainder when the product of the row and column headings is divided by 9 (with remainder 0 represented by 9). Numerous geometric patterns and symmetries can be observed in a Vedic square, some of which can be found in traditional Islamic art.

∘ | 1 2 3 4 5 6 7 8 9
--+------------------
1 | 1 2 3 4 5 6 7 8 9
2 | 2 4 6 8 1 3 5 7 9
3 | 3 6 9 3 6 9 3 6 9
4 | 4 8 3 7 2 6 1 5 9
5 | 5 1 6 2 7 3 8 4 9
6 | 6 3 9 6 3 9 6 3 9
7 | 7 5 3 1 8 6 4 2 9
8 | 8 7 6 5 4 3 2 1 9
9 | 9 9 9 9 9 9 9 9 9

Algebraic properties

The Vedic square can be viewed as the multiplication table of the monoid $((\mathbb {Z} /9\mathbb {Z} )^{\times },\{1,\circ \})$ where $\mathbb {Z} /9\mathbb {Z} $ is the set of positive integers partitioned by the residue classes modulo nine. (The operator $\circ $ refers to the abstract "multiplication" between the elements of this monoid.) If $a,b$ are elements of $((\mathbb {Z} /9\mathbb {Z} )^{\times },\{1,\circ \})$ then $a\circ b$ can be defined as $(a\times b)\mod {9}$, where the element 9 is representative of the residue class of 0 rather than the traditional choice of 0. This does not form a group because not every non-zero element has a corresponding inverse element; for example $6\circ 3=9$ but there is no $a\in \{1,\cdots ,9\}$ such that $9\circ a=6$.

Properties of subsets

The subset $\{1,2,4,5,7,8\}$ forms a cyclic group with 2 as one choice of generator; this is the group of multiplicative units in the ring $\mathbb {Z} /9\mathbb {Z} $. Every column and row includes all six numbers, so this subset forms a Latin square.

∘ | 1 2 4 5 7 8
--+------------
1 | 1 2 4 5 7 8
2 | 2 4 8 1 5 7
4 | 4 8 7 2 1 5
5 | 5 1 2 7 8 4
7 | 7 5 1 8 4 2
8 | 8 7 5 4 2 1

From two dimensions to three dimensions

A Vedic cube is defined as the layout of each digital root in a three-dimensional multiplication table.[2]

Vedic squares in a higher radix

Vedic squares with a higher radix (or number base) can be calculated to analyse the symmetric patterns that arise. Using the calculation above, the entry in row a and column b is $(a\times b)\mod {({\textrm {base}}-1)}$, with remainder 0 represented by base − 1. The images in this section are color-coded so that the digital root of 1 is dark and the digital root of (base − 1) is light.

See also

• Latin square • Modular arithmetic • Monoid

References

1. http://sciendo.com/article/10.1515/rmm-2016-0002 2. Lin, Chia-Yu. "Digital root patterns of three-dimensional space". rmm.ludus-opuscula.org. Retrieved 2016-05-25. • Deskins, W.E. (1996), Abstract Algebra, New York: Dover, pp. 162–167, ISBN 0-486-68888-7 • Pritchard, Chris (2003), The Changing Shape of Geometry: Celebrating a Century of Geometry and Geometry Teaching, Great Britain: Cambridge University Press, pp. 119–122, ISBN 0-521-53162-4 • Ghannam, Talal (2012), The Mystery of Numbers: Revealed Through Their Digital Root, CreateSpace Publications, pp. 68–73, ISBN 978-1-4776-7841-1 • Teknomo, Kadi (2005), Digital Root: Vedic Square • Chia-Yu, Lin (2016), Digital Root Patterns of Three-Dimensional Space, Recreational Mathematics Magazine, pp.
9–31, ISSN 2182-1976
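The construction described above is easy to reproduce programmatically; the following minimal Python sketch (an illustration, not from the article) builds the table using the convention that a remainder of 0 is written as 9 (or, in general, as base − 1):

def digital_root(n, base=10):
    # For n > 0 the repeated digit sum equals n mod (base-1),
    # with 0 represented as base-1.
    return (n - 1) % (base - 1) + 1 if n > 0 else 0

def vedic_square(base=10):
    size = base - 1
    return [[digital_root(r * c, base) for c in range(1, size + 1)]
            for r in range(1, size + 1)]

for row in vedic_square():
    print(*row)
# The first printed row is 1 2 3 4 5 6 7 8 9 and the second is
# 2 4 6 8 1 3 5 7 9, matching the table above.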
Translation surface

In mathematics a translation surface is a surface obtained from identifying the sides of a polygon in the Euclidean plane by translations. An equivalent definition is a Riemann surface together with a holomorphic 1-form. These surfaces arise in dynamical systems where they can be used to model billiards, and in Teichmüller theory. A particularly interesting subclass is that of Veech surfaces (named after William A. Veech) which are the most symmetric ones.

Definitions

Geometric definition

A translation surface is the space obtained by identifying pairwise by translations the sides of a collection of plane polygons. Here is a more formal definition. Let $P_{1},\ldots ,P_{m}$ be a collection of (not necessarily convex) polygons in the Euclidean plane and suppose that for every side $s_{i}$ of any $P_{k}$ there is a side $s_{j}$ of some $P_{l}$ with $j\not =i$ and $s_{j}=s_{i}+{\vec {v}}_{i}$ for some nonzero vector ${\vec {v}}_{i}$ (and so that ${\vec {v}}_{j}=-{\vec {v}}_{i}$). Consider the space obtained by identifying all $s_{i}$ with their corresponding $s_{j}$ through the map $x\mapsto x+{\vec {v}}_{i}$. The canonical way to construct such a surface is as follows: start with vectors ${\vec {w}}_{1},\ldots ,{\vec {w}}_{n}$ and a permutation $\sigma $ on $\{1,\ldots ,n\}$, and form the broken lines $L=x,x+{\vec {w}}_{1},\ldots ,x+{\vec {w}}_{1}+\cdots +{\vec {w}}_{n}$ and $L'=x,x+{\vec {w}}_{\sigma (1)},\ldots ,x+{\vec {w}}_{\sigma (1)}+\cdots +{\vec {w}}_{\sigma (n)}$ starting at an arbitrarily chosen point. In the case where these two lines form a polygon (i.e. they do not intersect outside of their endpoints) there is a natural side-pairing. The quotient space is a closed surface. It has a flat metric outside the set $\Sigma $ of images of the vertices. At a point in $\Sigma $ the sum of the angles of the polygons around the vertices which map to it is a positive multiple of $2\pi $, and the metric is singular unless the angle is exactly $2\pi $.

Analytic definition

Let $S$ be a translation surface as defined above and $\Sigma $ the set of singular points. Identifying the Euclidean plane with the complex plane one gets coordinate charts on $S\setminus \Sigma $ with values in $\mathbb {C} $. Moreover, the changes of charts are holomorphic maps, more precisely maps of the form $z\mapsto z+w$ for some $w\in \mathbb {C} $. This gives $S\setminus \Sigma $ the structure of a Riemann surface, which extends to the entire surface $S$ by Riemann's theorem on removable singularities. In addition, the differential $dz$, where $z:U\to \mathbb {C} $ is any chart defined above, does not depend on the chart. Thus these differentials defined on chart domains glue together to give a well-defined holomorphic 1-form $\omega $ on $S$. The vertices of the polygon where the cone angles are not equal to $2\pi $ are zeroes of $\omega $ (a cone angle of $2k\pi $ corresponds to a zero of order $(k-1)$). In the other direction, given a pair $(X,\omega )$ where $X$ is a compact Riemann surface and $\omega $ a holomorphic 1-form one can construct a polygon by using the complex numbers $\int _{\gamma _{j}}\omega $ where $\gamma _{j}$ are disjoint paths between the zeroes of $\omega $ which form an integral basis for the relative cohomology.

Examples

The simplest example of a translation surface is obtained by gluing the opposite sides of a parallelogram. It is a flat torus with no singularities.
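As a quick check against the angle criterion from the geometric definition (a worked computation, not additional material from the article): for the unit square with opposite sides glued, all four corners are identified to a single point, and the total cone angle there is $4\cdot {\tfrac {\pi }{2}}=2\pi $, so the flat metric extends across that point without a singularity.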
If $P$ is a regular $4g$-gon then the translation surface obtained by gluing opposite sides is of genus $g$ with a single singular point, with angle $(2g-1)2\pi $. If $P$ is obtained by putting side to side a collection of copies of the unit square then any translation surface obtained from $P$ is called a square-tiled surface. The map from the surface to the flat torus obtained by identifying all squares is a branched covering with branch points the singularities (the cone angle at a singularity is proportional to the degree of branching).

Riemann–Roch and Gauss–Bonnet

Suppose that the surface $X$ is a closed Riemann surface of genus $g$ and that $\omega $ is a nonzero holomorphic 1-form on $X$, with zeroes of order $d_{1},\ldots ,d_{m}$. Then the Riemann–Roch theorem implies that $\sum _{j=1}^{m}d_{j}=2g-2.$ If the translation surface $(X,\omega )$ is represented by a polygon $P$ then triangulating it and summing angles over all vertices allows one to recover the formula above (using the relation between cone angles and order of zeroes), in the same manner as in the proof of the Gauss–Bonnet formula for hyperbolic surfaces or the proof of Euler's formula from Girard's theorem.

Translation surfaces as foliated surfaces

If $(X,\omega )$ is a translation surface there is a natural measured foliation on $X$. If it is obtained from a polygon it is just the image of vertical lines, and the measure of an arc is just the Euclidean length of the horizontal segment homotopic to the arc. The foliation is also obtained by the level lines of the imaginary part of a (local) primitive for $\omega $ and the measure is obtained by integrating the real part.

Moduli spaces

Strata

Let ${\mathcal {H}}$ be the set of translation surfaces of genus $g$ (where two such $(X,\omega ),(X',\omega ')$ are considered the same if there exists a holomorphic diffeomorphism $\phi :X\to X'$ such that $\phi ^{*}\omega '=\omega $). Let ${\mathcal {M}}_{g}$ be the moduli space of Riemann surfaces of genus $g$; there is a natural map ${\mathcal {H}}\to {\mathcal {M}}_{g}$ mapping a translation surface to the underlying Riemann surface. This turns ${\mathcal {H}}$ into a locally trivial fiber bundle over the moduli space. To a compact translation surface $(X,\omega )$ there is associated the data $(k_{1},\ldots ,k_{m})$ where $k_{1}\leq k_{2}\leq \cdots $ are the orders of the zeroes of $\omega $. If $\alpha =(k_{1},\ldots ,k_{m})$ is any partition of $2g-2$ then the stratum ${\mathcal {H}}(\alpha )$ is the subset of ${\mathcal {H}}$ of translation surfaces which have a holomorphic form whose zeroes match the partition. The stratum ${\mathcal {H}}(\alpha )$ is naturally a complex orbifold of complex dimension $2g+m-1$ (note that ${\mathcal {H}}(0)$ is the moduli space of tori, which is well-known to be an orbifold; in higher genus, the failure to be a manifold is even more dramatic). Local coordinates are given by $(X,\omega )\mapsto \left(\int _{\gamma _{1}}\omega ,\ldots ,\int _{\gamma _{n}}\omega \right)$ where $n=\dim(H_{1}(S,\{x_{1},\ldots ,x_{m}\}))=2g+m-1$ and $\gamma _{1},\ldots ,\gamma _{n}$ is, as above, a symplectic basis of this space.

Masur–Veech volumes

The stratum ${\mathcal {H}}(\alpha )$ admits a ${\mathbb {C} }^{*}$-action and thus a real and complex projectivization ${{\mathcal {H}}(\alpha )}\to {\mathcal {H}}_{1}(\alpha )\to {\mathcal {H}}_{2}(\alpha )$.
The real projectivization admits a natural section ${\mathcal {H}}_{1}(\alpha )\to {\mathcal {H}}(\alpha )$ if we define it as the space of translation surfaces of area 1. The existence of the above period coordinates allows one to endow the stratum ${\mathcal {H}}(\alpha )$ with an integral affine structure and thus a natural volume form $\nu $. We also get a volume form $\nu _{1}(\alpha )$ on ${\mathcal {H}}_{1}(\alpha )$ by disintegration of $\nu $. The Masur–Veech volume $Vol(\alpha )$ is the total volume of ${\mathcal {H}}_{1}(\alpha )$ for $\nu _{1}(\alpha )$. This volume was proved to be finite independently by William A. Veech[1] and Howard Masur.[2] In the 1990s Maxim Kontsevich and Anton Zorich evaluated these volumes numerically by counting the lattice points of ${\mathcal {H}}(\alpha )$. They observed that $Vol(\alpha )$ should be of the form $\pi ^{2g}$ times a rational number. From this observation they expected the existence of a formula expressing the volumes in terms of intersection numbers on moduli spaces of curves. Alex Eskin and Andrei Okounkov gave the first algorithm to compute these volumes. They showed that the generating series of these numbers are q-expansions of computable quasi-modular forms. Using this algorithm they could confirm the numerical observation of Kontsevich and Zorich.[3] More recently Chen, Möller, Sauvaget, and Don Zagier showed that the volumes can be computed as intersection numbers on an algebraic compactification of ${\mathcal {H}}_{2}(\alpha )$. Extending this formula to strata of half-translation surfaces is still an open problem.[4]

The SL2(R)-action

If $(X,\omega )$ is a translation surface obtained by identifying the faces of a polygon $P$ and $g\in \mathrm {SL} _{2}(\mathbb {R} )$ then the translation surface $g\cdot (X,\omega )$ is that associated to the polygon $g(P)$. This defines a continuous action of $\mathrm {SL} _{2}(\mathbb {R} )$ on the moduli space ${\mathcal {H}}$ which preserves the strata ${\mathcal {H}}(\alpha )$. This action descends to an action on ${\mathcal {H}}_{1}(\alpha )$ that is ergodic with respect to $\nu _{1}$.

Half-translation surfaces

Definitions

A half-translation surface is defined similarly to a translation surface but allowing the gluing maps to have a nontrivial linear part which is a half turn. Formally, a half-translation surface is defined geometrically by taking a collection of polygons in the Euclidean plane and identifying faces by maps of the form $z\mapsto \pm z+w$ (a "half-translation"). Note that a face can be identified with itself. The geometric structure obtained in this way is a flat metric outside of a finite number of singular points with cone angles positive multiples of $\pi $. As in the case of translation surfaces there is an analytic interpretation: a half-translation surface can be interpreted as a pair $(X,\phi )$ where $X$ is a Riemann surface and $\phi $ a quadratic differential on $X$. To pass from the geometric picture to the analytic picture one simply takes the quadratic differential defined locally by $(dz)^{2}$ (which is invariant under half-translations), and for the other direction one takes the Riemannian metric induced by $\phi $, which is smooth and flat outside of the zeros of $\phi $.

Relation with Teichmüller geometry

If $X$ is a Riemann surface then the vector space of quadratic differentials on $X$ is naturally identified with the tangent space to Teichmüller space at any point above $X$. This can be proven by analytic means using the Bers embedding.
Half-translation surfaces can be used to give a more geometric interpretation of this: if $(X,g),(Y,h)$ are two points in Teichmüller space then by Teichmüller's mapping theorem there exist two polygons $P,Q$ whose faces can be identified by half-translations to give flat surfaces with underlying Riemann surfaces isomorphic to $X,Y$ respectively, and an affine map $f$ of the plane sending $P$ to $Q$ which has the smallest distortion among the quasiconformal mappings in its isotopy class, and which is isotopic to $h\circ g^{-1}$. Everything is determined uniquely up to scaling if we ask that $f$ be of the form $f_{s}$, where $f_{t}:(x,y)\mapsto (e^{t}x,e^{-t}y)$, for some $s>0$; we denote by $X_{t}$ the Riemann surface obtained from the polygon $f_{t}(P)$. Now the path $t\mapsto (X_{t},f_{t}\circ g)$ in Teichmüller space joins $(X,g)$ to $(Y,h)$, and differentiating it at $t=0$ gives a vector in the tangent space; since $(Y,h)$ was arbitrary we obtain a bijection. In fact the paths used in this construction are Teichmüller geodesics. An interesting fact is that while the geodesic ray associated to a flat surface corresponds to a measured foliation, and thus the directions in tangent space are identified with the Thurston boundary, the Teichmüller geodesic ray associated to a flat surface does not always converge to the corresponding point on the boundary,[5] though almost all such rays do so.[6]

Veech surfaces

The Veech group

If $(X,\omega )$ is a translation surface its Veech group is the Fuchsian group which is the image in $\mathrm {PSL} _{2}(\mathbb {R} )$ of the subgroup $\mathrm {SL} (X,\omega )\subset \mathrm {SL} _{2}(\mathbb {R} )$ of transformations $g$ such that $g\cdot (X,\omega )$ is isomorphic (as a translation surface) to $(X,\omega )$. Equivalently, $\mathrm {SL} (X,\omega )$ is the group of derivatives of affine diffeomorphisms $(X,\omega )\to (X,\omega )$ (where affine is defined locally outside the singularities, with respect to the affine structure induced by the translation structure). Veech groups have the following properties:[7] • They are discrete subgroups in $\mathrm {PSL} _{2}(\mathbb {R} )$; • They are never cocompact. Veech groups can be either finitely generated or not.[8]

Veech surfaces

A Veech surface is by definition a translation surface whose Veech group is a lattice in $\mathrm {PSL} _{2}(\mathbb {R} )$, equivalently its action on the hyperbolic plane admits a fundamental domain of finite volume. Since it is not cocompact it must then contain parabolic elements. Examples of Veech surfaces are the square-tiled surfaces, whose Veech groups are commensurable to the modular group $\mathrm {PSL} _{2}(\mathbb {Z} )$.[9][10] The square can be replaced by any parallelogram (the translation surfaces obtained are exactly those obtained as ramified covers of a flat torus).
In fact the Veech group is arithmetic (which amounts to it being commensurable to the modular group) if and only if the surface is tiled by parallelograms.[10] There exist Veech surfaces whose Veech group is not arithmetic, for example the surface obtained from two regular pentagons glued along an edge: in this case the Veech group is a non-arithmetic Hecke triangle group.[9] On the other hand, there are still some arithmetic constraints on the Veech group of a Veech surface: for example its trace field is a number field[10] that is totally real.[11]

Geodesic flow on translation surfaces

Geodesics

A geodesic in a translation surface (or a half-translation surface) is a parametrised curve which is, outside of the singular points, locally the image of a straight line in Euclidean space parametrised by arclength. If a geodesic arrives at a singularity it is required to stop there. Thus a maximal geodesic is a curve defined on a closed interval, which is the whole real line if it does not meet any singular point. A geodesic is closed or periodic if its image is compact, in which case it is either a circle if it does not meet any singularity, or an arc between two (possibly equal) singularities. In the latter case the geodesic is called a saddle connection. If $(X,\omega )$ is a translation surface and $\theta \in \mathbb {R} /2\pi \mathbb {Z} $ (or $\theta \in \mathbb {R} /\pi \mathbb {Z} $ in the case of a half-translation surface), then the geodesics with direction $\theta $ are well-defined on $X$: they are those curves $c$ which satisfy $\omega ({\overset {\cdot }{c}})=e^{i\theta }$ (or $\phi ({\overset {\cdot }{c}})=e^{i\theta }$ in the case of a half-translation surface $(X,\phi )$). The geodesic flow on $(X,\omega )$ with direction $\theta $ is the flow $\phi _{t}$ on $X$ where $t\mapsto \phi _{t}(p)$ is the geodesic starting at $p$ with direction $\theta $ if $p$ is not singular.

Dynamical properties

On a flat torus the geodesic flow in a given direction has the property that it is either periodic or ergodic. In general this is not true: there may be directions in which the flow is minimal (meaning every orbit is dense in the surface) but not ergodic.[12] On the other hand, on a compact translation surface the flow retains from the simplest case of the flat torus the property that it is ergodic in almost every direction.[13] Another natural question is to establish asymptotic estimates for the number of closed geodesics or saddle connections of a given length. On a flat torus $T$ there are no saddle connections and the number of closed geodesics of length $\leq L$ is equivalent to $L^{2}/\operatorname {volume} (T)$.
In general one can only obtain bounds: if $(X,\omega )$ is a compact translation surface of genus $g$ then there exist constants (depending only on the genus) $c_{1},c_{2}$ such that both the number $N_{\mathrm {cg} }(L)$ of closed geodesics and the number $N_{\mathrm {sc} }(L)$ of saddle connections of length $\leq L$ satisfy ${\frac {c_{1}L^{2}}{\operatorname {volume} (X,\omega )}}\leq N_{\mathrm {cg} }(L),N_{\mathrm {sc} }(L)\leq {\frac {c_{2}L^{2}}{\operatorname {volume} (X,\omega )}}.$ Restricting to probabilistic results, it is possible to get better estimates: given a genus $g$, a partition $\alpha $ of $2g-2$ and a connected component ${\mathcal {C}}$ of the stratum ${\mathcal {H}}(\alpha )$ there exist constants $c_{\mathrm {cg} },c_{\mathrm {sc} }$ such that for almost every $(X,\omega )\in {\mathcal {C}}$ the asymptotic equivalent holds:[13] $N_{\mathrm {cg} }(L)\sim {\frac {c_{\mathrm {cg} }L^{2}}{\operatorname {volume} (X,\omega )}}$, $N_{\mathrm {sc} }(L)\sim {\frac {c_{\mathrm {sc} }L^{2}}{\operatorname {volume} (X,\omega )}}.$ The constants $c_{\mathrm {cg} },c_{\mathrm {sc} }$ are called Siegel–Veech constants. Using the ergodicity of the $\mathrm {SL} _{2}(\mathbb {R} )$-action on ${\mathcal {H}}(\alpha )$, it was shown that these constants can explicitly be computed as ratios of certain Masur–Veech volumes.[14]

Veech dichotomy

The geodesic flow on a Veech surface is much better behaved than in general. This is expressed via the following result, called the Veech dichotomy:[15] Let $(X,\omega )$ be a Veech surface and $\theta $ a direction. Then either all trajectories defined over $\mathbb {R} $ are periodic, or the flow in the direction $\theta $ is ergodic.

Relation with billiards

If $P_{0}$ is a polygon in the Euclidean plane and $\theta \in \mathbb {R} /2\pi \mathbb {Z} $ a direction there is a continuous dynamical system called a billiard. The trajectory of a point inside the polygon is defined as follows: as long as it does not touch the boundary it proceeds in a straight line at unit speed; when it touches the interior of an edge it bounces back (i.e. its direction changes with an orthogonal reflection in the perpendicular of the edge), and when it touches a vertex it stops. This dynamical system is equivalent to the geodesic flow on a flat surface: just double the polygon along the edges and put a flat metric everywhere but at the vertices, which become singular points with cone angle twice the angle of the polygon at the corresponding vertex. This surface is not a translation surface or a half-translation surface, but in some cases it is related to one. Namely, if all angles of the polygon $P_{0}$ are rational multiples of $\pi $ there is a ramified cover of this surface which is a translation surface, which can be constructed from a union of copies of $P_{0}$. The dynamics of the billiard flow can then be studied through the geodesic flow on the translation surface. For example, the billiard in a square is related in this way to the billiard on the flat torus constructed from four copies of the square; the billiard in an equilateral triangle gives rise to the flat torus constructed from a hexagon.
The billiard in an "L" shape constructed from squares is related to the geodesic flow on a square-tiled surface; the billiard in the triangle with angles $\pi /5,\pi /5,3\pi /5$ is related to the Veech surface built from the two regular pentagons described above.

Relation with interval exchange transformations

Let $(X,\omega )$ be a translation surface and $\theta $ a direction, and let $\phi _{t}$ be the geodesic flow on $(X,\omega )$ with direction $\theta $. Let $I$ be a geodesic segment in the direction orthogonal to $\theta $, and define the first-return (Poincaré) map $\sigma :I\to I$ as follows: $\sigma (p)$ is equal to $\phi _{t}(p)$, where $t>0$ is the first time with $\phi _{t}(p)\in I$ (that is, $\phi _{s}(p)\not \in I$ for $0<s<t$). Then this map is an interval exchange transformation and it can be used to study the dynamics of the geodesic flow.[16]

Notes

1. Veech, William A. (1982). "Gauss Measures for Transformations on the Space of Interval Exchange Maps". Annals of Mathematics. 115 (2): 201–242. doi:10.2307/1971391. JSTOR 1971391. 2. Masur, Howard (1982). "Interval Exchange Transformations and Measured Foliations". Annals of Mathematics. 115 (1): 169–200. doi:10.2307/1971341. JSTOR 1971341. 3. Eskin, Alex; Okounkov, Andrei (2001). "Asymptotics of numbers of branched coverings of a torus and volumes of moduli spaces of holomorphic differentials". Inventiones Mathematicae. 145 (1): 59–103. arXiv:math/0006171. Bibcode:2001InMat.145...59E. doi:10.1007/s002220100142. S2CID 14125769. 4. Chen, Dawei; Möller, Martin; Sauvaget, Adrien; Zagier, Don Bernhard (2019). "Masur–Veech volumes and intersection theory on moduli spaces of abelian differentials". Inventiones Mathematicae. 222 (1): 283. arXiv:1901.01785. Bibcode:2020InMat.222..283C. doi:10.1007/s00222-020-00969-4. S2CID 119655348. 5. Lenzhen, Anna (2008). "Teichmüller geodesics that do not have a limit in PMF". Geometry and Topology. 12: 177–197. arXiv:math/0511001. doi:10.2140/gt.2008.12.177. S2CID 16047629. 6. Masur, Howard (1982). "Two boundaries of Teichmüller space". Duke Math. J. 49: 183–190. doi:10.1215/s0012-7094-82-04912-2. MR 0650376. 7. Veech 2006. 8. McMullen, Curtis T. (2003). "Teichmüller geodesics of infinite complexity". Acta Math. 191 (2): 191–223. doi:10.1007/bf02392964. 9. Veech 1989. 10. Gutkin & Judge 2000. 11. Hubert, Pascal; Lanneau, Erwan (2006). "Veech groups without parabolic elements". Duke Mathematical Journal. 133 (2): 335–346. arXiv:math/0503047. doi:10.1215/s0012-7094-06-13326-4. S2CID 14274833. 12. Masur 2006, Theorem 2. 13. Zorich 2006, 6.1. 14. Eskin, Alex; Masur, Howard; Zorich, Anton (2003). "Moduli spaces of abelian differentials: the principal boundary, counting problems, and the Siegel–Veech constants". Publications Mathématiques de l'IHÉS. 97: 61–179. arXiv:math/0202134. doi:10.1007/s10240-003-0015-1. S2CID 119713402. 15. Veech 1989, Theorem 1. 16. Zorich 2006, Chapter 5.

References

• Hubert, Pascal; Schmidt, Thomas A. (2006), "An introduction to Veech surfaces" (PDF), Handbook of dynamical systems. Vol. 1B, Handbook of Dynamical Systems, vol. 1, Elsevier B. V., Amsterdam, pp. 501–526, doi:10.1016/S1874-575X(06)80031-7, ISBN 9780444520555, MR 2186246 • Gutkin, Eugene; Judge, Chris (2000), "Affine mappings of translation surfaces: geometry and arithmetic", Duke Math. J., 103 (3): 191–213, doi:10.1215/S0012-7094-00-10321-3 • Masur, Howard (2006), "Ergodic theory of translation surfaces", Handbook of dynamical systems. Vol. 1B, Handbook of Dynamical Systems, vol. 1, Elsevier B.
V., Amsterdam, pp. 527–547, doi:10.1016/S1874-575X(06)80032-9, ISBN 9780444520555, MR 2186247 • Veech, W. A. (1989), "Teichmüller curves in moduli space, Eisenstein series and an application to triangular billiards", Inventiones Mathematicae, 97 (3): 553–583, Bibcode:1989InMat..97..553V, doi:10.1007/BF01388890, ISSN 0020-9910, MR 1005006, S2CID 189831945 • Zorich, Anton (2006). "Flat surfaces". In Cartier, P.; Julia, B.; Moussa, P.; Vanhove, P. (eds.). Frontiers in Number Theory, Physics and Geometry. Volume 1: On random matrices, zeta functions and dynamical systems. Springer-Verlag. arXiv:math/0609392. Bibcode:2006math......9392Z.
Verlet integration

Verlet integration (French pronunciation: [vɛʁˈlɛ]) is a numerical method used to integrate Newton's equations of motion.[1] It is frequently used to calculate trajectories of particles in molecular dynamics simulations and computer graphics. The algorithm was first used in 1791 by Jean Baptiste Delambre and has been rediscovered many times since then, most recently by Loup Verlet in the 1960s for use in molecular dynamics. It was also used by P. H. Cowell and A. C. C. Crommelin in 1909 to compute the orbit of Halley's Comet, and by Carl Størmer in 1907 to study the trajectories of electrical particles in a magnetic field (hence it is also called Störmer's method).[2] The Verlet integrator provides good numerical stability, as well as other properties that are important in physical systems such as time reversibility and preservation of the symplectic form on phase space, at no significant additional computational cost over the simple Euler method.

Basic Störmer–Verlet

For a second-order differential equation of the type ${\ddot {\mathbf {x} }}(t)=\mathbf {A} {\bigl (}\mathbf {x} (t){\bigr )}$ with initial conditions $\mathbf {x} (t_{0})=\mathbf {x} _{0}$ and ${\dot {\mathbf {x} }}(t_{0})=\mathbf {v} _{0}$, an approximate numerical solution $\mathbf {x} _{n}\approx \mathbf {x} (t_{n})$ at the times $t_{n}=t_{0}+n\,\Delta t$ with step size $\Delta t>0$ can be obtained by the following method: 1. set $\mathbf {x} _{1}=\mathbf {x} _{0}+\mathbf {v} _{0}\,\Delta t+{\tfrac {1}{2}}\mathbf {A} (\mathbf {x} _{0})\,\Delta t^{2}$, 2. for n = 1, 2, ... iterate $\mathbf {x} _{n+1}=2\mathbf {x} _{n}-\mathbf {x} _{n-1}+\mathbf {A} (\mathbf {x} _{n})\,\Delta t^{2}.$

Equations of motion

Newton's equation of motion for conservative physical systems is ${\boldsymbol {M}}{\ddot {\mathbf {x} }}(t)=F{\bigl (}\mathbf {x} (t){\bigr )}=-\nabla V{\bigl (}\mathbf {x} (t){\bigr )},$ or individually $m_{k}{\ddot {\mathbf {x} }}_{k}(t)=F_{k}{\bigl (}\mathbf {x} (t){\bigr )}=-\nabla _{\mathbf {x} _{k}}V\left(\mathbf {x} (t)\right),$ where • $t$ is the time, • $\mathbf {x} (t)={\bigl (}\mathbf {x} _{1}(t),\ldots ,\mathbf {x} _{N}(t){\bigr )}$ is the ensemble of the position vectors of $N$ objects, • $V$ is the scalar potential function, • $F$ is the negative gradient of the potential, giving the ensemble of forces on the particles, • ${\boldsymbol {M}}$ is the mass matrix, typically diagonal with blocks of mass $m_{k}$ for every particle. This equation, for various choices of the potential function $V$, can be used to describe the evolution of diverse physical systems, from the motion of interacting molecules to the orbit of the planets. After a transformation to bring the mass to the right side and forgetting the structure of multiple particles, the equation may be simplified to ${\ddot {\mathbf {x} }}(t)=\mathbf {A} {\bigl (}\mathbf {x} (t){\bigr )}$ with some suitable vector-valued function $\mathbf {A} (\mathbf {x} )$ representing the position-dependent acceleration. Typically, an initial position $\mathbf {x} (0)=\mathbf {x} _{0}$ and an initial velocity $\mathbf {v} (0)={\dot {\mathbf {x} }}(0)=\mathbf {v} _{0}$ are also given.

Verlet integration (without velocities)

To discretize and numerically solve this initial value problem, a time step $\Delta t>0$ is chosen, and the sampling-point sequence $t_{n}=n\,\Delta t$ is considered. The task is to construct a sequence of points $\mathbf {x} _{n}$ that closely follow the points $\mathbf {x} (t_{n})$ on the trajectory of the exact solution.
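The two-step method above translates directly into code. A minimal Python sketch (the harmonic-oscillator test problem, step size, and function names are illustrative assumptions, not from the article):

import math

def verlet(acc, x0, v0, dt, n_steps):
    # Basic Stormer-Verlet: returns the list [x_0, x_1, ..., x_{n_steps}].
    xs = [x0, x0 + v0 * dt + 0.5 * acc(x0) * dt**2]  # step 1 of the method
    for _ in range(n_steps - 1):
        # Step 2: x_{n+1} = 2 x_n - x_{n-1} + A(x_n) dt^2
        xs.append(2 * xs[-1] - xs[-2] + acc(xs[-1]) * dt**2)
    return xs

# Test problem: x'' = -x, with exact solution cos(t) for x(0)=1, v(0)=0.
dt, n = 0.01, 1000
xs = verlet(lambda x: -x, 1.0, 0.0, dt, n)
print(xs[-1], math.cos(n * dt))  # the two values agree to a few decimals

Note that no velocities are stored; the velocity only enters through the initialization of $\mathbf {x} _{1}$, exactly as in step 1 above.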
Where Euler's method uses the forward difference approximation to the first derivative in differential equations of order one, Verlet integration can be seen as using the central difference approximation to the second derivative: ${\begin{aligned}{\frac {\Delta ^{2}\mathbf {x} _{n}}{\Delta t^{2}}}&={\frac {{\frac {\mathbf {x} _{n+1}-\mathbf {x} _{n}}{\Delta t}}-{\frac {\mathbf {x} _{n}-\mathbf {x} _{n-1}}{\Delta t}}}{\Delta t}}\\[6pt]&={\frac {\mathbf {x} _{n+1}-2\mathbf {x} _{n}+\mathbf {x} _{n-1}}{\Delta t^{2}}}=\mathbf {a} _{n}=\mathbf {A} (\mathbf {x} _{n}).\end{aligned}}$ Verlet integration in the form used as the Störmer method[3] uses this equation to obtain the next position vector from the previous two without using the velocity as ${\begin{aligned}\mathbf {x} _{n+1}&=2\mathbf {x} _{n}-\mathbf {x} _{n-1}+\mathbf {a} _{n}\,\Delta t^{2},\\[6pt]\mathbf {a} _{n}&=\mathbf {A} (\mathbf {x} _{n}).\end{aligned}}$ Discretisation error The time symmetry inherent in the method reduces the level of local errors introduced into the integration by the discretization by removing all odd-degree terms, here the terms in $\Delta t$ of degree three. The local error is quantified by inserting the exact values $\mathbf {x} (t_{n-1}),\mathbf {x} (t_{n}),\mathbf {x} (t_{n+1})$ into the iteration and computing the Taylor expansions at time $t=t_{n}$ of the position vector $\mathbf {x} (t\pm \Delta t)$ in different time directions: ${\begin{aligned}\mathbf {x} (t+\Delta t)&=\mathbf {x} (t)+\mathbf {v} (t)\Delta t+{\frac {\mathbf {a} (t)\Delta t^{2}}{2}}+{\frac {\mathbf {b} (t)\Delta t^{3}}{6}}+{\mathcal {O}}\left(\Delta t^{4}\right)\\\mathbf {x} (t-\Delta t)&=\mathbf {x} (t)-\mathbf {v} (t)\Delta t+{\frac {\mathbf {a} (t)\Delta t^{2}}{2}}-{\frac {\mathbf {b} (t)\Delta t^{3}}{6}}+{\mathcal {O}}\left(\Delta t^{4}\right),\end{aligned}}$ where $\mathbf {x} $ is the position, $\mathbf {v} ={\dot {\mathbf {x} }}$ the velocity, $\mathbf {a} ={\ddot {\mathbf {x} }}$ the acceleration, and $\mathbf {b} ={\dot {\mathbf {a} }}={\overset {\dots }{\mathbf {x} }}$ the jerk (third derivative of the position with respect to the time). Adding these two expansions gives $\mathbf {x} (t+\Delta t)=2\mathbf {x} (t)-\mathbf {x} (t-\Delta t)+\mathbf {a} (t)\Delta t^{2}+{\mathcal {O}}\left(\Delta t^{4}\right).$ We can see that the first- and third-order terms from the Taylor expansion cancel out, thus making the Verlet integrator an order more accurate than integration by simple Taylor expansion alone. Caution should be applied to the fact that the acceleration here is computed from the exact solution, $\mathbf {a} (t)=\mathbf {A} {\bigl (}\mathbf {x} (t){\bigr )}$, whereas in the iteration it is computed at the central iteration point, $\mathbf {a} _{n}=\mathbf {A} (\mathbf {x} _{n})$. In computing the global error, that is the distance between exact solution and approximation sequence, those two terms do not cancel exactly, influencing the order of the global error. A simple example To gain insight into the relation of local and global errors, it is helpful to examine simple examples where the exact solution, as well as the approximate solution, can be expressed in explicit formulas. The standard example for this task is the exponential function. Consider the linear differential equation ${\ddot {x}}(t)=w^{2}x(t)$ with a constant $w$. Its exact basis solutions are $e^{wt}$ and $e^{-wt}$. 
The Störmer method applied to this differential equation leads to a linear recurrence relation $x_{n+1}-2x_{n}+x_{n-1}=h^{2}w^{2}x_{n},$ or $x_{n+1}-2\left(1+{\tfrac {1}{2}}(wh)^{2}\right)x_{n}+x_{n-1}=0.$ It can be solved by finding the roots of its characteristic polynomial $q^{2}-2\left(1+{\tfrac {1}{2}}(wh)^{2}\right)q+1=0$. These are $q_{\pm }=1+{\tfrac {1}{2}}(wh)^{2}\pm wh{\sqrt {1+{\tfrac {1}{4}}(wh)^{2}}}.$ The basis solutions of the linear recurrence are $x_{n}=q_{+}^{n}$ and $x_{n}=q_{-}^{n}$. To compare them with the exact solutions, Taylor expansions are computed: ${\begin{aligned}q_{+}&=1+{\tfrac {1}{2}}(wh)^{2}+wh\left(1+{\tfrac {1}{8}}(wh)^{2}-{\tfrac {3}{128}}(wh)^{4}+{\mathcal {O}}\left(h^{6}\right)\right)\\&=1+(wh)+{\tfrac {1}{2}}(wh)^{2}+{\tfrac {1}{8}}(wh)^{3}-{\tfrac {3}{128}}(wh)^{5}+{\mathcal {O}}\left(h^{7}\right).\end{aligned}}$ The quotient of this series with the one of the exponential $e^{wh}$ starts with $1-{\tfrac {1}{24}}(wh)^{3}+{\mathcal {O}}\left(h^{5}\right)$, so ${\begin{aligned}q_{+}&=\left(1-{\tfrac {1}{24}}(wh)^{3}+{\mathcal {O}}\left(h^{5}\right)\right)e^{wh}\\&=e^{-{\frac {1}{24}}(wh)^{3}+{\mathcal {O}}\left(h^{5}\right)}\,e^{wh}.\end{aligned}}$ From there it follows that for the first basis solution the error can be computed as ${\begin{aligned}x_{n}=q_{+}^{n}&=e^{-{\frac {1}{24}}(wh)^{2}\,wt_{n}+{\mathcal {O}}\left(h^{4}\right)}\,e^{wt_{n}}\\&=e^{wt_{n}}\left(1-{\tfrac {1}{24}}(wh)^{2}\,wt_{n}+{\mathcal {O}}(h^{4})\right)\\&=e^{wt_{n}}+{\mathcal {O}}\left(h^{2}t_{n}e^{wt_{n}}\right).\end{aligned}}$ That is, although the local discretization error is of order 4, due to the second order of the differential equation the global error is of order 2, with a constant that grows exponentially in time. Starting the iteration Note that at the start of the Verlet iteration at step $n=1$, time $t=t_{1}=\Delta t$, computing $\mathbf {x} _{2}$, one already needs the position vector $\mathbf {x} _{1}$ at time $t=t_{1}$. At first sight, this could give problems, because the initial conditions are known only at the initial time $t_{0}=0$. However, from these the acceleration $\mathbf {a} _{0}=\mathbf {A} (\mathbf {x} _{0})$ is known, and a suitable approximation for the position at the first time step can be obtained using the Taylor polynomial of degree two: $\mathbf {x} _{1}=\mathbf {x} _{0}+\mathbf {v} _{0}\Delta t+{\tfrac {1}{2}}\mathbf {a} _{0}\Delta t^{2}=\mathbf {x} (\Delta t)+{\mathcal {O}}\left(\Delta t^{3}\right).$ The error on the first time step then is of order ${\mathcal {O}}\left(\Delta t^{3}\right)$. This is not considered a problem, because over a simulation with a large number of time steps the error of the first step is only a negligibly small part of the total error, which at time $t_{n}$ is of the order ${\mathcal {O}}\left(e^{Lt_{n}}\Delta t^{2}\right)$, both for the distance of the position vectors $\mathbf {x} _{n}$ to $\mathbf {x} (t_{n})$ and for the distance of the divided differences ${\tfrac {\mathbf {x} _{n+1}-\mathbf {x} _{n}}{\Delta t}}$ to ${\tfrac {\mathbf {x} (t_{n+1})-\mathbf {x} (t_{n})}{\Delta t}}$. Moreover, to obtain this second-order global error, the initial error needs to be of at least third order.
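The second-order global error discussed above can be checked numerically on the example $\ddot{x}(t)=w^{2}x(t)$: halving the step size should roughly quarter the error at a fixed final time. A minimal sketch (the parameters are arbitrary choices for illustration):

#include <cmath>
#include <cstdio>

// Iterate x_{n+1} = 2(1 + (wh)^2/2) x_n - x_{n-1}, started from the
// exact values x_0 = 1 and x_1 = e^{wh}, and compare with e^{wT}.
int main() {
    const double w = 1.0, T = 1.0;
    for (int steps : {100, 200, 400}) {
        double h = T / steps;
        double xm = 1.0, x = std::exp(w * h);   // x_0, x_1
        for (int n = 1; n < steps; ++n) {
            double xp = 2.0 * (1.0 + 0.5 * w * w * h * h) * x - xm;
            xm = x;
            x = xp;
        }
        std::printf("h = %.5f  error = %.3e\n", h, std::fabs(x - std::exp(w * T)));
    }
}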
Non-constant time differences A disadvantage of the Störmer–Verlet method is that if the time step ($\Delta t$) changes, the method does not approximate the solution to the differential equation. This can be corrected using the formula[4] $\mathbf {x} _{i+1}=\mathbf {x} _{i}+\left(\mathbf {x} _{i}-\mathbf {x} _{i-1}\right){\frac {\Delta t_{i}}{\Delta t_{i-1}}}+\mathbf {a} _{i}\Delta t_{i}^{2}.$ A more exact derivation uses the Taylor series (to second order) at $t_{i}$ for times $t_{i+1}=t_{i}+\Delta t_{i}$ and $t_{i-1}=t_{i}-\Delta t_{i-1}$ to obtain, after elimination of $\mathbf {v} _{i}$, ${\frac {\mathbf {x} _{i+1}-\mathbf {x} _{i}}{\Delta t_{i}}}+{\frac {\mathbf {x} _{i-1}-\mathbf {x} _{i}}{\Delta t_{i-1}}}=\mathbf {a} _{i}\,{\frac {\Delta t_{i}+\Delta t_{i-1}}{2}},$ so that the iteration formula becomes $\mathbf {x} _{i+1}=\mathbf {x} _{i}+(\mathbf {x} _{i}-\mathbf {x} _{i-1}){\frac {\Delta t_{i}}{\Delta t_{i-1}}}+\mathbf {a} _{i}\,{\frac {\Delta t_{i}+\Delta t_{i-1}}{2}}\,\Delta t_{i}.$
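A minimal sketch of this time-corrected iteration in C++; the state names (x, x_prev, dt_prev) and the function name are illustrative, not a standard API:

// x_{i+1} = x_i + (x_i - x_{i-1}) * dt_i/dt_{i-1}
//               + a_i * (dt_i + dt_{i-1})/2 * dt_i
double variable_step_verlet(double& x, double& x_prev, double& dt_prev,
                            double a, double dt) {
    double x_next = x + (x - x_prev) * (dt / dt_prev)
                  + a * 0.5 * (dt + dt_prev) * dt;
    x_prev = x;      // shift the position history by one step
    x = x_next;
    dt_prev = dt;    // remember this step size for the next call
    return x_next;
}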
Computing velocities – Störmer–Verlet method The velocities are not explicitly given in the basic Störmer equation, but often they are necessary for the calculation of certain physical quantities like the kinetic energy. This can create technical challenges in molecular dynamics simulations, because kinetic energy and instantaneous temperatures at time $t$ cannot be calculated for a system until the positions are known at time $t+\Delta t$. This deficiency can either be dealt with using the velocity Verlet algorithm or by estimating the velocity using the position terms and the mean value theorem: $\mathbf {v} (t)={\frac {\mathbf {x} (t+\Delta t)-\mathbf {x} (t-\Delta t)}{2\Delta t}}+{\mathcal {O}}\left(\Delta t^{2}\right).$ Note that this velocity term is a step behind the position term, since it is the velocity at time $t$, not $t+\Delta t$; accordingly, $\mathbf {v} _{n}={\tfrac {\mathbf {x} _{n+1}-\mathbf {x} _{n-1}}{2\Delta t}}$ is a second-order approximation to $\mathbf {v} (t_{n})$. With the same argument, but halving the time step, $\mathbf {v} _{n+{\frac {1}{2}}}={\tfrac {\mathbf {x} _{n+1}-\mathbf {x} _{n}}{\Delta t}}$ is a second-order approximation to $\mathbf {v} \left(t_{n+{\frac {1}{2}}}\right)$, with $t_{n+{\frac {1}{2}}}=t_{n}+{\tfrac {1}{2}}\Delta t$. One can shorten the interval to approximate the velocity at time $t+\Delta t$ at the cost of accuracy: $\mathbf {v} (t+\Delta t)={\frac {\mathbf {x} (t+\Delta t)-\mathbf {x} (t)}{\Delta t}}+{\mathcal {O}}(\Delta t).$ Velocity Verlet A related, and more commonly used, algorithm is the velocity Verlet algorithm,[5] similar to the leapfrog method, except that the velocity and position are calculated at the same value of the time variable (leapfrog does not, as the name suggests). This uses a similar approach, but explicitly incorporates velocity, solving the problem of the first time step in the basic Verlet algorithm: ${\begin{aligned}\mathbf {x} (t+\Delta t)&=\mathbf {x} (t)+\mathbf {v} (t)\,\Delta t+{\tfrac {1}{2}}\,\mathbf {a} (t)\Delta t^{2},\\[6pt]\mathbf {v} (t+\Delta t)&=\mathbf {v} (t)+{\frac {\mathbf {a} (t)+\mathbf {a} (t+\Delta t)}{2}}\Delta t.\end{aligned}}$ It can be shown that the error in the velocity Verlet is of the same order as in the basic Verlet. Note that the velocity algorithm is not necessarily more memory-consuming, because, in basic Verlet, we keep track of two vectors of position, while in velocity Verlet, we keep track of one vector of position and one vector of velocity. The standard implementation scheme of this algorithm is: 1. Calculate $\mathbf {v} \left(t+{\tfrac {1}{2}}\,\Delta t\right)=\mathbf {v} (t)+{\tfrac {1}{2}}\,\mathbf {a} (t)\,\Delta t$. 2. Calculate $\mathbf {x} (t+\Delta t)=\mathbf {x} (t)+\mathbf {v} \left(t+{\tfrac {1}{2}}\,\Delta t\right)\,\Delta t$. 3. Derive $\mathbf {a} (t+\Delta t)$ from the interaction potential using $\mathbf {x} (t+\Delta t)$. 4. Calculate $\mathbf {v} (t+\Delta t)=\mathbf {v} \left(t+{\tfrac {1}{2}}\,\Delta t\right)+{\tfrac {1}{2}}\,\mathbf {a} (t+\Delta t)\Delta t$. This algorithm also works with variable time steps, and is identical to the 'kick-drift-kick' form of leapfrog method integration.
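The four-step scheme above can be written as a single update; this is a minimal sketch, where accel is the user-supplied acceleration function A(x) derived from the potential (an assumed callback, not a library routine), and the acceleration is cached between calls so it is evaluated only once per step. As noted below, this assumes the acceleration does not depend on the velocity.

void velocity_verlet_step(double& x, double& v, double& a, double dt,
                          double (*accel)(double)) {
    v += 0.5 * a * dt;   // 1. kick: half-step velocity v(t + dt/2)
    x += v * dt;         // 2. drift: full-step position x(t + dt)
    a = accel(x);        // 3. new acceleration from the potential
    v += 0.5 * a * dt;   // 4. kick: complete the velocity v(t + dt)
}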
Eliminating the half-step velocity, this algorithm may be shortened to 1. Calculate $\mathbf {x} (t+\Delta t)=\mathbf {x} (t)+\mathbf {v} (t)\,\Delta t+{\tfrac {1}{2}}\,\mathbf {a} (t)\,\Delta t^{2}$. 2. Derive $\mathbf {a} (t+\Delta t)$ from the interaction potential using $\mathbf {x} (t+\Delta t)$. 3. Calculate $\mathbf {v} (t+\Delta t)=\mathbf {v} (t)+{\tfrac {1}{2}}\,{\bigl (}\mathbf {a} (t)+\mathbf {a} (t+\Delta t){\bigr )}\Delta t$. Note, however, that this algorithm assumes that acceleration $\mathbf {a} (t+\Delta t)$ only depends on position $\mathbf {x} (t+\Delta t)$ and does not depend on velocity $\mathbf {v} (t+\Delta t)$. One might note that the long-term results of velocity Verlet, and similarly of leapfrog, are one order better than those of the semi-implicit Euler method. The algorithms are almost identical up to a shift by half a time step in the velocity. This can be proven by rotating the above loop to start at step 3 and then noticing that the acceleration term in step 1 could be eliminated by combining steps 2 and 4. The only difference is that the midpoint velocity in velocity Verlet is considered the final velocity in the semi-implicit Euler method. The global error of all Euler methods is of order one, whereas the global error of this method is, similar to the midpoint method, of order two. Additionally, if the acceleration indeed results from the forces in a conservative mechanical or Hamiltonian system, the energy of the approximation essentially oscillates around the constant energy of the exactly solved system, with a global error bound again of order one for semi-implicit Euler and order two for Verlet-leapfrog. The same goes for all other conserved quantities of the system, like linear or angular momentum, that are always preserved or nearly preserved in a symplectic integrator.[6] The velocity Verlet method is a special case of the Newmark-beta method with $\beta =0$ and $\gamma ={\tfrac {1}{2}}$. Algorithmic representation Since velocity Verlet is a generally useful algorithm in 3D applications, a general solution written in C++ could look like below. A simplified drag force is used to demonstrate change in acceleration; however, it is only needed if acceleration is not constant.

struct Body {
    Vec3d pos { 0.0, 0.0, 0.0 };
    Vec3d vel { 2.0, 0.0, 0.0 };   // 2 m/s along x-axis
    Vec3d acc { 0.0, 0.0, 0.0 };   // no acceleration at first
    double mass = 1.0;             // 1 kg
    double drag = 0.1;             // rho*C*Area – simplified drag for this example

    /**
     * Update pos and vel using "Velocity Verlet" integration
     * @param dt DeltaTime / time step [eg: 0.01]
     */
    void update(double dt) {
        Vec3d new_pos = pos + vel*dt + acc*(dt*dt*0.5);
        Vec3d new_acc = apply_forces(); // only needed if acceleration is not constant
        Vec3d new_vel = vel + (acc+new_acc)*(dt*0.5);
        pos = new_pos;
        vel = new_vel;
        acc = new_acc;
    }

    Vec3d apply_forces() const {
        Vec3d grav_acc = Vec3d{0.0, 0.0, -9.81}; // 9.81 m/s² down in the z-axis
        Vec3d drag_force = 0.5 * drag * (vel * vel); // D = 0.5 * (rho * C * Area * vel^2)
        Vec3d drag_acc = drag_force / mass; // a = F/m
        return grav_acc - drag_acc;
    }
};

Error terms The global truncation error of the Verlet method is ${\mathcal {O}}\left(\Delta t^{2}\right)$, both for position and velocity. This is in contrast with the fact that the local error in position is only ${\mathcal {O}}\left(\Delta t^{4}\right)$ as described above. The difference is due to the accumulation of the local truncation error over all of the iterations. The global error can be derived by noting the following: $\operatorname {error} {\bigl (}x(t_{0}+\Delta t){\bigr )}={\mathcal {O}}\left(\Delta t^{4}\right)$ and $x(t_{0}+2\Delta t)=2x(t_{0}+\Delta t)-x(t_{0})+\Delta t^{2}{\ddot {x}}(t_{0}+\Delta t)+{\mathcal {O}}\left(\Delta t^{4}\right).$ Therefore $\operatorname {error} {\bigl (}x(t_{0}+2\Delta t){\bigr )}=2\cdot \operatorname {error} {\bigl (}x(t_{0}+\Delta t){\bigr )}+{\mathcal {O}}\left(\Delta t^{4}\right)=3\,{\mathcal {O}}\left(\Delta t^{4}\right).$ Similarly: ${\begin{aligned}\operatorname {error} {\bigl (}x(t_{0}+3\Delta t){\bigr )}&=6\,{\mathcal {O}}\left(\Delta t^{4}\right),\\[6px]\operatorname {error} {\bigl (}x(t_{0}+4\Delta t){\bigr )}&=10\,{\mathcal {O}}\left(\Delta t^{4}\right),\\[6px]\operatorname {error} {\bigl (}x(t_{0}+5\Delta t){\bigr )}&=15\,{\mathcal {O}}\left(\Delta t^{4}\right),\end{aligned}}$ which can be generalized by induction to $\operatorname {error} {\bigl (}x(t_{0}+n\Delta t){\bigr )}={\frac {n(n+1)}{2}}\,{\mathcal {O}}\left(\Delta t^{4}\right).$ If we consider the global error in position between $x(t_{0})$ and $x(t_{0}+T)$, where $T=n\Delta t$, it is clear that $\operatorname {error} {\bigl (}x(t_{0}+T){\bigr )}=\left({\frac {T^{2}}{2\Delta t^{2}}}+{\frac {T}{2\Delta t}}\right){\mathcal {O}}\left(\Delta t^{4}\right),$ and therefore, the global (cumulative) error over a constant interval of time is given by $\operatorname {error} {\bigl (}x(t_{0}+T){\bigr )}={\mathcal {O}}\left(\Delta t^{2}\right).$ Because the velocity is determined in a non-cumulative way from the positions in the Verlet integrator, the global error in velocity is also ${\mathcal {O}}\left(\Delta t^{2}\right)$. In molecular dynamics simulations, the global error is typically far more important than the local error, and the Verlet integrator is therefore known as a second-order integrator. Constraints Systems of multiple particles with constraints are simpler to solve with Verlet integration than with Euler methods. Constraints between points may be, for example, potentials constraining them to a specific distance, or attractive forces. They may be modeled as springs connecting the particles.
Using springs of infinite stiffness, the model may then be solved with a Verlet algorithm. In one dimension, the relationship between the unconstrained positions ${\tilde {x}}_{i}^{(t)}$ and the actual positions $x_{i}^{(t)}$ of points $i$ at time $t$, given a desired constraint distance of $r$, can be found with the algorithm ${\begin{aligned}d_{1}&=x_{2}^{(t)}-x_{1}^{(t)},\\[6px]d_{2}&=\|d_{1}\|,\\[6px]d_{3}&={\frac {d_{2}-r}{d_{2}}},\\[6px]x_{1}^{(t+\Delta t)}&={\tilde {x}}_{1}^{(t+\Delta t)}+{\tfrac {1}{2}}d_{1}d_{3},\\[6px]x_{2}^{(t+\Delta t)}&={\tilde {x}}_{2}^{(t+\Delta t)}-{\tfrac {1}{2}}d_{1}d_{3}.\end{aligned}}$ Verlet integration is useful because it directly relates the force to the position, rather than solving the problem using velocities. Problems arise, however, when multiple constraining forces act on each particle. One way to solve this is to loop through every point in a simulation, so that at each point the constraint relaxation of the previous one is already used to speed up the spread of the information. In a simulation this may be implemented by using small time steps, by using a fixed number of constraint-solving steps per time step, or by solving constraints until they are met to a specified tolerance. When approximating the constraints locally to first order, this is the same as the Gauss–Seidel method. For small matrices it is known that LU decomposition is faster. Large systems can be divided into clusters (for example, each ragdoll = cluster). Inside clusters the LU method is used, between clusters the Gauss–Seidel method is used. The matrix code can be reused: the dependency of the forces on the positions can be approximated locally to first order, and the Verlet integration can be made more implicit. Sophisticated software, such as SuperLU,[7] exists to solve complex problems using sparse matrices. Specific techniques, such as using (clusters of) matrices, may be used to address the specific problem, such as that of a force propagating through a sheet of cloth without forming a sound wave.[8] Another way to solve holonomic constraints is to use constraint algorithms.
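A minimal sketch of the pairwise distance-constraint projection written out above, in one dimension (the names are illustrative; the same correction generalizes to vectors):

#include <cmath>

// Move the two endpoints half the constraint violation each, so that
// their separation returns to the desired distance r.
void satisfy_distance_constraint(double& x1, double& x2, double r) {
    double d1 = x2 - x1;           // current separation
    double d2 = std::fabs(d1);     // its length
    double d3 = (d2 - r) / d2;     // relative constraint violation
    x1 += 0.5 * d1 * d3;
    x2 -= 0.5 * d1 * d3;
}

In a full solver this projection would be applied to every constrained pair, typically several times per time step in Gauss–Seidel fashion, as described above.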
"Computer "Experiments" on Classical Fluids. I. Thermodynamical Properties of Lennard−Jones Molecules". Physical Review. 159 (1): 98–103. Bibcode:1967PhRv..159...98V. doi:10.1103/PhysRev.159.98. 2. Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007). "Section 17.4. Second-Order Conservative Equations". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. 3. webpage Archived 2004-08-03 at the Wayback Machine with a description of the Störmer method. 4. Dummer, Jonathan. "A Simple Time-Corrected Verlet Integration Method". 5. Swope, William C.; H. C. Andersen; P. H. Berens; K. R. Wilson (1 January 1982). "A computer simulation method for the calculation of equilibrium constants for the formation of physical clusters of molecules: Application to small water clusters". The Journal of Chemical Physics. 76 (1): 648 (Appendix). Bibcode:1982JChPh..76..637S. doi:10.1063/1.442716. 6. Hairer, Ernst; Lubich, Christian; Wanner, Gerhard (2003). "Geometric numerical integration illustrated by the Störmer/Verlet method". Acta Numerica. 12: 399–450. Bibcode:2003AcNum..12..399H. CiteSeerX 10.1.1.7.7106. doi:10.1017/S0962492902000144. S2CID 122016794. 7. SuperLU User's Guide. 8. Baraff, D.; Witkin, A. (1998). "Large Steps in Cloth Simulation" (PDF). Computer Graphics Proceedings. Annual Conference Series: 43–54. External links • Verlet Integration Demo and Code as a Java Applet • Advanced Character Physics by Thomas Jakobsen • Theory of Molecular Dynamics Simulations – bottom of page • Verlet integration implemented in modern JavaScript – bottom of page Numerical methods for integration First-order methods • Euler method • Backward Euler • Semi-implicit Euler • Exponential Euler Second-order methods • Verlet integration • Velocity Verlet • Trapezoidal rule • Beeman's algorithm • Midpoint method • Heun's method • Newmark-beta method • Leapfrog integration Higher-order methods • Exponential integrator • Runge–Kutta methods • List of Runge–Kutta methods • Linear multistep method • General linear methods • Backward differentiation formula • Yoshida • Gauss–Legendre method Theory • Symplectic integrator
Velocity Velocity is the speed and the direction of motion of an object. Velocity is a fundamental concept in kinematics, the branch of classical mechanics that describes the motion of bodies. It is a physical vector quantity: both magnitude and direction are needed to define it (as a change of direction occurs while racing cars turn on a curved track, their velocity is not constant). Common symbols for velocity are v, v and v→; its SI unit is the metre per second (m/s), with dimension L T−1, and other units such as mph and ft/s are also in use. The scalar absolute value (magnitude) of velocity is called speed, being a coherent derived unit whose quantity is measured in the SI (metric system) as metres per second (m/s or m⋅s−1). For example, "5 metres per second" is a scalar, whereas "5 metres per second east" is a vector. If there is a change in speed, direction or both, then the object is said to be undergoing an acceleration. Constant velocity vs acceleration To have a constant velocity, an object must have a constant speed in a constant direction. Constant direction constrains the object to motion along a straight path; thus, a constant velocity means motion in a straight line at a constant speed. For example, a car moving at a constant 20 kilometres per hour in a circular path has a constant speed, but does not have a constant velocity because its direction changes. Hence, the car is considered to be undergoing an acceleration. Difference between speed and velocity Speed, the scalar magnitude of a velocity vector, denotes only how fast an object is moving.[1][2] Equation of motion Average velocity Velocity is defined as the rate of change of position with respect to time, which may also be referred to as the instantaneous velocity to emphasize the distinction from the average velocity.
In some applications the average velocity of an object might be needed, that is to say, the constant velocity that would provide the same resultant displacement as a variable velocity v(t) over the same time interval Δt. Average velocity can be calculated as: ${\boldsymbol {\bar {v}}}={\frac {\Delta {\boldsymbol {x}}}{\Delta t}}.$ The magnitude of the average velocity is always less than or equal to the average speed of an object. This can be seen by realizing that while distance is always strictly increasing, displacement can increase or decrease in magnitude as well as change direction. In terms of a displacement-time (x vs. t) graph, the instantaneous velocity (or, simply, velocity) can be thought of as the slope of the tangent line to the curve at any point, and the average velocity as the slope of the secant line between two points with t coordinates equal to the boundaries of the time period for the average velocity. The average velocity is the same as the velocity averaged over time – that is to say, its time-weighted average, which may be calculated as the time integral of the velocity: ${\boldsymbol {\bar {v}}}={1 \over t_{1}-t_{0}}\int _{t_{0}}^{t_{1}}{\boldsymbol {v}}(t)\ dt,$ where we may identify $\Delta {\boldsymbol {x}}=\int _{t_{0}}^{t_{1}}{\boldsymbol {v}}(t)\ dt$ and $\Delta t=t_{1}-t_{0}.$ Special cases • When a particle moves with different uniform speeds v1, v2, v3, ..., vn in different time intervals t1, t2, t3, ..., tn respectively, then the average speed over the total time of the journey is given as ${\boldsymbol {\bar {v}}}={v_{1}t_{1}+v_{2}t_{2}+v_{3}t_{3}+\dots +v_{n}t_{n} \over t_{1}+t_{2}+t_{3}+\dots +t_{n}}$ If t1 = t2 = t3 = ... = t, then the average speed is given by the arithmetic mean of the speeds ${\boldsymbol {\bar {v}}}={v_{1}+v_{2}+v_{3}+\dots +v_{n} \over n}$ • When a particle moves different distances s1, s2, s3, ..., sn with speeds v1, v2, v3, ..., vn respectively, then the average speed of the particle over the total distance is given as ${\boldsymbol {\bar {v}}}={s_{1}+s_{2}+s_{3}+\dots +s_{n} \over t_{1}+t_{2}+t_{3}+\dots +t_{n}}={{s_{1}+s_{2}+s_{3}+\dots +s_{n}} \over {{s_{1} \over v_{1}}+{s_{2} \over v_{2}}+{s_{3} \over v_{3}}+\dots +{s_{n} \over v_{n}}}}$ If s1 = s2 = s3 = ... = s, then the average speed is given by the harmonic mean of the speeds ${\boldsymbol {\bar {v}}}=n\left({1 \over v_{1}}+{1 \over v_{2}}+{1 \over v_{3}}+\dots +{1 \over v_{n}}\right)^{-1}$ Instantaneous velocity If we consider v as velocity and x as the displacement (change in position) vector, then we can express the (instantaneous) velocity of a particle or object, at any particular time t, as the derivative of the position with respect to time: ${\boldsymbol {v}}=\lim _{{\Delta t}\to 0}{\frac {\Delta {\boldsymbol {x}}}{\Delta t}}={\frac {d{\boldsymbol {x}}}{dt}}.$ From this derivative equation, in the one-dimensional case it can be seen that the area under a velocity vs. time (v vs. t) graph is the displacement, x. In calculus terms, the integral of the velocity function v(t) is the displacement function x(t). In the figure, this corresponds to the yellow area under the curve labeled s (s being an alternative notation for displacement). ${\boldsymbol {x}}=\int {\boldsymbol {v}}\ dt.$ Since the derivative of the position with respect to time gives the change in position (in metres) divided by the change in time (in seconds), velocity is measured in metres per second (m/s).
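The secant/tangent picture above can be made concrete with a small numerical sketch; the toy trajectory x(t) = t² (so v(t) = 2t) is an arbitrary choice for illustration:

#include <cstdio>

double x_of_t(double t) { return t * t; }

int main() {
    double t0 = 1.0, t1 = 2.0;
    // average velocity = secant slope over [t0, t1]; here (4 - 1)/1 = 3
    double v_avg = (x_of_t(t1) - x_of_t(t0)) / (t1 - t0);
    std::printf("average velocity on [1,2]: %f\n", v_avg);
    // shrinking the interval recovers the derivative, v(1) = 2
    for (double dt : {1e-1, 1e-3, 1e-6}) {
        double v_inst = (x_of_t(t0 + dt) - x_of_t(t0)) / dt;
        std::printf("dt = %.0e: %f\n", dt, v_inst);
    }
}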
Although the concept of an instantaneous velocity might at first seem counter-intuitive, it may be thought of as the velocity that the object would continue to travel at if it stopped accelerating at that moment. Relationship to acceleration Although velocity is defined as the rate of change of position, it is common to start with an expression for an object's acceleration. As seen by the three green tangent lines in the figure, an object's instantaneous acceleration at a point in time is the slope of the line tangent to the curve of a v(t) graph at that point. In other words, acceleration is defined as the derivative of velocity with respect to time: ${\boldsymbol {a}}={\frac {d{\boldsymbol {v}}}{dt}}.$ From there, we can obtain an expression for velocity as the area under an a(t) acceleration vs. time graph. As above, this is done using the concept of the integral: ${\boldsymbol {v}}=\int {\boldsymbol {a}}\ dt.$ Constant acceleration In the special case of constant acceleration, velocity can be studied using the suvat equations. By considering a as being equal to some arbitrary constant vector, it is trivial to show that ${\boldsymbol {v}}={\boldsymbol {u}}+{\boldsymbol {a}}t$ with v as the velocity at time t and u as the velocity at time t = 0. By combining this equation with the suvat equation x = ut + at²/2, it is possible to relate the displacement and the average velocity by ${\boldsymbol {x}}={\frac {({\boldsymbol {u}}+{\boldsymbol {v}})}{2}}t={\boldsymbol {\bar {v}}}t.$ It is also possible to derive an expression for the velocity independent of time, known as the Torricelli equation, as follows: $v^{2}={\boldsymbol {v}}\cdot {\boldsymbol {v}}=({\boldsymbol {u}}+{\boldsymbol {a}}t)\cdot ({\boldsymbol {u}}+{\boldsymbol {a}}t)=u^{2}+2t({\boldsymbol {a}}\cdot {\boldsymbol {u}})+a^{2}t^{2}$ $(2{\boldsymbol {a}})\cdot {\boldsymbol {x}}=(2{\boldsymbol {a}})\cdot ({\boldsymbol {u}}t+{\tfrac {1}{2}}{\boldsymbol {a}}t^{2})=2t({\boldsymbol {a}}\cdot {\boldsymbol {u}})+a^{2}t^{2}=v^{2}-u^{2}$ $\therefore v^{2}=u^{2}+2({\boldsymbol {a}}\cdot {\boldsymbol {x}})$ where v = |v| etc. The above equations are valid for both Newtonian mechanics and special relativity. Where Newtonian mechanics and special relativity differ is in how different observers would describe the same situation. In particular, in Newtonian mechanics, all observers agree on the value of t, and the transformation rules for position create a situation in which all non-accelerating observers would describe the acceleration of an object with the same values. Neither is true for special relativity. In other words, only relative velocity can be calculated.
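As a quick arithmetic check of the Torricelli relation derived above (the values are invented for illustration): a particle with initial speed $u=3{\text{ m/s}}$, constant acceleration $a=2{\text{ m/s}}^{2}$ along its direction of motion, and displacement $x=4{\text{ m}}$ reaches $v^{2}=3^{2}+2\cdot 2\cdot 4=25$, i.e. $v=5{\text{ m/s}}$.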
Quantities that are dependent on velocity The kinetic energy of a moving object is dependent on its velocity and is given (ignoring special relativity) by the equation $E_{\text{k}}={\tfrac {1}{2}}mv^{2}$ where Ek is the kinetic energy and m is the mass. Kinetic energy is a scalar quantity as it depends on the square of the velocity; however, a related quantity, momentum, is a vector and defined by ${\boldsymbol {p}}=m{\boldsymbol {v}}$ In special relativity, the dimensionless Lorentz factor appears frequently, and is given by $\gamma ={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}$ where γ is the Lorentz factor and c is the speed of light. Escape velocity is the minimum speed a ballistic object needs to escape from a massive body such as Earth. It represents the kinetic energy that, when added to the object's gravitational potential energy (which is always negative), is equal to zero. The general formula for the escape velocity of an object at a distance r from the center of a planet with mass M is $v_{\text{e}}={\sqrt {\frac {2GM}{r}}}={\sqrt {2gr}},$ where G is the gravitational constant and g is the gravitational acceleration at that distance. The escape velocity from Earth's surface is about 11 200 m/s and is independent of the direction of the object's motion. This makes "escape velocity" somewhat of a misnomer, as the more correct term would be "escape speed": any object attaining a velocity of that magnitude, irrespective of atmosphere, will leave the vicinity of the base body as long as it does not intersect with something in its path. Relative velocity Relative velocity is a measurement of velocity between two objects as determined in a single coordinate system. Relative velocity is fundamental in both classical and modern physics, since many systems in physics deal with the relative motion of two or more particles. In Newtonian mechanics, the relative velocity is independent of the chosen inertial reference frame. This is no longer the case with special relativity, in which velocities depend on the choice of reference frame. If an object A is moving with velocity vector v and an object B with velocity vector w, then the velocity of object A relative to object B is defined as the difference of the two velocity vectors: ${\boldsymbol {v}}_{A{\text{ relative to }}B}={\boldsymbol {v}}-{\boldsymbol {w}}$ Similarly, the relative velocity of object B moving with velocity w, relative to object A moving with velocity v, is: ${\boldsymbol {v}}_{B{\text{ relative to }}A}={\boldsymbol {w}}-{\boldsymbol {v}}$ Usually, the inertial frame chosen is that in which the latter of the two mentioned objects is at rest. Scalar velocities In the one-dimensional case,[3] the velocities are scalars and the equation is either: $v_{\text{rel}}=v-(-w),$ if the two objects are moving in opposite directions, or: $v_{\text{rel}}=v-(+w),$ if the two objects are moving in the same direction. Polar coordinates See also: Circular motion § In polar coordinates; and Radial, transverse, normal In polar coordinates, a two-dimensional velocity is described by a radial velocity, defined as the component of velocity away from or toward the origin (also known as "velocity made good"), and a transverse velocity, perpendicular to the radial one. Both arise from angular velocity, which is the rate of rotation about the origin (with positive quantities representing counter-clockwise rotation and negative quantities representing clockwise rotation, in a right-handed coordinate system). The radial and transverse velocities can be derived from the Cartesian velocity and displacement vectors by decomposing the velocity vector into radial and transverse components. The transverse velocity is the component of velocity along a circle centered at the origin. ${\boldsymbol {v}}={\boldsymbol {v}}_{T}+{\boldsymbol {v}}_{R}$ where • ${\boldsymbol {v}}_{T}$ is the transverse velocity • ${\boldsymbol {v}}_{R}$ is the radial velocity. The radial speed (or magnitude of the radial velocity) is the dot product of the velocity vector and the unit vector in the radial direction: $v_{R}={\frac {{\boldsymbol {v}}\cdot {\boldsymbol {r}}}{\left|{\boldsymbol {r}}\right|}}={\boldsymbol {v}}\cdot {\hat {\boldsymbol {r}}}$ where ${\boldsymbol {r}}$ is position and ${\hat {\boldsymbol {r}}}$ is the radial direction.
The transverse speed (or magnitude of the transverse velocity) is the magnitude of the cross product of the unit vector in the radial direction and the velocity vector. It is also the dot product of velocity and transverse direction, or the product of the angular speed $\omega $ and the radius (the magnitude of the position): $v_{T}={\frac {|{\boldsymbol {r}}\times {\boldsymbol {v}}|}{|{\boldsymbol {r}}|}}={\boldsymbol {v}}\cdot {\hat {\boldsymbol {t}}}=\omega |{\boldsymbol {r}}|$ such that $\omega ={\frac {|{\boldsymbol {r}}\times {\boldsymbol {v}}|}{|{\boldsymbol {r}}|^{2}}}.$ Angular momentum in scalar form is the mass times the distance to the origin times the transverse velocity, or equivalently, the mass times the distance squared times the angular speed. The sign convention for angular momentum is the same as that for angular velocity. $L=mrv_{T}=mr^{2}\omega $ where • $m$ is mass • $r=|{\boldsymbol {r}}|.$ The expression $mr^{2}$ is known as the moment of inertia. If forces are in the radial direction only, with an inverse square dependence, as in the case of a gravitational orbit, angular momentum is constant, and transverse speed is inversely proportional to the distance, angular speed is inversely proportional to the distance squared, and the rate at which area is swept out is constant. These relations are known as Kepler's laws of planetary motion.
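A minimal numerical sketch of the radial/transverse split above for a 2D state; the struct and values are arbitrary choices for this example:

#include <cmath>
#include <cstdio>

struct Vec2 { double x, y; };

int main() {
    Vec2 r{3.0, 4.0};   // position, |r| = 5
    Vec2 v{1.0, 2.0};   // velocity
    double rlen  = std::hypot(r.x, r.y);
    double vr    = (v.x * r.x + v.y * r.y) / rlen;  // radial speed v·r/|r|
    double cross = r.x * v.y - r.y * v.x;           // z-component of r x v
    double vt    = std::fabs(cross) / rlen;         // transverse speed |r x v|/|r|
    double omega = cross / (rlen * rlen);           // signed angular speed
    std::printf("v_R = %f, v_T = %f, omega = %f\n", vr, vt, omega);
}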
See also • Four-velocity (relativistic version of velocity for Minkowski spacetime) • Group velocity • Hypervelocity • Phase velocity • Proper velocity (in relativity, using traveler time instead of observer time) • Rapidity (a version of velocity additive at relativistic speeds) • Terminal velocity • Velocity vs. time graph Notes 1. Rowland, Todd (2019). "Velocity Vector". Wolfram MathWorld. Retrieved 2 June 2019. 2. "Origin of the speed/velocity terminology". History of Science and Mathematics Stack Exchange. Retrieved 12 June 2023. Introduction of the speed/velocity terminology by Prof. Tait, in 1882. 3. Basic principle References • Robert Resnick and Jearl Walker, Fundamentals of Physics, Wiley; 7 Sub edition (June 16, 2004). ISBN 0-471-23231-9. External links • Velocity and Acceleration • Introduction to Mechanisms (Carnegie Mellon University)
William Kahan William "Velvel" Morton Kahan (born June 5, 1933) is a Canadian mathematician and computer scientist, who received the Turing Award in 1989 for "his fundamental contributions to numerical analysis",[2] was named an ACM Fellow in 1994,[2] and inducted into the National Academy of Engineering in 2005.[2] William Morton Kahan Kahan in 2008 Born (1933-06-05) June 5, 1933 Toronto, Ontario, Canada NationalityCanadian Alma materUniversity of Toronto Known forIEEE 754 Kahan summation algorithm AwardsTuring Award (1989) IEEE Emanuel R. Piore Award[1] (2000) National Academy of Engineering ACM Fellow Scientific career FieldsMathematics Computer Science InstitutionsUniversity of California, Berkeley ThesisGauss–Seidel Methods Of Solving Large Systems Of Linear Equations (1958) Doctoral advisorByron Alexander Griffith Doctoral studentsJames Demmel Biography Born to a Canadian Jewish family,[2] he attended the University of Toronto, where he received his bachelor's degree in 1954, his master's degree in 1956, and his Ph.D. in 1958, all in the field of mathematics. Kahan is now emeritus professor of mathematics and of electrical engineering and computer sciences (EECS) at the University of California, Berkeley. Kahan was the primary architect behind the IEEE 754-1985 standard for floating-point computation (and its radix-independent follow-on, IEEE 854). He has been called "The Father of Floating Point", since he was instrumental in creating the original IEEE 754 specification.[2] Kahan continued his contributions to the IEEE 754 revision that led to the current IEEE 754 standard. In the 1980s he developed the program "paranoia", a benchmark that tests for a wide range of potential floating-point bugs.[3] He also developed the Kahan summation algorithm, an important algorithm for minimizing error introduced when adding a sequence of finite-precision floating-point numbers. He coined the term "Table-maker's dilemma" for the unknown cost of correctly rounding transcendental functions to some preassigned number of digits.[4] The Davis–Kahan–Weinberger dilation theorem is one of the landmark results in the dilation theory of Hilbert space operators and has found applications in many different areas.[5] He is an outspoken advocate of better education of the general computing population about floating-point issues and regularly denounces decisions in the design of computers and programming languages that he believes would impair good floating-point computations.[6][7][8] When Hewlett-Packard (HP) introduced the original HP-35 pocket scientific calculator, its numerical accuracy in evaluating transcendental functions for some arguments was not optimal. HP worked extensively with Kahan to enhance the accuracy of the algorithms, which led to major improvements. This was documented at the time in the Hewlett-Packard Journal.[9][10] He also contributed substantially to the design of the algorithms in the HP Voyager series and wrote part of their intermediate and advanced manuals. See also • Intel 8087 References 1. "IEEE Emanuel R. Piore Award Recipients" (PDF). IEEE. Archived from the original (PDF) on November 24, 2010. Retrieved March 20, 2021. 2. Haigh, Thomas (1989). "William ("Velvel") Morton Kahan". A. M. Turing Award. Retrieved 2017-05-27. 3. Karpinski, Richard (1985), "Paranoia: A floating-point benchmark", Byte Magazine, 10 (2): 223–235 4. Kahan, William. "A Logarithm Too Clever by Half". Retrieved 2008-11-14. 5. Davis, Chandler; Kahan, W. M.; Weinberger, H. F. (1982). 
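The compensated-summation idea mentioned above fits in a few lines; this is the standard textbook formulation of the algorithm, sketched here for illustration (the function name is arbitrary):

#include <vector>

// Kahan (compensated) summation: the running compensation c captures
// the low-order bits that plain addition would lose at each step.
// Note: aggressive floating-point optimization flags can defeat this.
double kahan_sum(const std::vector<double>& xs) {
    double sum = 0.0, c = 0.0;
    for (double x : xs) {
        double y = x - c;    // reintroduce previously lost low-order bits
        double t = sum + y;  // big + small: low-order bits of y may be lost
        c = (t - sum) - y;   // algebraically zero; numerically, the lost bits
        sum = t;
    }
    return sum;
}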
"Norm-Preserving Dilations and Their Applications to Optimal Error Bounds". SIAM Journal on Numerical Analysis. 19 (3): 445–469. Bibcode:1982SJNA...19..445D. doi:10.1137/0719029. hdl:10338.dmlcz/128534. 6. Kahan, William (1 March 1998). "How Java's Floating-Point Hurts Everyone Everywhere" (PDF). Retrieved 1 March 2021. 7. Haigh, Thomas (March 2016). "An interview with William M. Kahan" (PDF). Retrieved 1 March 2021. 8. Kahan, William (31 July 2004). "Matlab's Loss is Nobody's Gain" (PDF). Retrieved 1 March 2021. 9. Kahan, William M. (December 1979). "Personal Calculator Has Key to Solve Any Equation f(x) = 0" (PDF). Hewlett-Packard Journal. 30 (12): 20–26. Retrieved 2023-06-16. 10. Kahan, William M. (August 1980). "Handheld Calculator Evaluates Integrals" (PDF). Hewlett-Packard Journal. 31 (8): 23–32. Retrieved 2023-06-16. External links • William Kahan's home page • An oral history of William Kahan, Revision 1.1, March, 2016 • William Kahan at the Mathematics Genealogy Project • A Conversation with William Kahan, Dr. Dobb's Journal , November 1, 1997 • An Interview with the Old Man of Floating-Point, February 20, 1998 • IEEE 754 An Interview with William Kahan April, 1998 • Paranoia source code in multiple languages • Paranoia for modern graphics processing units (GPUs) • 754-1985 - IEEE Standard for Binary Floating-Point Arithmetic, 1985, Superseded by IEEE Std 754-2008 A. M. Turing Award laureates 1960s • Alan Perlis (1966) • Maurice Vincent Wilkes (1967) • Richard Hamming (1968) • Marvin Minsky (1969) 1970s • James H. Wilkinson (1970) • John McCarthy (1971) • Edsger W. Dijkstra (1972) • Charles Bachman (1973) • Donald Knuth (1974) • Allen Newell; Herbert A. Simon (1975) • Michael O. Rabin; Dana Scott (1976) • John Backus (1977) • Robert W. Floyd (1978) • Kenneth E. Iverson (1979) 1980s • Tony Hoare (1980) • Edgar F. Codd (1981) • Stephen Cook (1982) • Ken Thompson; Dennis Ritchie (1983) • Niklaus Wirth (1984) • Richard Karp (1985) • John Hopcroft; Robert Tarjan (1986) • John Cocke (1987) • Ivan Sutherland (1988) • William Kahan (1989) 1990s • Fernando J. Corbató (1990) • Robin Milner (1991) • Butler Lampson (1992) • Juris Hartmanis; Richard E. Stearns (1993) • Edward Feigenbaum; Raj Reddy (1994) • Manuel Blum (1995) • Amir Pnueli (1996) • Douglas Engelbart (1997) • Jim Gray (1998) • Fred Brooks (1999) 2000s • Andrew Yao (2000) • Ole-Johan Dahl; Kristen Nygaard (2001) • Ron Rivest; Adi Shamir; Leonard Adleman (2002) • Alan Kay (2003) • Vint Cerf; Bob Kahn (2004) • Peter Naur (2005) • Frances Allen (2006) • Edmund M. Clarke; E. Allen Emerson; Joseph Sifakis (2007) • Barbara Liskov (2008) • Charles P. Thacker (2009) 2010s • Leslie G. Valiant (2010) • Judea Pearl (2011) • Shafi Goldwasser; Silvio Micali (2012) • Leslie Lamport (2013) • Michael Stonebraker (2014) • Martin Hellman; Whitfield Diffie (2015) • Tim Berners-Lee (2016) • John L. Hennessy; David Patterson (2017) • Yoshua Bengio; Geoffrey Hinton; Yann LeCun (2018) • Ed Catmull; Pat Hanrahan (2019) 2020s • Alfred Aho; Jeffrey Ullman (2020) • Jack Dongarra (2021) • Robert Metcalfe (2022) John von Neumann Lecturers • Lars Ahlfors (1960) • Mark Kac (1961) • Jean Leray (1962) • Stanislaw Ulam (1963) • Solomon Lefschetz (1964) • Freeman Dyson (1965) • Eugene Wigner (1966) • Chia-Chiao Lin (1967) • Peter Lax (1968) • George F. Carrier (1969) • James H. Wilkinson (1970) • Paul Samuelson (1971) • Jule Charney (1974) • James Lighthill (1975) • René Thom (1976) • Kenneth Arrow (1977) • Peter Henrici (1978) • Kurt O. 
Veniamin Myasnikov Veniamin Petrovich Myasnikov (4 December 1936 – 29 February 2004) was a Soviet and Russian mathematician and mechanician, and a member of the Russian Academy of Sciences (1992). He was awarded the Order of Honour in 1997. Biography Myasnikov was born in Moscow (USSR) in 1936. He was educated at the Faculty of Mechanics and Mathematics of Moscow State University (MSU), graduating in 1959.[1] He established the Department of Computational Mechanics at the MSU Faculty of Mechanics and Mathematics and headed it from 1998 to 2000.[2] On the recommendation of E. Zolotov, Myasnikov moved to Vladivostok and served as Director of the IACP of the Far-Eastern Branch of the Soviet Academy of Sciences (1988–2004).[3] In 1992 Myasnikov was elected a member of the Russian Academy of Sciences.[4] Main research fields • Fluid dynamics • Mathematical theory of plasticity • Geomechanics Family life Veniamin Myasnikov died of cancer in Moscow in 2004, aged 67, and was buried at Vostryakovskoe cemetery.[5] He is survived by his wife Svetlana Grigorievna and two children, a daughter Anna and a son. His father Peter Veniaminovich Myasnikov and mother Varvara Akimovna Myasnikova were graduates of MSU, where his father was later a professor in the Department of Analytical Mechanics. Awards and honours Myasnikov was awarded the Order of Honour in 1997 for achievements in scientific research resulting in significant Russian scientific and technological advances in mechanics and technology. Honorary Professor of MSU (2000). References 1. Department of MSU (in Russian) 2. History of MSU (in Russian) 3. IACP website 4. RAS website 5. V. Myasnikov's grave (in Russian) Bibliography • Мясников В. П., Фадеев В. Е. Гидродинамические модели эволюции планет земной группы. — М.: Наука, 1979. — 231 с. (in Russian) • Мосолов П. П., Мясников В. П. Механика жесткопластических сред. — М.: Наука, 1981. (in Russian) • Мясников В. П., Гордин В. М., Михайлов В. О., Новиков В. Л., Сазонов Ю. В. Геомеханические модели как основа комплексной историко-генетической интерпретации геофизических данных. // В кн.: Методика комплексного изучения тектоносферы (под ред. В. В. Белоусова). — М.: Радио и связь, 1984. — с. 99–110. (in Russian) • В. П. Маслов, В. П. Мясников, В. Г. Данилов. Математическое моделирование аварийного блока Чернобыльской АЭС. — М.: Наука, Глав. ред. физико-математической лит-ры, 1988. (in Russian) • Myasnikov V.P., Guzev M.A. Thermo-mechanical model of elastic-plastic materials with defect structures. Theoretical and Applied Fracture Mechanics. 2000, V. 33, p. 165–171. • Myasnikov V.P., Guzev M.A., Ushakov A.A. Self-equilibrated stress fields in a continuous medium. Journal of Applied Mechanics and Technical Physics. 2004, V. 45, N 4, p. 558–566.
Venterus Mandey Venterus Mandey (1646–1702), also called Venturus Mandey and Venteri Mandey, was an English bricklayer and mathematician.[1] References 1. Smith 2003. Sources • Smith, Terence Paul (February 2003). "Venturus Mandey: No Ordinary Bricklayer". Information, 90, Bricklaying Issue. British Brick Society. pp. 16–19. Further reading • Betts, Jonathan; McEvoy, Rory, eds. (2020). Harrison Decoded: Towards a Perfect Pendulum Clock. Oxford University Press. pp. 128–129. External links • "Mandey, Venterus". Online Books Page. • Works related to Venterus Mandey at Wikisource
Vera Huckel Vera Huckel (1908 – March 24, 1999) was an American mathematician and aerospace engineer and one of the first female "computers" at NACA, now NASA, where she mainly worked in the Dynamic Loads Division.[1] Life and work Huckel was born in 1908 and studied math at the University of Pennsylvania, graduating in 1929.[2] After living in California for ten years, she visited a friend in Newport News and was hired as a "junior computer", doing mathematical calculations for other researchers for $1,440 a year (a man with her background typically earned about $2,000 a year). Before the invention of electronic computers, these so-called "computers", who were mostly women, would do the time-consuming calculations necessary for successful flights.[3] Huckel became one of the first female engineers at NASA and wrote the program for its first electronic computer.[2] She also worked as a supervisory mathematician and aerospace engineer during her time at NACA/NASA. By 1945 she had been promoted to section head in charge of up to 17 other women.[1] She was involved in helping researchers make the switch from using slide rules for their complex calculations to supercomputers. She also worked on theories of aerodynamics. As a mathematician, she was involved in the testing of sonic booms in supersonic flight.[1][3] Huckel retired from NASA in 1972 after working there for more than 33 years.[2][3] She was active in the Soroptimist organization and the AAUW, and volunteered with the Hampton United Way.[2] Huckel died at 90 years of age on March 24, 1999, in Newport News, Virginia, where she had lived for more than 60 years. She was buried in West Laurel Hill Cemetery in Bala Cynwyd, Pennsylvania.[2][4] Selected publications • Morgan, Homer G., Harry L. Runyan, and Vera Huckel. "Theoretical considerations of flutter at high Mach numbers." Journal of the Aerospace Sciences 25, no. 6 (1958): 371–381. • Morgan, Homer G., Vera Huckel, and Harry L. Runyan. Procedure for calculating flutter at high supersonic speed including camber deflections, and comparison with experimental results. No. NACA-TN-4335. 1958. • Hilton, David Artland, Vera Huckel, Domenic J. Maglieri, and R. Steiner. Sonic-boom exposures during FAA community response studies over a 6-month period in the Oklahoma City area. No. NASA-TN-D-2539. 1964. • Hilton, David A., Vera Huckel, and Domenic J. Maglieri. Sonic-boom measurements during bomber training operations in the Chicago area. Vol. 3655. National Aeronautics and Space Administration, 1966. References 1. "The Women of NASA - National Women's History Museum". Google Arts & Culture. Retrieved 2021-09-21. 2. "Daily Press: Hampton Roads News, Virginia News & Videos". dailypress.com. Retrieved 2021-09-21. 3. "50 Years: Flying High in a Man's World". Daily Press. Retrieved 2022-07-26. 4. "Women's Activism NYC". womensactivism.nyc. Retrieved 2021-09-21. See also • Harvard Computers
Vera Myller Vera Myller-Lebedev (1 December 1880 – 12 December 1970) was a Russian Empire-born mathematician who earned her doctorate in Germany with David Hilbert and became the first female university professor in Romania. Education Vera Lebedev was born in Saint Petersburg and educated in Novgorod. From 1897 through 1902 she participated in the Bestuzhev Courses in Saint Petersburg.[1] She then traveled to the University of Göttingen, where she completed a doctorate in 1906 under the supervision of David Hilbert. Her dissertation was Die Theorie der Integralgleichungen in Anwendungen auf einige Reihenentwickelungen, and concerned integral equations.[2] Marriage and career In Göttingen, she met Romanian mathematician Alexandru Myller.[1] She married him in 1907,[3] returned with him to the University of Iași, and in 1910 joined the mathematics faculty there. In 1918 she was promoted to full professor,[1][3] becoming Romania's first female professor.[3][4] She died in Iași in 1970, and is buried at the city's Eternitatea Cemetery.[5] Contributions She wrote Romanian-language textbooks on algebra (1942) and algebraic applications of group theory (1945),[4] and won the Romanian State Prize in 1953 for her algebra text.[1] References 1. Myller, Vera (1880 – 1970), Digital Mechanism and Gear Library, retrieved 18 November 2018 2. Vera Myller at the Mathematics Genealogy Project 3. Corduneanu, Constantin (2011), "The centennial of a Romanian mathematical school", Alexandru Myller Mathematical Seminar Centennial Conference, AIP Conference Proceedings, vol. 1329, pp. 3–15, doi:10.1063/1.3546071, ISBN 978-0-7354-0884-5 4. Myller-Lebedev Vera (1880-1970), Central Library of the University of Iași, retrieved 18 November 2018 5. "Vera Myller, prima femeie profesor universitar din România" ("Vera Myller, the first female university professor in Romania"), iasimulticultural.ro (in Romanian), 2018, retrieved 31 October 2020
Vera Kublanovskaya

Vera Nikolaevna Kublanovskaya (née Totubalina; November 21, 1920 – February 21, 2012[1]) was a Russian mathematician noted for her work on developing computational methods for solving spectral problems of algebra. She proposed the QR algorithm for computing eigenvalues and eigenvectors in 1961, which has been named one of the ten most important algorithms of the twentieth century.[2] The algorithm was proposed independently by the English computer scientist John G. F. Francis in 1959.

Early life

Kublanovskaya was born in November 1920 in Krokhona, a village near Belozersk in Vologda Oblast, Russia, into a farming and fishing family, one of nine siblings. She died in February 2012, at the age of 91.[3]

Education

Kublanovskaya started her tertiary education in 1939 at the Gertzen Pedagogical Institute in Leningrad.[4] There, she was encouraged to pursue a career in mathematics. She moved on to study mathematics at Leningrad State University in 1945 and graduated in 1948. Following her graduation, she joined the Leningrad Branch of the Steklov Mathematical Institute of the USSR Academy of Sciences, where she remained for 64 years. In 1955, she received her first doctorate, on the application of analytic continuation to numerical methods. In 1972 she obtained her second doctorate, on the use of orthogonal transformations to solve algebraic problems. In October 1985, she was awarded an honorary doctorate by Umeå University, Sweden, with which she had collaborated.[4]

Scientific works

During her first doctorate, she joined Leonid Kantorovich's group, which was working on developing a universal computer language in the USSR. Her task was to select and classify matrix operations that are useful in numerical linear algebra. Her subsequent work was foundational for later mathematical research and software development. She is profiled in the Book of Proofs.[5] A sketch of the QR iteration at the core of her best-known contribution appears after the references below.

Publications
• On some algorithms for the solution of the complete eigenvalue problem[6]
• On a method of solving the complete eigenvalue problem for a degenerate matrix[7]
• Methods and algorithms of solving spectral problems for polynomial and rational matrices[8]
• To solving problems of algebra for two-parameter matrices. V[9]
• To solving problems of algebra for two-parameter matrices. IX[10]

Notes
1. Obituaries: Vera Nikolaevna Kublanovskaya, July 17, 2012
2. Dongarra & Sullivan (2000)
3. "Obituaries: Vera Nikolaevna Kublanovskaya". SIAM News. Retrieved 2020-03-07.
4. "Vera Nikolaevna Kublanovskaya". MacTutor. Retrieved 29 January 2021.
5. "Vera Nikolaevna Kublanovskaya". www.bookofproofs.org. Retrieved 2020-03-07.
6. Kublanovskaya, V. N. (1962-01-01). "On some algorithms for the solution of the complete eigenvalue problem". USSR Computational Mathematics and Mathematical Physics. 1 (3): 637–657. doi:10.1016/0041-5553(63)90168-X. ISSN 0041-5553.
7. Kublanovskaya, V. N. (1966-01-01). "On a method of solving the complete eigenvalue problem for a degenerate matrix". USSR Computational Mathematics and Mathematical Physics. 6 (4): 1–14. doi:10.1016/0041-5553(66)90001-2. ISSN 0041-5553.
8. Kublanovskaya, V. N. (1999-09-01). "Methods and algorithms of solving spectral problems for polynomial and rational matrices". Journal of Mathematical Sciences. 96 (3): 3085–3287. doi:10.1007/BF02168360. ISSN 1573-8795. S2CID 120403984.
9. Kublanovskaya, V. N. (2010-03-01). "To solving problems of algebra for two-parameter matrices. V". Journal of Mathematical Sciences. 165 (5): 574–588. doi:10.1007/s10958-010-9827-y. ISSN 1573-8795. S2CID 189871368.
10. Kublanovskaya, V. N. (2012-05-01). "To solving problems of algebra for two-parameter matrices. IX". Journal of Mathematical Sciences. 182 (6): 814–822. doi:10.1007/s10958-012-0789-0. ISSN 1573-8795. S2CID 189871944.

References
• Dongarra, Jack J.; Sullivan, Francis (2000), "Guest editors' introduction: The top 10 algorithms", Computing in Science & Engineering, 2 (1): 22–23, Bibcode:2000CSE.....2a..22D, doi:10.1109/MCISE.2000.814652, ISSN 1521-9615.
• Golub, Gene H.; Uhlig, Frank (2009), "The QR algorithm: 50 years later – its genesis by John Francis and Vera Kublanovskaya, and subsequent developments", IMA Journal of Numerical Analysis, 29 (3): 467–485, doi:10.1093/imanum/drp012, ISSN 0272-4979.
• Kon'kova, Ya.; Simonova, V.N.; Khazanov, V.B. (2000), "Vera Nikolaevna Kublanovskaya. Short Biography", Journal of Mathematical Sciences, 114 (6): 1755–56, doi:10.1023/A:1022491200674, S2CID 118551402.

External links
• MacTutor History of Mathematics biography
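The following is a minimal sketch of the unshifted QR iteration that underlies the algorithm proposed by Kublanovskaya and Francis; it is illustrative only and not taken from the sources above (practical implementations add Hessenberg reduction and shifts):

```python
# Sketch of the unshifted QR iteration: repeatedly factor A = QR and form
# RQ. Since RQ = Q^T A Q, each iterate is similar to A; for well-behaved
# (here, real symmetric) matrices the iterates approach a triangular form
# whose diagonal carries the eigenvalues.
import numpy as np

def qr_iteration_eigenvalues(A, iterations=500):
    A = np.array(A, dtype=float)
    for _ in range(iterations):
        Q, R = np.linalg.qr(A)   # A_k = Q_k R_k
        A = R @ Q                # A_{k+1} = R_k Q_k, similar to A_k
    return np.sort(np.diag(A))

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(qr_iteration_eigenvalues(A))      # approximate eigenvalues
print(np.sort(np.linalg.eigvalsh(A)))   # reference values for comparison
```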
Vera Nikolaevna Maslennikova

Vera Nikolaevna Maslennikova (Russian: Вера Николаевна Масленникова; 29 April 1926 – 14 August 2000) was a Russian mathematician known for her contributions to the theory of partial differential equations.[3]

Born: 29 April 1926, Priluki, Russia. Died: 14 August 2000 (aged 74). Nationality: Russian. Known for: hydrodynamics, functional analysis. Awards: State Prize of the USSR; Bertrand Bolzano Gold Medal.[1] Fields: partial differential equations. Institutions: Russian Peoples' Friendship University; Mathematical Institute of the Academy of Sciences; Steklov Mathematical Institute.[2] Doctoral advisor: Sergei Sobolev.[2]

Early life and war service

Maslennikova was born on 29 April 1926 in the village of Priluki near Vologda in the former USSR. Little is known about her childhood except that she lost her parents when she was only eleven years old. In 1941 she entered the Moscow Textile Engineering School. Maslennikova served during the Great Patriotic War in the early 1940s in the 413th Independent Antiaircraft Artillery Division in the front-line army. For her service she was awarded the Order of the Patriotic War.[2][3]

Mathematical career

After the war she enrolled in the Faculty of Mechanics and Mathematics at the University of Moscow. She graduated with distinction in 1951, having studied under Alexander Gelfond. Maslennikova then enrolled as a graduate student at the Steklov Institute of Mathematics, advised by Sergei Sobolev. She obtained her doctorate in 1954 on the topic of "fundamental solutions of initial boundary-value problems for systems of hydrodynamics of rotating fluids with regard to compressibility."[2] She then continued research at the Steklov Institute, working there for twenty-two years. In 1975 she became the chair of differential equations and functional analysis at the Patrice Lumumba University and continued working there until her death in 2000.

Research

She worked in the field of partial differential equations, the mathematical hydrodynamics of rotating fluids, and in function spaces, having published more than one hundred and forty research papers.[2][3]

References
1. "Vera Nikolaevna Maslennikova". Biographies of Women Mathematicians. Agnes Scott College. Retrieved 17 March 2012.
2. Bogovskii, M. E.; et al. (1987). "Vera Nikolaevna Maslennikova (on her sixtieth birthday)". Russian Mathematical Surveys. 42 (4): 187–189. Bibcode:1987RuMaS..42..187B. doi:10.1070/RM1987v042n04ABEH001463.
3. Arutyunov, A. V.; et al. (2001). "Vera Nikolaevna Maslennikova (obituary)". Russian Mathematical Surveys. 56 (4): 739–743. Bibcode:2001RuMaS..56..739A. doi:10.1070/RM2001v056n04ABEH000418.
Vera Pawlowsky-Glahn

Vera Pawlowsky-Glahn (born September 25, 1951) is a Spanish-German mathematician. From 2000 until 2018 she was a full professor at the University of Girona, Spain, in the Department of Computer Science, Applied Mathematics, and Statistics; since 2018 she has been an emeritus professor at the same university. She was previously an associate professor at the Universitat Politècnica de Catalunya in Barcelona from 1986 to 2000. Her main areas of research interest include the statistical analysis of compositional data,[3] the algebraic-geometric approach to statistical inference,[4] and spatial cluster analysis.[5] She was the president of the International Association for Mathematical Geosciences (IAMG) during 2008–2012. IAMG awarded her the William Christian Krumbein Medal in 2006[6] and the John Cedric Griffiths Teaching Award in 2008.[7] In 2007, she was selected IAMG Distinguished Lecturer.[8] During the 6th International Workshop on Compositional Data Analysis in June 2015, Pawlowsky-Glahn was appointed president of a commission to formalize the creation of an international organization of scientists interested in the advancement and application of compositional data modeling.[9]

Born: Vera Pawlowsky Glahn, September 25, 1951, Barcelona, Spain. Alma mater: Universitat de Barcelona; Freie Universität Berlin. Known for: compositional data analysis. Spouse: Juan José Egozcue. Children: Tania Monreal-Pawlowsky. Awards: Krumbein Medal,[1] Griffiths Award,[2] Georges Matheron Lectureship Award (2019). Institutions: Universitat de Girona; Universitat Politècnica de Catalunya. Website: ima.udg.edu/~verap/

Education
• PhD, Free University Berlin, 1986
• MSc, University of Barcelona, 1982
• BSc, University of Barcelona, 1980

Books
• Vera Pawlowsky-Glahn, Jean Serra (Editors), 2019. Matheron's Theory of Regionalised Variables.[10] Oxford University Press, 190 p.
• Vera Pawlowsky-Glahn, Juan José Egozcue, Raimon Tolosana-Delgado, 2015. Modelling and Analysis of Compositional Data. Wiley, 256 p.[11]
• Vera Pawlowsky-Glahn, Antonella Buccianti (Editors), 2011. Compositional Data Analysis: Theory and Applications. Wiley, 400 p.[12]
• Vera Pawlowsky-Glahn, Mario Chica-Olmo, Eulogio Pardo-Igúzquiza, 2011. New applications of geomathematics in earth sciences, v. 122, no. 4, Boletín Geológico y Minero, Instituto Geológico y Minero de España, 435 p.[13]
• Antonella Buccianti, G. Mateu-Figueras, Vera Pawlowsky-Glahn (Editors), 2006. Compositional Data Analysis in the Geosciences: From Theory to Practice. Geological Society of London special publication, 212 p.[14]
• Vera Pawlowsky-Glahn and Ricardo A. Olea, 2004. Geostatistical Analysis of Compositional Data. International Association for Mathematical Geosciences, Studies in Mathematical Geosciences, Oxford University Press, 181 p.[15]
• Lucila Candela and Vera Pawlowsky (Editors), 1988. Curso sobre fundamentos de geoestadística [Course on the fundamentals of geostatistics]. Barcelona, Spain, ISBN 84-404-1653-9.

References
1. John Aitchison (2007-02-13). "Vera Pawlowsky-Glahn: 2006 William Christian Krumbein Medal of the International Association for Mathematical Geology". Mathematical Geology. 39: 135–139. doi:10.1007/s11004-006-9068-2. S2CID 122222941.
2. Burger, Heinz H. (December 2011). "2008 John Cedric Griffiths Teaching Award to Vera Pawlowsky". Computers & Geosciences. 37 (12): 1992. Bibcode:2011CG.....37.1992B. doi:10.1016/j.cageo.2011.11.003.
3. Pawlowsky-Glahn, Vera; Buccianti, Antonella (2011-09-19). Compositional Data Analysis: Theory and Applications. ISBN 9780470711354. Retrieved 2016-05-26.
4. Pawlowsky-Glahn, V.; Egozcue, J. J. (2001). "Geometric approach to statistical analysis on the simplex". Stochastic Environmental Research and Risk Assessment. 15 (5): 384–398. doi:10.1007/s004770100077. S2CID 119921010.
5. Vera Pawlowsky-Glahn, Gonzalo Simarro-Grande, Josep Antoni Martín-Fernández, 1997. Spatial cluster analysis using generalized Mahalanobis distance. In Proceedings of the Third Annual Conference of the International Association for Mathematical Geology, Vera Pawlowsky-Glahn (editor), International Center for Numerical Methods in Engineering, Barcelona, Spain, p. 175–191.
6. "11004_2006_9068_Article.dvi" (PDF). Retrieved 2016-05-26.
7. "Vera Pawlowsky-Glahn". IAMG. Retrieved 2016-05-26.
8. "Past Distinguished Lecturers". IAMG. 2015-05-19. Retrieved 2016-05-26.
9. "CoDaWeb". Compositionaldata.com. Retrieved 2016-05-26.
10. Matheron's Theory of Regionalised Variables.
11. Pawlowsky-Glahn, Vera; Egozcue, Juan José; Tolosana-Delgado, Raimon (2015-04-06). Modeling and Analysis of Compositional Data. ISBN 9781118443064. Retrieved 2016-05-26.
12. "Wiley: Compositional Data Analysis: Theory and Applications - Vera Pawlowsky-Glahn, Antonella Buccianti". As.wiley.com. Retrieved 2016-05-26.
13. Pawlowsky-Glahn, Vera; Olmo, Mario Chica; Igúzquiza, Eulogio Pardo (2011). New applications of geomathematics in earth sciences. Retrieved 2016-05-26.
14. Buccianti, Antonella; Mateu-Figueras, G.; Pawlowsky-Glahn, Vera (2007-07-23). Compositional Data Analysis in the Geosciences: From Theory to Practice. ISBN 9781862392052. Retrieved 2016-05-26. Review: https://www.geolsoc.org.uk/Geoscientist/Archive/October-2007/Reviews-October-2007
15. Pawlowsky-Glahn, Vera; Olea, Ricardo A. (2004-06-03). Geostatistical Analysis of Compositional Data. ISBN 9780198038313. Retrieved 2016-05-26.
Vera Faddeeva

Vera Faddeeva (Russian: Вера Николаевна Фаддеева; Vera Nikolaevna Faddeeva; 1906–1983) was a Soviet mathematician. Faddeeva published some of the earliest work in the field of numerical linear algebra. Her 1950 work, Computational methods of linear algebra, was widely acclaimed and she won a USSR State Prize for it. Between 1962 and 1975, she wrote many research papers with her husband, Dmitry Konstantinovich Faddeev. She is remembered as an important Russian mathematician, specializing in linear algebra, who worked in the 20th century.

Born: Vera Nikolaevna Zamyatina, 20 September 1906, Tambov, Russian Empire. Died: 15 April 1983 (aged 76), Leningrad, RSFSR, USSR. Nationality: Soviet. Occupation: mathematician. Years active: 1930–1980. Spouse: Dmitry Konstantinovich Faddeev.

Biography

Vera Nikolaevna Zamyatina (Russian: Вера Николаевна Замятина) was born 20 September 1906 in Tambov, Russia, to Nikolai Zamyatin. She began her higher education in 1927 at the Leningrad State Pedagogical Institute and then transferred in 1928 to Leningrad State University. She graduated in 1930, married Dmitrii Konstantinovich Faddeev, a fellow mathematician, and began work at the Leningrad Board of Weights and Measures, all in the same year. Between 1930 and 1934, she worked at the Leningrad Hydraulic Engineering Institute and simultaneously between 1933 and 1934 served as a junior researcher at the Seismology Institute of the USSR Academy of Sciences. Beginning in 1935, she conducted research under Boris Grigorievich Galerkin at the Leningrad Institute of Constructions for three years. She returned to the Pedagogical Institute to complete her graduate work in 1938, studying for the next three years. In 1942 Faddeeva was appointed as a junior researcher at the Steklov Institute of Mathematics in Leningrad, but had to flee the city during the German invasion. She lived in Kazan with her family until the siege was over in 1944 and they were able to secure permits as academics to return. By 1946, she had completed her thesis, entitled On One Problem, and submitted it to the Department of Mathematical Physics of Leningrad State University. The thesis was accepted and she received the equivalent of a PhD in 1946.[1]

In 1949 she published two papers: The method of lines applied to some boundary problems and On fundamental functions of the operator X^{IV}. The following year, she published a book with a colleague, Mark Konstantinovich Gavurin, which was a series of Bessel function tables, and her most famous work, Computational methods of linear algebra,[1] which was one of the first of its kind in the field.[2][3][notes 1] The book described linear algebra, gave methods for solving linear equations and inverting matrices, and explained how to compute square roots and the eigenvalues and eigenvectors of a matrix.[1] Faddeeva continued working at the Steklov Institute, where she would remain until her retirement; in 1951 she became head of the Laboratory of Numerical Computations. This unit was based on a model unit set up at Leningrad State University by Gavurin with Leonid Vitalyevich Kantorovich in 1948.[1] Computational methods was translated into English in 1959[3] and was widely influential.[4] In 1960, the book was expanded and reprinted in Russian, she was awarded a USSR State Prize, and it was again translated into English, being published in 1963.
Between 1962 and 1974, she worked with her husband compiling a summary of developments being made in linear algebra, which were published in 1975. Faddeeva's last paper, prepared in 1980 for a conference in Warsaw, was entitled Numerical methods of linear algebra in computer formulation and was published posthumously in 1984. Faddeeva died 15 April 1983 in Leningrad.[1]

Personal

Vera Nikolaevna Zamyatina married Dmitry Konstantinovich Faddeev in 1930.[1] Children: Maria (b. 6 October 1931), a chemist; Ludwig (10 March 1934 – 26 February 2017),[5] a mathematician and theoretical physicist; and Michael (28 June 1937 – 30 September 1992), a mathematician.[6]

Selected works
• Faddeeva, Vera Nikolaevna; Benster, Curtis D. (1959). Computational methods of linear algebra. Mineola, New York: Dover Publications. (original Russian published in 1950)
• Faddeeva, V. N. (1968). Numerical Methods and Inequalities in Function Spaces. Providence, Rhode Island: American Mathematical Society. (original Russian published in 1965)
• Faddeeva, V. N. (1970). Automatic Programming, Numerical Methods and Functional Analysis. Providence, Rhode Island: American Mathematical Society. ISBN 978-0-8218-1896-1.
• Faddeeva, Vera Nikolaevna; Matematicheskiĭ institut im. V.A. Steklova (1972). Automatic programming and numerical methods of analysis. New York, New York: Consultants Bureau. ISBN 9780306188183.

References
1. O'Connor, John J.; Robertson, Edmund F., "Vera Nikolaevna Faddeeva", MacTutor History of Mathematics Archive, University of St Andrews
2. Ladyzhenskaya 1994, p. 208.
3. Brezinski & Tournès 2014, p. 118.
4. Brezinski & Wuytack 2012, p. 16.
5. "Autobiography of Ludwig Faddeev". Kowloon, Hong Kong: Shaw Prize. 9 September 2008. Archived from the original on 25 December 2018. Retrieved 9 November 2015.
6. "Комаровское кладбище" ["Komarovo Cemetery"] (in Russian). Retrieved 9 November 2015.

Notes
1. Ladyzhenskaya states that the book was the "first of its kind in this field", but Brezinski & Tournès state she presented the square root method in her book but "attributed it to Banachiewicz". It is clear that hers was not the first work in the field, but it is unclear if she was the first to publish the information.[2][3]

Bibliography
• Brezinski, Claude; Tournès, Dominique (2014). André-Louis Cholesky: Mathematician, Topographer and Army Officer. Switzerland: Springer. ISBN 978-3-319-08135-9.
• Brezinski, C.; Wuytack, L. (2012). Numerical Analysis: Historical Developments in the 20th Century. Amsterdam: Elsevier Science. ISBN 978-0-444-59858-5.
• Ladyzhenskaya, O. A., ed. (1994). Proceedings of the St. Petersburg Mathematical Society. 2 Translations. Vol. 159. Translation editor Simeon Ivanov. Providence, Rhode Island: American Mathematical Society. ISBN 978-0-8218-9593-1.

External links
• WorldCat publications
Verbal subgroup

In mathematics, in the area of abstract algebra known as group theory, a verbal subgroup is a subgroup of a group that is generated by all elements that can be formed by substituting group elements for variables in a given set of words. For example, given the word xy, the corresponding verbal subgroup is generated by the set of all products of two elements in the group, substituting any element for x and any element for y, and hence is the group itself. On the other hand, the verbal subgroup for the set of words $\{x^{2},xy^{2}x^{-1}\}$ is generated by the set of squares and their conjugates. Verbal subgroups are the only fully characteristic subgroups of a free group and therefore represent the generic example of fully characteristic subgroups (Magnus, Karrass & Solitar 2004, p. 75). Another example is the verbal subgroup for $\{x^{-1}y^{-1}xy\}$, which is the derived subgroup. A small computational illustration follows the reference below.

References
• Magnus, Wilhelm; Karrass, Abraham; Solitar, Donald (2004), Combinatorial Group Theory, New York: Dover Publications, ISBN 978-0-486-43830-6, MR 0207802
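As a concrete illustration (a sketch, not drawn from the reference above), the following computes the verbal subgroup of the symmetric group S3 for the single word x²: substituting every group element for x yields the set of squares, and closing that set under multiplication yields the subgroup they generate, which here is the alternating group A3.

```python
# Verbal subgroup of S3 for the word w(x) = x^2, computed by brute force.
from itertools import permutations

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]]; permutations of {0,1,2} stored as tuples
    return tuple(p[i] for i in q)

S3 = list(permutations(range(3)))

# Values of the word x^2 as x ranges over the whole group.
values = {compose(p, p) for p in S3}

# Close the set of values under multiplication; in a finite group this
# closure is exactly the generated subgroup.
subgroup = set(values)
while True:
    new = {compose(a, b) for a in subgroup for b in subgroup} - subgroup
    if not new:
        break
    subgroup |= new

print(sorted(subgroup))   # the identity and the two 3-cycles, i.e. A3
```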
Verena Huber-Dyson

Verena Esther Huber-Dyson (May 6, 1923 – March 12, 2016) was a Swiss-American mathematician, known for her work on group theory and formal logic.[1][2] She has been described as a "brilliant mathematician",[2] who did research on the interface between algebra and logic, focusing on undecidability in group theory. At the time of her death, she was emeritus faculty in the philosophy department of the University of Calgary, Alberta.

Born: Verena Esther Huber, May 6, 1923, Naples, Italy. Died: March 12, 2016 (aged 92),[1] Bellingham, Washington. Other names: Verena Huber, Verena Haefeli. Citizenship: Switzerland; United States. Education: University of Zürich. Known for: group theory, mathematical logic. Spouses: Hans-Georg Haefeli (1942–1948); Freeman Dyson (1950–1958). Children: Katarina Halm, Esther Dyson, George Dyson. Fields: logic, algebra. Institutions: University of California, Berkeley; Adelphi University; University of California, Los Angeles; University of Illinois at Urbana–Champaign; University of Calgary. Thesis: Ein Dualismus als Klassifikationsprinzip in der abstrakten Gruppentheorie [A dualism as a classification principle in abstract group theory] (1947). Doctoral advisor: Andreas Speiser.

Life and career

Family and early life

Huber-Dyson was born Verena Esther Huber in Naples, Italy, on May 6, 1923. Her parents, Karl (Charles) Huber (1893–1946) and Berthy Ryffel (1899–1945), were Swiss nationals[3] who raised Verena and her sister Adelheid ("Heidi", 1925–1987) in Athens, Greece, where the girls attended the German-speaking Deutsche Schule, or German School of Athens, until forced to return to Switzerland in 1940 by the war. Charles Huber, who had managed the Middle Eastern operations of Bühler AG, a Swiss food-process engineering firm, began working for the International Committee of the Red Cross (ICRC), monitoring the treatment of prisoners of war in internment camps. As the ICRC delegate to India and Ceylon, he was responsible for Italian prisoners held in British camps, but also visited German and Allied camps in Europe. In 1945–46 he served as an ICRC delegate to the United States, which he described to Verena as a place she "definitely ought to experience at length and in depth but just as definitely ought not to settle in."[1]

She studied mathematics, with minors in physics and philosophy, at the University of Zurich, where she obtained her Ph.D. in mathematics in 1947 with a thesis in finite group theory[4][5][6] under the supervision of Andreas Speiser.

Children

[External image: Verena Huber-Dyson, New Jersey, 1949.][7]

Verena married Hans-Georg Haefeli, a fellow mathematician, in 1942, and was divorced in 1948.
Her first daughter, Katarina Halm, was born in 1945.[3][8] She subsequently married Freeman Dyson in Ann Arbor, Michigan, on August 11, 1950.[5] They had two children together, Esther Dyson (born July 14, 1951, in Zurich) and George Dyson (born 1953, Ithaca, New York),[2][5] and divorced in 1958.[8]

Career

Huber-Dyson accepted a postdoctoral fellowship at the Institute for Advanced Study in Princeton in 1948,[9] where she worked on group theory and formal logic.[8][10] She also began teaching at Goucher College near Baltimore during this time.[10] She moved to California with her daughter Katarina, began teaching at San Jose State University in 1959, and then joined Alfred Tarski's Group in Logic and the Methodology of Science at the University of California, Berkeley.[8][11]

Huber-Dyson taught at San Jose State University, the University of Zürich, Monash University, as well as at the University of California, Berkeley, Adelphi University, the University of California, Los Angeles, and the University of Illinois at Chicago, in mathematics and in philosophy departments. She accepted a position in the philosophy department of the University of Calgary in 1973, becoming emerita in 1988.[12]

Academic affiliations prior to June 1968
• Cornell University
• Goucher College
• San Jose State University (September 1959)
• Adelphi University
• UCLA
• University of London
• ETH Zürich
• Warwick University
• University of Melbourne
• Monash University
• Australian National University in Canberra
• University of Zürich
• Mills College
• UC Berkeley

Academic affiliations after September 1968
• Department of Mathematics, University of Illinois at Chicago (September 1968 – June 1971), tenure-track Assistant Professor
• Department of Philosophy, University of Calgary (September 1971 – June 1972), non-tenure-track
• Department of Mathematics, University of Illinois at Chicago (September 1972 – June 1973), tenured Associate Professor
• Department of Philosophy, University of Calgary (September 1973 – June 1975), tenure-track Assistant Professor
• Department of Philosophy, University of Calgary (September 1977 – June 1981), tenured Associate Professor
• Department of Philosophy, University of Calgary (September 1981 – June 1988), Full Professor
• Department of Philosophy, University of Calgary (September 1988 – March 2016), Emerita Professor

Activities while at Calgary
• Taught graduate courses on foundations of mathematics and the philosophy and methodology of the sciences
• Began work on the monograph Gödel's theorems: a workbook on formalization[13]

Non-academic employment
• Consultant for Remington Rand (Univac) in Philadelphia
• Consultant for Hughes Aircraft in Los Angeles

Later life

[External image: Verena Huber-Dyson in later life.][14]

After retiring from Calgary, Verena Huber-Dyson moved back to South Pender Island in British Columbia, where she lived for 14 years.[15][16] She died on March 12, 2016, in Bellingham, Washington, at the age of 92.[1][7]

Selected publications

"There is more to truth than can be caught by proof." — Roberts 2016

Monographs
• Haefeli-Huber, Verena Esther (1948). Ein Dualismus als Klassifikationsprinzip in der abstrakten Gruppentheorie [A dualism as a classification principle in abstract group theory] (PhD) (in German). Zurich University. OCLC 2277810.
• Roggenkamp, Klaus W.; Huber-Dyson, Verena (1970). Lattices over Orders I. Lecture Notes in Mathematics (No 115). Springer-Verlag. doi:10.1007/BFb0068796. ISBN 978-3-540-04904-3.
• Huber-Dyson, Verena (1991). Gödel's theorems: a workbook on formalization. Teubner-Texte zur Mathematik, vol. 122. B.G. Teubner Verlagsgesellschaft. ISBN 978-3-8154-2023-2.
Articles

[External image: Verena Huber-Dyson, July 28, 2006.][17]

• Huber-Dyson, Verena; Kreisel, Georg (1961). "Analysis of Beth's Semantic Construction of Intuitionistic Logic". Stanford Research Report. 3.
• Huber-Dyson, Verena (1964). "On the Decision Problem for Theories of Finite Models". Israel Journal of Mathematics. 2 (1): 55–70. doi:10.1007/bf02759735. S2CID 122395102.
• Huber-Dyson, Verena (1965). "Strong representability of Number-Theoretic Functions". Hughes Aircraft Report.
• Huber-Dyson, Verena (1969). "On the Decision Problem for Extensions of a Decidable Theory". Fundamenta Mathematicae. 64: 7–40. doi:10.4064/fm-64-1-7-40.
• Huber-Dyson, Verena (1974). "A Family of Groups with Nice Word Problems". Journal of the Australian Mathematical Society. 17.
• Huber-Dyson, Verena (1977). "Talking about Free Groups in Naturally Enriched Languages". Communications in Algebra. 5 (11): 1163–1191. doi:10.1080/00927877708822214.
• Huber-Dyson, Verena (1979). "An Inductive Theory for Free Products of Groups". Algebra Universalis. 9: 35–44. doi:10.1007/BF02488014. S2CID 119943802.
• Huber-Dyson, Verena (1981). "A Reduction of the Open Sentence Problem for Finite Groups". Bulletin of the London Mathematical Society. 13 (4): 331–338. doi:10.1112/blms/13.4.331.
• Huber-Dyson, Verena (1982). "Symmetric Groups and the Open Sentence Problem". Patras Logic Symposium. North-Holland.
• Huber-Dyson, Verena (1982). "Finiteness Conditions and the Word Problem". Groups St Andrews 1981. LMS Lecture Notes. Vol. 71.
• Huber-Dyson, Verena; Jones, James Parks; Shepherdson, John Cedric (1982). "Some Diophantine Forms of Gödel's Theorem". Archiv für Mathematische Logik. 22.
• Huber-Dyson, Verena (1982). "Decision Problems in Group Theory". Recent Trends in Mathematics, Reinhardsbrunn 1982. Teubner Texte zur Mathematik. Vol. 50.
• Huber-Dyson, Verena (1984). "HNN-constructing Finite Groups". Groups Korea 1983. Springer Lecture Notes in Mathematics. Vol. 1098.
• Huber-Dyson, Verena (1981). "Critical Notice on Gödel, Escher, Bach by D.R. Hofstadter". Canadian Journal of Philosophy. 11 (4).
• Huber-Dyson, Verena (1996). "Thoughts on the Occasion of Kreisel's 70th Birthday". In Odifreddi (ed.). Kreiseliana, about and around George Kreisel. AK Peters.
• Huber-Dyson, Verena (June 1996). "Shrieks and Shadows Over the Notices" (PDF). Notices of the AMS. 43 (6): 653. Retrieved 2 November 2020.
• Huber-Dyson, Verena (15 February 1998). "On The Nature Of Mathematical Concepts: Why And How Do Mathematicians Jump To Conclusions?". Edge.org. Retrieved 2020-02-26.
• Huber-Dyson, Verena (27 July 2005). "Gödel And The Nature Of Mathematical Truth II". Edge.org. Retrieved 2020-02-26.
• Huber-Dyson, Verena (13 May 2006). "Gödel in a Nutshell". edge.org. Retrieved 2 November 2020.

References

Citations
1. "Obituary of Verena Huber-Dyson". Moles Farewell Tributes. 12 March 2016. Archived from the original on 2020-02-26. Retrieved 2020-02-26.
2. Dawidoff 2009.
3. Schewe 2013, p. 52.
4. Haefeli-Huber 1948.
5. O'Connor, John J.; Robertson, Edmund F., "Freeman Dyson", MacTutor History of Mathematics Archive, University of St Andrews
6. Verena Huber-Dyson at the Mathematics Genealogy Project
7. Brockman 2016.
8. Feferman & Feferman 2004, pp. 272–276.
9. "A Community of Scholars". Institute for Advanced Study. Archived from the original on 2013-01-07. Retrieved March 14, 2014.
10. Schewe 2013, p. 72.
11. Huber-Dyson 2006.
12. Schewe 2013.
13. Huber-Dyson 1991.
14. Sherman 2009.
15. Huber-Dyson 1996a, p. 653.
16. Brooks 2002, p. 20.
17. Verena Huber-Dyson on Flickr

Sources
• Brockman, John (13 March 2016). "Verena Huber-Dyson May 6, 1923—March 12, 2016". edge.org. Retrieved 2 November 2020.
• Brooks, Pamela (3 January 2002). "Pender Snippets" (PDF). Gulf Island Driftwood. p. 20. Retrieved 2 November 2020.
• Dawidoff, Nicholas (25 March 2009). "The Civil Heretic". The New York Times. Retrieved 30 October 2020.
• Feferman, Solomon; Feferman, Anita (2004). Alfred Tarski: Life and Logic. Cambridge: University Press. ISBN 9780521802406.
• Roberts, Siobhan (29 June 2016). "Waiting for Gödel". The New Yorker. Retrieved 2 November 2020.
• Schewe, Phillip (2013). "Maverick Genius: The Pioneering Odyssey of Freeman Dyson". Physics Today. 66 (6): 52. Bibcode:2013PhT....66f..52B. doi:10.1063/PT.3.2012.
• Sherman, Linda (24 March 2009). "Esther Dyson Visionary Extraordinaire". Its Different For Girls. Retrieved 2 November 2020.
Verhoeff algorithm

The Verhoeff algorithm[1] is a checksum for error detection first published by Dutch mathematician Jacobus Verhoeff in 1969.[2][3] It was the first decimal check digit algorithm which detects all single-digit errors and all transposition errors involving two adjacent digits,[4] which was at the time thought impossible with such a code.

Goals

Verhoeff had the goal of finding a decimal code—one where the check digit is a single decimal digit—which detected all single-digit errors and all transpositions of adjacent digits. At the time, supposed proofs of the nonexistence[5] of these codes made base-11 codes popular, for example in the ISBN check digit. His goals were also practical, and he based the evaluation of different codes on live data from the Dutch postal system, using a weighted points system for different kinds of error. The analysis broke the errors down into a number of categories: first, by how many digits are in error; for those with two digits in error, there are transpositions (ab → ba), twins (aa → bb), jump transpositions (abc → cba), phonetic (1a → a0), and jump twins (aba → cbc). Additionally there are omitted and added digits. Although the frequencies of some of these kinds of errors might be small, some codes might be immune to them in addition to the primary goals of detecting all singles and transpositions. The phonetic errors in particular showed linguistic effects, because in Dutch, numbers are typically read in pairs; and while 50 sounds similar to 15 in Dutch, 80 doesn't sound like 18. Taking six-digit numbers as an example, Verhoeff reported the following classification of the errors:

Digits in error | Classification | Count | Frequency
1 | Transcription | 9,574 | 79.05%
2 | Transpositions | 1,237 | 10.21%
2 | Twins | 67 | 0.55%
2 | Phonetic | 59 | 0.49%
2 | Other adjacent | 232 | 1.92%
2 | Jump transpositions | 99 | 0.82%
2 | Jump twins | 35 | 0.29%
2 | Other jump errors | 43 | 0.36%
2 | Other | 98 | 0.81%
3 | (all) | 169 | 1.40%
4 | (all) | 118 | 0.97%
5 | (all) | 219 | 1.81%
6 | (all) | 162 | 1.34%
Total | | 12,112 |

Description

The general idea of the algorithm is to represent each of the digits (0 through 9) as elements of the dihedral group $D_{5}$. That is, map digits to $D_{5}$, manipulate these, then map back into digits. Let this mapping be $m:[0,9]\to D_{5}$

$m={\begin{pmatrix}0&1&2&3&4&5&6&7&8&9\\e&r&r^{2}&r^{3}&r^{4}&s&rs&r^{2}s&r^{3}s&r^{4}s\end{pmatrix}}$

Let the nth digit be $a_{n}$ and let the number of digits be $k$. For example, given the code 248, $k$ is 3 and $a_{3}=m(8)=r^{3}s$. Now define the permutation $f:D_{5}\to D_{5}$

$f={\begin{pmatrix}e&r&r^{2}&r^{3}&r^{4}&s&rs&r^{2}s&r^{3}s&r^{4}s\\r&s&r^{2}s&rs&r^{2}&r^{3}s&r^{3}&e&r^{4}s&r^{4}\end{pmatrix}}$

For example, $f(r^{3})=rs$. Another example is $f^{2}(r^{3})=r^{3}$, since $f(f(r^{3}))=f(rs)=r^{3}$. Using multiplicative notation for the group operation of $D_{5}$, the check digit is then simply a value $c$ such that

$f(a_{1})\cdot f^{2}(a_{2})\cdot \ldots \cdot f^{k}(a_{k})\cdot f^{k+1}(c)=e$

$c$ is given explicitly by the inverse permutation

$c=f^{-1-k}\left(\prod _{n=1}^{k}f^{n}(a_{n})^{-1}\right)$

For example, the check digit for 248 is 5. To verify this, use the mapping to $D_{5}$ and insert into the LHS of the previous equation

$f(r^{2})\cdot f^{2}(r^{4})\cdot f^{3}(r^{3}s)\cdot f^{4}(s)=e$

To evaluate this permutation quickly, use that

$f^{4}(s)=f^{3}(r^{3}s)=f^{2}(r^{4})=f(r^{2})=r^{2}s$

to get that

$r^{2}s\cdot r^{2}s\cdot r^{2}s\cdot r^{2}s=e$

This is the same reflection being iteratively multiplied.
Use that reflections are their own inverse:[6]

$(r^{2}s\cdot r^{2}s)\cdot (r^{2}s\cdot r^{2}s)=e^{2}=e$

In practice the algorithm is implemented using simple lookup tables, without needing to understand how to generate those tables from the underlying group and permutation theory. This is more properly considered a family of algorithms, as other permutations work too. Verhoeff notes that the particular permutation given above is special, as it has the property of detecting 95.3% of the phonetic errors.[7]

The strengths of the algorithm are that it detects all transliteration and transposition errors, and additionally most twin, twin jump, jump transposition and phonetic errors. The main weakness of the Verhoeff algorithm is its complexity: the calculations required cannot easily be expressed as a formula in, say, $\mathbb {Z} /10\mathbb {Z} $; lookup tables are required for easy calculation. A similar code is the Damm algorithm, which has similar qualities.

Table-based algorithm

The Verhoeff algorithm can be implemented using three tables: a multiplication table d, an inverse table inv, and a permutation table p.

$d(j,k)$:[8]

j\k | 0 1 2 3 4 5 6 7 8 9
0 | 0 1 2 3 4 5 6 7 8 9
1 | 1 2 3 4 0 6 7 8 9 5
2 | 2 3 4 0 1 7 8 9 5 6
3 | 3 4 0 1 2 8 9 5 6 7
4 | 4 0 1 2 3 9 5 6 7 8
5 | 5 9 8 7 6 0 4 3 2 1
6 | 6 5 9 8 7 1 0 4 3 2
7 | 7 6 5 9 8 2 1 0 4 3
8 | 8 7 6 5 9 3 2 1 0 4
9 | 9 8 7 6 5 4 3 2 1 0

$inv(j)$:

j | 0 1 2 3 4 5 6 7 8 9
inv(j) | 0 4 3 2 1 5 6 7 8 9

$p(pos,num)$:

pos (mod 8)\num | 0 1 2 3 4 5 6 7 8 9
0 | 0 1 2 3 4 5 6 7 8 9
1 | 1 5 7 6 2 8 3 0 9 4
2 | 5 8 0 3 7 9 6 1 4 2
3 | 8 9 1 6 0 4 3 5 2 7
4 | 9 4 5 3 1 2 6 8 7 0
5 | 4 2 8 6 5 7 3 9 0 1
6 | 2 7 9 3 8 0 6 4 1 5
7 | 7 0 4 6 9 1 3 2 5 8

The first table, d, is based on multiplication in the dihedral group D5[6] and is simply the Cayley table of the group. Note that this group is not commutative, that is, for some values of j and k, d(j,k) ≠ d(k,j). The inverse table inv represents the multiplicative inverse of a digit, that is, the value that satisfies d(j, inv(j)) = 0. The permutation table p applies a permutation to each digit based on its position in the number. This is actually a single permutation (1 5 8 9 4 2 7 0)(3 6) applied iteratively; i.e. p(i+j,n) = p(i, p(j,n)).

The Verhoeff checksum calculation is performed as follows:
1. Create an array n out of the individual digits of the number, taken from right to left (rightmost digit is n0, etc.).
2. Initialize the checksum c to zero.
3. For each index i of the array n, starting at zero, replace c with $d(c,p(i{\bmod {8}},n_{i}))$.

The original number is valid if and only if $c=0$. To generate a check digit, append a 0 and perform the calculation: the correct check digit is $inv(c)$. A short implementation following this procedure is given after the references below.

Examples

Generate a check digit for 236:

i | ni | p(i,ni) | c
0 | 0 | 0 | 0
1 | 6 | 3 | 3
2 | 3 | 3 | 1
3 | 2 | 1 | 2

c is 2, so the check digit is inv(2), which is 3.

Validate the check digit 2363:

i | ni | p(i,ni) | c
0 | 3 | 3 | 3
1 | 6 | 3 | 1
2 | 3 | 3 | 4
3 | 2 | 1 | 0

c is zero, so the check is correct.

References
1. Verhoeff, J. (1969). Error Detecting Decimal Codes (Tract 29). The Mathematical Centre, Amsterdam. Bibcode:1971ZaMM...51..240N. doi:10.1002/zamm.19710510323.
2. Kirtland, Joseph (2001). "5. Group Theory and the Verhoeff Check Digit Scheme". Identification Numbers and Check Digit Schemes. Mathematical Association of America. p. 153. ISBN 0-88385-720-0.
3. Salomon, David (2005). "§2.11 The Verhoeff Check Digit Method". Coding for Data and Computer Communications. Springer. pp. 56–58. ISBN 0-387-21245-0.
4. Haunsperger, Deanna; Kennedy, Stephen, eds. (2006). The Edge of the Universe: Celebrating Ten Years of Math Horizons. Mathematical Association of America. p. 38. ISBN 978-0-88385-555-3. LCCN 2005937266.
5. Sisson, Roger L. (May 1958). "An improved decimal redundancy check". Communications of the ACM. 1 (5): 10–12. doi:10.1145/368819.368854.
6. Gallian, Joseph A. (2010). Contemporary Abstract Algebra (7th ed.). Brooks/Cole. p. 111. ISBN 978-0-547-16509-7. LCCN 2008940386. Retrieved August 26, 2011.
7. Verhoeff 1969, p. 95
8. Verhoeff 1969, p. 83

External links

Wikibooks has a book on the topic of: Algorithm_Implementation/Checksums/Verhoeff_Algorithm
• Detailed description of the Verhoeff algorithm
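The table-based procedure above translates directly into code. A minimal Python sketch (function names are our own) using the d, inv, and p tables exactly as given in the article:

```python
# Verhoeff check-digit scheme, using the d (dihedral multiplication),
# inv (inverse), and p (position permutation) tables from the article.
d = [[0,1,2,3,4,5,6,7,8,9],
     [1,2,3,4,0,6,7,8,9,5],
     [2,3,4,0,1,7,8,9,5,6],
     [3,4,0,1,2,8,9,5,6,7],
     [4,0,1,2,3,9,5,6,7,8],
     [5,9,8,7,6,0,4,3,2,1],
     [6,5,9,8,7,1,0,4,3,2],
     [7,6,5,9,8,2,1,0,4,3],
     [8,7,6,5,9,3,2,1,0,4],
     [9,8,7,6,5,4,3,2,1,0]]
inv = [0,4,3,2,1,5,6,7,8,9]
p = [[0,1,2,3,4,5,6,7,8,9],
     [1,5,7,6,2,8,3,0,9,4],
     [5,8,0,3,7,9,6,1,4,2],
     [8,9,1,6,0,4,3,5,2,7],
     [9,4,5,3,1,2,6,8,7,0],
     [4,2,8,6,5,7,3,9,0,1],
     [2,7,9,3,8,0,6,4,1,5],
     [7,0,4,6,9,1,3,2,5,8]]

def checksum(number):
    """Fold the digits of `number`, right to left, through the tables."""
    c = 0
    for i, digit in enumerate(reversed(str(number))):
        c = d[c][p[i % 8][int(digit)]]
    return c

def check_digit(number):
    """Append 0, run the checksum, and invert the result."""
    return inv[checksum(str(number) + "0")]

def validate(number):
    """A number with its check digit appended checksums to 0."""
    return checksum(number) == 0

print(check_digit(236))   # 3, as in the worked example above
print(validate(2363))     # True
```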
Logistic function

A logistic function or logistic curve is a common S-shaped curve (sigmoid curve) with the equation

$f(x)={\frac {L}{1+e^{-k(x-x_{0})}}}$

where

• $x_{0}$ is the $x$ value of the function's midpoint;
• $L$ is the supremum of the values of the function;
• $k$ is the logistic growth rate or steepness of the curve.[1]

For values of $x$ in the domain of real numbers from $-\infty $ to $+\infty $, the familiar S-curve is obtained, with the graph of $f$ approaching $L$ as $x$ approaches $+\infty $ and approaching zero as $x$ approaches $-\infty $. The logistic function finds applications in a range of fields, including biology (especially ecology), biomathematics, chemistry, demography, economics, geoscience, mathematical psychology, probability, sociology, political science, linguistics, statistics, and artificial neural networks. A generalization of the logistic function is the hyperbolastic function of type I. The standard logistic function, where $L=1,k=1,x_{0}=0$, is sometimes simply called the sigmoid.[2] It is also sometimes called the expit, being the inverse of the logit.[3][4]

History

The logistic function was introduced in a series of three papers by Pierre François Verhulst between 1838 and 1847, who devised it as a model of population growth by adjusting the exponential growth model, under the guidance of Adolphe Quetelet.[5] Verhulst first devised the function in the mid 1830s, publishing a brief note in 1838,[1] then presented an expanded analysis and named the function in 1844 (published 1845);[lower-alpha 1][6] the third paper adjusted the correction term in his model of Belgian population growth.[7]

The initial stage of growth is approximately exponential (geometric); then, as saturation begins, the growth slows to linear (arithmetic), and at maturity, growth stops. Verhulst did not explain the choice of the term "logistic" (French: logistique), but it is presumably in contrast to the logarithmic curve,[8][lower-alpha 2] and by analogy with arithmetic and geometric. His growth model is preceded by a discussion of arithmetic growth and geometric growth (whose curve he calls a logarithmic curve, instead of the modern term exponential curve), and thus "logistic growth" is presumably named by analogy, logistic being from Ancient Greek: λογῐστῐκός, romanized: logistikós, a traditional division of Greek mathematics.[lower-alpha 3] The term is unrelated to the military and management term logistics, which is instead from French: logis "lodgings", though some believe the Greek term also influenced logistics; see Logistics § Origin for details.

Mathematical properties

The standard logistic function is the logistic function with parameters $k=1$, $x_{0}=0$, $L=1$, which yields

$f(x)={\frac {1}{1+e^{-x}}}={\frac {e^{x}}{e^{x}+1}}={\frac {1}{2}}+{\frac {1}{2}}\tanh \left({\frac {x}{2}}\right).$

In practice, due to the nature of the exponential function $e^{-x}$, it is often sufficient to compute the standard logistic function for $x$ over a small range of real numbers, such as a range contained in [−6, +6], as it quickly converges very close to its saturation values of 0 and 1.

The logistic function has the symmetry property that

$1-f(x)=f(-x).$

Thus, $x\mapsto f(x)-1/2$ is an odd function. A small code sketch of these definitions and properties follows.
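The following is a minimal sketch transcribing the definition above (the parameter names mirror the equation; defaults give the standard logistic function):

```python
import math

def logistic(x, L=1.0, k=1.0, x0=0.0):
    """General logistic curve f(x) = L / (1 + exp(-k*(x - x0)))."""
    return L / (1.0 + math.exp(-k * (x - x0)))

# Standard logistic function (L = 1, k = 1, x0 = 0): value 1/2 at the
# midpoint, saturating toward 0 and 1 in the tails.
print(logistic(0.0))    # 0.5
print(logistic(6.0))    # ~0.9975, already close to the supremum L = 1
print(logistic(-6.0))   # ~0.0025, already close to 0

# Symmetry property: 1 - f(x) == f(-x)
x = 1.7
print(abs((1 - logistic(x)) - logistic(-x)) < 1e-12)   # True
```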
The logistic function is an offset and scaled hyperbolic tangent function:

$f(x)={\frac {1}{2}}+{\frac {1}{2}}\tanh \left({\frac {x}{2}}\right),$

or

$\tanh(x)=2f(2x)-1.$

This follows from

${\begin{aligned}\tanh(x)&={\frac {e^{x}-e^{-x}}{e^{x}+e^{-x}}}={\frac {e^{x}\cdot \left(1-e^{-2x}\right)}{e^{x}\cdot \left(1+e^{-2x}\right)}}\\&=f(2x)-{\frac {e^{-2x}}{1+e^{-2x}}}=f(2x)-{\frac {e^{-2x}+1-1}{1+e^{-2x}}}=2f(2x)-1.\end{aligned}}$

Derivative

The standard logistic function has an easily calculated derivative. The derivative is known as the density of the logistic distribution:

$f(x)={\frac {1}{1+e^{-x}}}={\frac {e^{x}}{1+e^{x}}},$

${\frac {\mathrm {d} }{\mathrm {d} x}}f(x)={\frac {e^{x}\cdot (1+e^{x})-e^{x}\cdot e^{x}}{(1+e^{x})^{2}}}={\frac {e^{x}}{(1+e^{x})^{2}}}=f(x){\big (}1-f(x){\big )}$

The logistic distribution has mean $x_{0}$ and variance $\pi ^{2}/3k^{2}$.

Integral

Conversely, its antiderivative can be computed by the substitution $u=1+e^{x}$, since $f(x)={\frac {e^{x}}{1+e^{x}}}={\frac {u'}{u}}$, so (dropping the constant of integration)

$\int {\frac {e^{x}}{1+e^{x}}}\,dx=\int {\frac {1}{u}}\,du=\ln u=\ln(1+e^{x}).$

In artificial neural networks, this is known as the softplus function and (with scaling) is a smooth approximation of the ramp function, just as the logistic function (with scaling) is a smooth approximation of the Heaviside step function.

Logistic differential equation

The standard logistic function is the solution of the simple first-order non-linear ordinary differential equation

${\frac {d}{dx}}f(x)=f(x){\big (}1-f(x){\big )}$

with boundary condition $f(0)=1/2$. This equation is the continuous version of the logistic map. Note that the reciprocal logistic function is the solution to a simple first-order linear ordinary differential equation.[9]

The qualitative behavior is easily understood in terms of the phase line: the derivative is 0 when the function is 0 or 1, positive for $f$ between 0 and 1, and negative for $f$ above 1 or less than 0 (though negative populations do not generally accord with a physical model). This yields an unstable equilibrium at 0 and a stable equilibrium at 1, and thus for any function value greater than 0 and less than 1, it grows to 1.

The logistic equation is a special case of the Bernoulli differential equation and has the following solution:

$f(x)={\frac {e^{x}}{e^{x}+C}}.$

Choosing the constant of integration $C=1$ gives the other well-known form of the definition of the logistic curve:

$f(x)={\frac {e^{x}}{e^{x}+1}}={\frac {1}{1+e^{-x}}}.$

More quantitatively, as can be seen from the analytical solution, the logistic curve shows early exponential growth for negative argument, which transitions to linear growth of slope 1/4 for an argument near 0, then approaches 1 with an exponentially decaying gap.

The logistic function is the inverse of the natural logit function

$\operatorname {logit} p=\log {\frac {p}{1-p}}{\text{ for }}0<p<1$

and so converts the logarithm of odds into a probability. The conversion from the log-likelihood ratio of two alternatives also takes the form of a logistic curve.

The differential equation derived above is a special case of a general differential equation that only models the sigmoid function for $x>0$. In many modeling applications, the more general form[10]

${\frac {df(x)}{dx}}={\frac {k}{a}}f(x){\big (}a-f(x){\big )},\quad f(0)={\frac {a}{1+e^{kr}}}$

can be desirable. Its solution is the shifted and scaled sigmoid $aS{\big (}k(x-r){\big )}$.
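A numerical cross-check of the logistic differential equation (a sketch assuming SciPy is available): integrating f' = f(1 − f) from f(0) = 1/2 reproduces the closed-form standard logistic curve.

```python
# Integrate f'(x) = f(x) * (1 - f(x)) with f(0) = 1/2 and compare the
# numerical solution against the closed-form standard logistic function.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, f):
    return f * (1.0 - f)

xs = np.linspace(0.0, 6.0, 61)
sol = solve_ivp(rhs, (0.0, 6.0), [0.5], t_eval=xs, rtol=1e-9, atol=1e-12)

closed_form = 1.0 / (1.0 + np.exp(-xs))
print(np.max(np.abs(sol.y[0] - closed_form)))   # tiny: the curves agree
```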
The hyperbolic-tangent relationship leads to another form for the logistic function's derivative:

${\frac {d}{dx}}f(x)={\frac {1}{4}}\operatorname {sech} ^{2}\left({\frac {x}{2}}\right),$

which ties the logistic function into the logistic distribution.

Rotational symmetry about (0, 1/2)

The sum of the logistic function and its reflection about the vertical axis, $f(-x)$, is

${\frac {1}{1+e^{-x}}}+{\frac {1}{1+e^{-(-x)}}}={\frac {e^{x}}{e^{x}+1}}+{\frac {1}{e^{x}+1}}=1.$

The logistic function is thus rotationally symmetrical about the point (0, 1/2).[11]

Applications

Link[12] created an extension of Wald's theory of sequential analysis to a distribution-free accumulation of random variables until either a positive or negative bound is first equaled or exceeded. Link[13] derives the probability of first equaling or exceeding the positive boundary as $1/(1+e^{-\theta A})$, the logistic function. This is the first proof that the logistic function may have a stochastic process as its basis. Link[14] provides a century of examples of "logistic" experimental results and a newly derived relation between this probability and the time of absorption at the boundaries.

In ecology: modeling population growth

A typical application of the logistic equation is a common model of population growth (see also population dynamics), originally due to Pierre-François Verhulst in 1838, where the rate of reproduction is proportional to both the existing population and the amount of available resources, all else being equal. The Verhulst equation was published after Verhulst had read Thomas Malthus' An Essay on the Principle of Population, which describes the Malthusian growth model of simple (unconstrained) exponential growth. Verhulst derived his logistic equation to describe the self-limiting growth of a biological population. The equation was rediscovered in 1911 by A. G. McKendrick for the growth of bacteria in broth and experimentally tested using a technique for nonlinear parameter estimation.[15] The equation is also sometimes called the Verhulst–Pearl equation following its rediscovery in 1920 by Raymond Pearl (1879–1940) and Lowell Reed (1888–1966) of the Johns Hopkins University.[16] Another scientist, Alfred J. Lotka, derived the equation again in 1925, calling it the law of population growth.

Letting $P$ represent population size ($N$ is often used in ecology instead) and $t$ represent time, this model is formalized by the differential equation:

${\frac {dP}{dt}}=rP\left(1-{\frac {P}{K}}\right),$

where the constant $r$ defines the growth rate and $K$ is the carrying capacity. In the equation, the early, unimpeded growth rate is modeled by the first term $+rP$. The value of the rate $r$ represents the proportional increase of the population $P$ in one unit of time. Later, as the population grows, the modulus of the second term (which multiplied out is $-rP^{2}/K$) becomes almost as large as the first, as some members of the population $P$ interfere with each other by competing for some critical resource, such as food or living space. This antagonistic effect is called the bottleneck, and is modeled by the value of the parameter $K$. The competition diminishes the combined growth rate, until the value of $P$ ceases to grow (this is called maturity of the population).
The solution to the equation (with $P_{0}$ being the initial population) is

$P(t)={\frac {KP_{0}e^{rt}}{K+P_{0}\left(e^{rt}-1\right)}}={\frac {K}{1+\left({\frac {K-P_{0}}{P_{0}}}\right)e^{-rt}}},$

where

$\lim _{t\to \infty }P(t)=K,$

that is, $K$ is the limiting value of $P$: the highest value that the population can reach given infinite time (or come close to reaching in finite time). It is important to stress that the carrying capacity is asymptotically reached independently of the initial value $P(0)>0$, and also in the case that $P(0)>K$.

In ecology, species are sometimes referred to as $r$-strategists or $K$-strategists depending upon the selective processes that have shaped their life history strategies. Choosing the variable dimensions so that $n$ measures the population in units of carrying capacity, and $\tau $ measures time in units of $1/r$, gives the dimensionless differential equation

${\frac {dn}{d\tau }}=n(1-n).$

Integral

The antiderivative of the ecological form of the logistic function can be computed by the substitution $u=K+P_{0}\left(e^{rt}-1\right)$, since $du=rP_{0}e^{rt}\,dt$:

$\int {\frac {KP_{0}e^{rt}}{K+P_{0}\left(e^{rt}-1\right)}}\,dt=\int {\frac {K}{r}}{\frac {1}{u}}\,du={\frac {K}{r}}\ln u+C={\frac {K}{r}}\ln \left(K+P_{0}(e^{rt}-1)\right)+C$

Time-varying carrying capacity

Since the environmental conditions influence the carrying capacity, it can be time-varying, with $K(t)>0$, leading to the following mathematical model:

${\frac {dP}{dt}}=rP\cdot \left(1-{\frac {P}{K(t)}}\right).$

A particularly important case is that of a carrying capacity that varies periodically with period $T$:

$K(t+T)=K(t).$

It can be shown[17] that in such a case, independently of the initial value $P(0)>0$, $P(t)$ will tend to a unique periodic solution $P_{*}(t)$, whose period is $T$. A typical value of $T$ is one year; in that case $K(t)$ may reflect periodic variations of weather conditions.

Another interesting generalization is to consider that the carrying capacity $K(t)$ is a function of the population at an earlier time, capturing a delay in the way the population modifies its environment. This leads to a logistic delay equation,[18] which has a very rich behavior, with bistability in some parameter range, as well as a monotonic decay to zero, smooth exponential growth, punctuated unlimited growth (i.e., multiple S-shapes), punctuated growth or alternation to a stationary level, oscillatory approach to a stationary level, sustainable oscillations, finite-time singularities as well as finite-time death.

In statistics and machine learning

Logistic functions are used in several roles in statistics. For example, they are the cumulative distribution function of the logistic family of distributions, and they are, a bit simplified, used to model the chance a chess player has to beat their opponent in the Elo rating system. More specific examples now follow.

Logistic regression

Main article: Logistic regression

Logistic functions are used in logistic regression to model how the probability $p$ of an event may be affected by one or more explanatory variables: an example would be to have the model

$p=f(a+bx),$

where $x$ is the explanatory variable, $a$ and $b$ are model parameters to be fitted, and $f$ is the standard logistic function; a small fitting sketch follows below. Logistic regression and other log-linear models are also commonly used in machine learning. A generalisation of the logistic function to multiple inputs is the softmax activation function, used in multinomial logistic regression.
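The following is a minimal sketch of logistic regression fitted by gradient ascent on the Bernoulli log-likelihood (pure NumPy; the data are synthetic and the parameter values are illustrative assumptions, not from any source above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: one explanatory variable, true parameters a = -1, b = 2.
x = rng.normal(size=500)
p_true = 1.0 / (1.0 + np.exp(-(-1.0 + 2.0 * x)))
y = rng.random(500) < p_true           # Bernoulli outcomes

a, b = 0.0, 0.0                        # model: p = f(a + b*x)
lr = 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(a + b * x)))
    # Gradient of the mean log-likelihood with respect to (a, b)
    a += lr * np.mean(y - p)
    b += lr * np.mean((y - p) * x)

print(a, b)   # estimates close to the true values (-1, 2)
```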
Another application of the logistic function is in the Rasch model, used in item response theory. In particular, the Rasch model forms a basis for maximum likelihood estimation of the locations of objects or persons on a continuum, based on collections of categorical data, for example the abilities of persons on a continuum based on responses that have been categorized as correct and incorrect.

Neural networks

Logistic functions are often used in neural networks to introduce nonlinearity in the model or to clamp signals to within a specified interval. A popular neural net element computes a linear combination of its input signals, and applies a bounded logistic function as the activation function to the result; this model can be seen as a "smoothed" variant of the classical threshold neuron. A common choice for the activation or "squashing" function, used to clip signals of large magnitude and so keep the response of the neural network bounded,[19] is

$g(h)={\frac {1}{1+e^{-2\beta h}}},$

which is a logistic function. These relationships result in simplified implementations of artificial neural networks with artificial neurons. Practitioners caution that sigmoidal functions which are antisymmetric about the origin (e.g. the hyperbolic tangent) lead to faster convergence when training networks with backpropagation.[20] The logistic function is itself the derivative of another proposed activation function, the softplus.

In medicine: modeling of growth of tumors

See also: Gompertz curve § Growth of tumors

Another application of the logistic curve is in medicine, where the logistic differential equation is used to model the growth of tumors. This application can be considered an extension of the above-mentioned use in the framework of ecology (see also the Generalized logistic curve, allowing for more parameters). Denoting with $X(t)$ the size of the tumor at time $t$, its dynamics are governed by

$X'=r\left(1-{\frac {X}{K}}\right)X,$

which is of the type

$X'=F(X)X,\quad F'(X)\leq 0,$

where $F(X)$ is the proliferation rate of the tumor.

If a chemotherapy is started with a log-kill effect, the equation may be revised to be

$X'=r\left(1-{\frac {X}{K}}\right)X-c(t)X,$

where $c(t)$ is the therapy-induced death rate. In the idealized case of very long therapy, $c(t)$ can be modeled as a periodic function (of period $T$) or (in case of continuous infusion therapy) as a constant function, and one has that

${\frac {1}{T}}\int _{0}^{T}c(t)\,dt>r\to \lim _{t\to +\infty }x(t)=0,$

i.e. if the average therapy-induced death rate is greater than the baseline proliferation rate, then the disease is eradicated. Of course, this is an oversimplified model of both the growth and the therapy (e.g. it does not take into account the phenomenon of clonal resistance).

In medicine: modeling of a pandemic

A novel infectious pathogen to which a population has no immunity will generally spread exponentially in the early stages, while the supply of susceptible individuals is plentiful.
The SARS-CoV-2 virus that causes COVID-19 exhibited exponential growth early in the course of infection in several countries in early 2020.[21] Factors including a lack of susceptible hosts (through the continued spread of infection until it passes the threshold for herd immunity) or reduction in the accessibility of potential hosts through physical distancing measures, may result in exponential-looking epidemic curves first linearizing (replicating the "logarithmic" to "logistic" transition first noted by Pierre-François Verhulst, as noted above) and then reaching a maximal limit.[22] A logistic function, or related functions (e.g. the Gompertz function), are usually used in a descriptive or phenomenological manner because they fit well not only to the early exponential rise, but to the eventual levelling off of the pandemic as the population develops a herd immunity. This is in contrast to actual models of pandemics which attempt to formulate a description based on the dynamics of the pandemic (e.g. contact rates, incubation times, social distancing, etc.). Some simple models have been developed, however, which yield a logistic solution.[23][24][25]

Modeling early COVID-19 cases

A generalized logistic function, also called the Richards growth curve, has been applied to model the early phase of the COVID-19 outbreak.[26] The authors fit the generalized logistic function to the cumulative number of infected cases, here referred to as the infection trajectory. There are different parameterizations of the generalized logistic function in the literature. One frequently used form is

$f(t;\theta _{1},\theta _{2},\theta _{3},\xi )={\frac {\theta _{1}}{[1+\xi \exp(-\theta _{2}\cdot (t-\theta _{3}))]^{1/\xi }}}$

where $\theta _{1},\theta _{2},\theta _{3}$ are real numbers, and $\xi $ is a positive real number. The flexibility of the curve $f$ is due to the parameter $\xi $: (i) if $\xi =1$ then the curve reduces to the logistic function, and (ii) as $\xi $ approaches zero, the curve converges to the Gompertz function. In epidemiological modeling, $\theta _{1}$, $\theta _{2}$, and $\theta _{3}$ represent the final epidemic size, infection rate, and lag phase, respectively. An example infection trajectory results when $(\theta _{1},\theta _{2},\theta _{3})$ is set to $(10000,0.2,40)$.

One of the benefits of using a growth function such as the generalized logistic function in epidemiological modeling is its relatively easy application to the multilevel model framework, where information from different geographic regions can be pooled together.

In chemistry: reaction models

The concentration of reactants and products in autocatalytic reactions follows the logistic function. The degradation of platinum group metal-free (PGM-free) oxygen reduction reaction (ORR) catalysts in fuel cell cathodes follows the logistic decay function,[27] suggesting an autocatalytic degradation mechanism.

In physics: Fermi–Dirac distribution

The logistic function determines the statistical distribution of fermions over the energy states of a system in thermal equilibrium. In particular, it is the distribution of the probabilities that each possible energy level is occupied by a fermion, according to Fermi–Dirac statistics.

In optics: mirage

The logistic function also finds applications in optics, particularly in modelling phenomena such as mirages.
Under certain conditions, such as the presence of a temperature or concentration gradient due to diffusion balanced against gravity, logistic curve behaviours can emerge.[28][29] A mirage results from a temperature gradient that modifies the refractive index, which is related to the density/concentration of the material over distance; it can be modelled using a fluid whose refractive index gradient arises from a concentration gradient. This mechanism can be equated to a limiting population growth model, where the concentrated region attempts to diffuse into the lower-concentration region while seeking equilibrium with gravity, thus yielding a logistic function curve.[28] In materials science: phase diagrams See Diffusion bonding. In linguistics: language change In linguistics, the logistic function can be used to model language change:[30] an innovation that is at first marginal begins to spread more quickly with time, and then more slowly as it becomes more universally adopted. In agriculture: modeling crop response The logistic S-curve can be used for modeling the crop response to changes in growth factors. There are two types of response functions: positive and negative growth curves. For example, the crop yield may increase with increasing values of the growth factor up to a certain level (positive function), or it may decrease with increasing growth factor values (negative function, owing to a negative growth factor), in which case an inverted S-curve is required. Examples include an S-curve model for crop yield versus depth of the water table[31] and an inverted S-curve model for crop yield versus soil salinity.[32] In economics and sociology: diffusion of innovations The logistic function can be used to illustrate the progress of the diffusion of an innovation through its life cycle. In The Laws of Imitation (1890), Gabriel Tarde describes the rise and spread of new ideas through imitative chains. In particular, Tarde identifies three main stages through which innovations spread: the first corresponds to the difficult beginnings, during which the idea has to struggle within a hostile environment full of opposing habits and beliefs; the second corresponds to the properly exponential take-off of the idea, with $f(x)=2^{x}$; finally, the third stage is logarithmic, with $f(x)=\log(x)$, and corresponds to the time when the impulse of the idea gradually slows down while, simultaneously, new opposing ideas appear. The ensuing situation halts or stabilizes the progress of the innovation, which approaches an asymptote. In a sovereign state, the subnational units (constituent states or cities) may use loans to finance their projects. However, this funding source is usually subject to strict legal rules as well as to economic scarcity constraints, especially the resources the banks can lend (due to their equity or Basel limits). These restrictions, which represent a saturation level, along with an exponential rush in an economic competition for money, create a public-finance diffusion of credit pleas, and the aggregate national response is a sigmoid curve.[33] In the history of the economy, when new products are introduced there is an intense amount of research and development, which leads to dramatic improvements in quality and reductions in cost. This leads to a period of rapid industry growth. Some of the more famous examples are railroads, incandescent light bulbs, electrification, cars and air travel.
Eventually, dramatic improvement and cost reduction opportunities are exhausted, the product or process is in widespread use with few remaining potential new customers, and markets become saturated. Logistic analysis was used in papers by several researchers at the International Institute for Applied Systems Analysis (IIASA). These papers deal with the diffusion of various innovations, infrastructures and energy source substitutions, the role of work in the economy, and the long economic cycle. Long economic cycles were investigated by Robert Ayres (1989).[34] Cesare Marchetti published on long economic cycles and on diffusion of innovations.[35][36] Arnulf Grübler's book (1990) gives a detailed account of the diffusion of infrastructures including canals, railroads, highways and airlines, showing that their diffusion followed logistic-shaped curves.[37] Carlota Perez used a logistic curve to illustrate the long (Kondratiev) business cycle with the following labels: beginning of a technological era as irruption, the ascent as frenzy, the rapid build-out as synergy and the completion as maturity.[38] See also • Cross fluid • Hyperbolic growth • Heaviside step function • Hill equation (biochemistry) • Hubbert curve • List of mathematical functions • STAR model • Michaelis–Menten kinetics • r/K selection theory • Rectifier (neural networks) • Shifted Gompertz distribution • Tipping point (sociology) Notes 1. The paper was presented in 1844, and published in 1845: "(Lu à la séance du 30 novembre 1844)." "(Read at the session of 30 November 1844).", p. 1. 2. Verhulst first refers to arithmetic progression and geometric progression, and refers to the geometric growth curve as a logarithmic curve (confusingly, the modern term is instead exponential curve, which is the inverse). He then calls his curve logistic, in contrast to logarithmic, and compares the logarithmic curve and logistic curve in the figure of his paper. 3. In Ancient Greece, λογῐστῐκός referred to practical computation and accounting, in contrast to ἀριθμητική (arithmētikḗ), the theoretical or philosophical study of numbers. Confusingly, in English, arithmetic refers to practical computation, even though it derives from ἀριθμητική, not λογῐστῐκός. See for example Louis Charles Karpinski, Nicomachus of Gerasa: Introduction to Arithmetic (1926) p. 3: "Arithmetic is fundamentally associated by modern readers, particularly by scientists and mathematicians, with the art of computation. For the ancient Greeks after Pythagoras, however, arithmetic was primarily a philosophical study, having no necessary connection with practical affairs. Indeed the Greeks gave a separate name to the arithmetic of business, λογιστική [accounting or practical logistic] ... In general the philosophers and mathematicians of Greece undoubtedly considered it beneath their dignity to treat of this branch, which probably formed a part of the elementary instruction of children." References 1. Verhulst, Pierre-François (1838). "Notice sur la loi que la population poursuit dans son accroissement" (PDF). Correspondance Mathématique et Physique. 10: 113–121. Retrieved 3 December 2014. 2. "Sigmoid — PyTorch 1.10.1 documentation". 3. expit documentation for R's clusterPower package. 4. "Scipy.special.expit — SciPy v1.7.1 Manual". 5. Cramer 2002, pp. 3–5. 6. Verhulst, Pierre-François (1845). "Recherches mathématiques sur la loi d'accroissement de la population" [Mathematical Researches into the Law of Population Growth Increase].
Nouveaux Mémoires de l'Académie Royale des Sciences et Belles-Lettres de Bruxelles. 18: 8. Retrieved 18 February 2013. Nous donnerons le nom de logistique à la courbe [We will give the name logistic to the curve] 7. Verhulst, Pierre-François (1847). "Deuxième mémoire sur la loi d'accroissement de la population" [Second memoir on the law of population growth]. Mémoires de l'Académie Royale des Sciences, des Lettres et des Beaux-Arts de Belgique. 20: 1–32. Retrieved 18 February 2013. 8. Shulman, Bonnie (1998). "Math-alive! using original sources to teach mathematics in social context". PRIMUS. 8 (March): 1–14. doi:10.1080/10511979808965879. The diagram clinched it for me: there two curves labeled "Logistique" and "Logarithmique" are drawn on the same axes, and one can see that there is a region where they match almost exactly, and then diverge. I concluded that Verhulst's intention in naming the curve was indeed to suggest this comparison, and that "logistic" was meant to convey the curve's "log-like" quality. 9. Kocian, Alexander; Carmassi, Giulia; Cela, Fatjon; Incrocci, Luca; Milazzo, Paolo; Chessa, Stefano (7 June 2020). "Bayesian Sigmoid-Type Time Series Forecasting with Missing Data for Greenhouse Crops". Sensors. 20 (11): 3246. Bibcode:2020Senso..20.3246K. doi:10.3390/s20113246. PMC 7309099. PMID 32517314. 10. Kyurkchiev, Nikolay, and Svetoslav Markov. "Sigmoid functions: some approximation and modelling aspects". LAP LAMBERT Academic Publishing, Saarbrucken (2015). 11. Raul Rojas. Neural Networks – A Systematic Introduction (PDF). Retrieved 15 October 2016. 12. S. W. Link, Psychometrika, 1975, 40, 1, 77–105 13. S. W. Link, Attention and Performance VII, 1978, 619–630 14. S. W. Link, The wave theory of difference and similarity (book), Taylor and Francis, 1992 15. McKendrick, A. G.; Pai, M. Kesava (January 1912). "XLV.—The Rate of Multiplication of Micro-organisms: A Mathematical Study". Proceedings of the Royal Society of Edinburgh. 31: 649–653. doi:10.1017/S0370164600025426. 16. Raymond Pearl & Lowell Reed (June 1920). "On the Rate of Growth of the Population of the United States" (PDF). Proceedings of the National Academy of Sciences of the United States of America. Vol. 6, no. 6. p. 275. 17. Griffiths, Graham; Schiesser, William (2009). "Linear and nonlinear waves". Scholarpedia. 4 (7): 4308. Bibcode:2009SchpJ...4.4308G. doi:10.4249/scholarpedia.4308. ISSN 1941-6016. 18. Yukalov, V. I.; Yukalova, E. P.; Sornette, D. (2009). "Punctuated evolution due to delayed carrying capacity". Physica D: Nonlinear Phenomena. 238 (17): 1752–1767. arXiv:0901.4714. Bibcode:2009PhyD..238.1752Y. doi:10.1016/j.physd.2009.05.011. S2CID 14456352. 19. Gershenfeld 1999, p. 150. 20. LeCun, Y.; Bottou, L.; Orr, G.; Muller, K. (1998). Orr, G.; Muller, K. (eds.). Efficient BackProp (PDF). ISBN 3-540-65311-2. 21. Worldometer: COVID-19 CORONAVIRUS PANDEMIC 22. Villalobos-Arias, Mario (2020). "Using generalized logistics regression to forecast population infected by Covid-19". arXiv:2004.02406 [q-bio.PE]. 23. Postnikov, Eugene B. (June 2020). "Estimation of COVID-19 dynamics "on a back-of-envelope": Does the simplest SIR model provide quantitative parameters and predictions?". Chaos, Solitons & Fractals. 135: 109841. Bibcode:2020CSF...13509841P. doi:10.1016/j.chaos.2020.109841. PMC 7252058. PMID 32501369. 24. Saito, Takesi (June 2020). "A Logistic Curve in the SIR Model and Its Application to Deaths by COVID-19 in Japan". medRxiv 10.1101/2020.06.25.20139865v2. 25. Reiser, Paul A. (2020).
"Modified SIR Model Yielding a Logistic Solution". arXiv:2006.01550 [q-bio.PE]. 26. Lee, Se Yoon; Lei, Bowen; Mallick, Bani (2020). "Estimation of COVID-19 spread curves integrating global data and borrowing information". PLOS ONE. 15 (7): e0236860. arXiv:2005.00662. Bibcode:2020PLoSO..1536860L. doi:10.1371/journal.pone.0236860. PMC 7390340. PMID 32726361. 27. Yin, Xi; Zelenay, Piotr (13 July 2018). "Kinetic Models for the Degradation Mechanisms of PGM-Free ORR Catalysts". ECS Transactions. 85 (13): 1239–1250. doi:10.1149/08513.1239ecst. OSTI 1471365. S2CID 103125742. 28. Tanalikhit, Pattarapon; Worakitthamrong, Thanabodi; Chaidet, Nattanon; Kanchanapusakit, Wittaya (24–25 May 2021). "Measuring refractive index gradient of sugar solution". Journal of Physics: Conference Series. 2145: 012072. doi:10.1088/1742-6596/2145/1/012072. S2CID 245811843. 29. López-Arias, T; Calzà, G; Gratton, L M; Oss, S (2009). "Mirages in a bottle". Physics Education. 44 (6): 582. doi:10.1088/0031-9120/44/6/002. S2CID 59380632. 30. Bod, Hay, Jennedy (eds.) 2003, pp. 147–156 31. Collection of data on crop production and depth of the water table in the soil of various authors. On line: 32. Collection of data on crop production and soil salinity of various authors. On line: 33. Rocha, Leno S.; Rocha, Frederico S. A.; Souza, Thársis T. P. (5 October 2017). "Is the public sector of your country a diffusion borrower? Empirical evidence from Brazil". PLOS ONE. 12 (10): e0185257. arXiv:1604.07782. Bibcode:2017PLoSO..1285257R. doi:10.1371/journal.pone.0185257. ISSN 1932-6203. PMC 5628819. PMID 28981532. 34. Ayres, Robert (February 1989). "Technological Transformations and Long Waves" (PDF). International Institute for Applied Systems Analysis. Archived from the original (PDF) on 1 March 2012. Retrieved 6 November 2010. 35. Marchetti, Cesare (1996). "Pervasive Long Waves: Is Society Cyclotymic" (PDF). Aspen Global Change INstitute. Archived from the original (PDF) on 5 March 2012. 36. Marchetti, Cesare (1988). "Kondratiev Revisited-After One Cycle" (PDF). Cesare Marchetti. 37. Grübler, Arnulf (1990). The Rise and Fall of Infrastructures: Dynamics of Evolution and Technological Change in Transport (PDF). Heidelberg and New York: Physica-Verlag. 38. Perez, Carlota (2002). Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages. UK: Edward Elgar Publishing Limited. ISBN 1-84376-331-1. • Cramer, J. S. (2002). The origins of logistic regression (PDF) (Technical report). Vol. 119. Tinbergen Institute. pp. 167–178. doi:10.2139/ssrn.360300. • Published as:Cramer, J. S. (2004). "The early origins of the logit model". Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences. 35 (4): 613–626. doi:10.1016/j.shpsc.2004.09.003. • Jannedy, Stefanie; Bod, Rens; Hay, Jennifer (2003). Probabilistic Linguistics. Cambridge, Massachusetts: MIT Press. ISBN 0-262-52338-8. • Gershenfeld, Neil A. (1999). The Nature of Mathematical Modeling. Cambridge, UK: Cambridge University Press. ISBN 978-0-521-57095-4. • Kingsland, Sharon E. (1995). Modeling nature: episodes in the history of population ecology. Chicago: University of Chicago Press. ISBN 0-226-43728-0. • Weisstein, Eric W. "Logistic Equation". MathWorld. External links Wikimedia Commons has media related to Logistic functions. • L.J. Linacre, Why logistic ogive and not autocatalytic curve?, accessed 2009-09-12. 
• https://web.archive.org/web/20060914155939/http://luna.cas.usf.edu/~mbrannic/files/regression/Logistic.html • Weisstein, Eric W. "Sigmoid Function". MathWorld. • Online experiments with JSXGraph • Esses are everywhere. • Seeing the s-curve is everything. • Restricted Logarithmic Growth with Injection
Verification and validation of computer simulation models Verification and validation of computer simulation models is conducted during the development of a simulation model with the ultimate goal of producing an accurate and credible model.[1][2] "Simulation models are increasingly being used to solve problems and to aid in decision-making. The developers and users of these models, the decision makers using information obtained from the results of these models, and the individuals affected by decisions based on such models are all rightly concerned with whether a model and its results are "correct".[3] This concern is addressed through verification and validation of the simulation model. Simulation models are approximate imitations of real-world systems and they never exactly imitate the real-world system. Due to that, a model should be verified and validated to the degree needed for the model's intended purpose or application.[3] The verification and validation of a simulation model starts after functional specifications have been documented and initial model development has been completed.[4] Verification and validation is an iterative process that takes place throughout the development of a model.[1][4] Verification In the context of computer simulation, verification of a model is the process of confirming that it is correctly implemented with respect to the conceptual model (it matches specifications and assumptions deemed acceptable for the given purpose of application).[1][4] During verification the model is tested to find and fix errors in the implementation of the model.[4] Various processes and techniques are used to assure the model matches specifications and assumptions with respect to the model concept. The objective of model verification is to ensure that the implementation of the model is correct. There are many techniques that can be utilized to verify a model. These include, but are not limited to, having the model checked by an expert, making logic flow diagrams that include each logically possible action, examining the model output for reasonableness under a variety of settings of the input parameters, and using an interactive debugger.[1] Many software engineering techniques used for software verification are applicable to simulation model verification.[1] Validation Validation checks the accuracy of the model's representation of the real system. Model validation is defined to mean "substantiation that a computerized model within its domain of applicability possesses a satisfactory range of accuracy consistent with the intended application of the model".[3] A model should be built for a specific purpose or set of objectives and its validity determined for that purpose.[3] There are many approaches that can be used to validate a computer model. The approaches range from subjective reviews to objective statistical tests. One approach that is commonly used is to have the model builders determine validity of the model through a series of tests.[3] Naylor and Finger [1967] formulated a three-step approach to model validation that has been widely followed:[1] Step 1. Build a model that has high face validity. Step 2. Validate model assumptions. Step 3. 
Compare the model input-output transformations to corresponding input-output transformations for the real system.[5] Face validity A model that has face validity appears to be a reasonable imitation of a real-world system to people who are knowledgeable about the real-world system.[4] Face validity is tested by having users and people familiar with the system examine model output for reasonableness and, in the process, identify deficiencies.[1] An added advantage of involving the users in validation is that the model's credibility to the users, and the users' confidence in the model, increases.[1][4] Sensitivity to model inputs can also be used to judge face validity.[1] For example, if a simulation of a fast-food restaurant drive-through were run twice with customer arrival rates of 20 per hour and 40 per hour, then model outputs such as the average wait time or the maximum number of customers waiting would be expected to increase with the arrival rate. Validation of model assumptions Assumptions made about a model generally fall into two categories: structural assumptions about how the system works, and data assumptions. We can also consider simplification assumptions, those used deliberately to simplify reality.[6] Structural assumptions Assumptions about how the system operates and how it is physically arranged are structural assumptions. For example: how many servers are there in a fast-food drive-through lane, and, if there is more than one, how are they utilized? Do the servers work in parallel, with a customer completing a transaction by visiting a single server, or does one server take orders and handle payment while the other prepares and serves the order? Many structural problems in a model come from poor or incorrect assumptions.[4] If possible, the workings of the actual system should be closely observed to understand how it operates.[4] The system's structure and operation should also be verified with users of the actual system.[1] Data assumptions There must be a sufficient amount of appropriate data available to build a conceptual model and validate a model. Lack of appropriate data is often the reason attempts to validate a model fail.[3] Data should be verified to come from a reliable source. A typical error is assuming an inappropriate statistical distribution for the data.[1] The assumed statistical model should be tested using goodness-of-fit tests and other techniques.[1][3] Examples of goodness-of-fit tests are the Kolmogorov–Smirnov test and the chi-square test (a small sketch of such a check is given at the end of this section). Any outliers in the data should be checked.[3] Simplification assumptions Simplification assumptions are those that are known not to hold exactly but are needed to simplify the problem at hand.[6] Their use must be restricted so that the model remains accurate enough to answer the question it was built for. Validating input-output transformations The model is viewed as an input-output transformation for these tests. The validation test consists of comparing outputs from the system under consideration to model outputs for the same set of input conditions.
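The goodness-of-fit check promised above can be sketched as follows: test whether recorded interarrival times are plausibly exponential using the Kolmogorov–Smirnov test (Python with NumPy/SciPy; the "recorded" data below are synthetic placeholders, not measurements from any real system):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Stand-in for recorded interarrival times (minutes); synthetic data,
# for illustration only. Real use would load the observed system data.
interarrivals = rng.exponential(scale=3.0, size=200)

# Data assumption: interarrival times are exponentially distributed.
scale = interarrivals.mean()  # maximum-likelihood estimate of the mean
stat, p = stats.kstest(interarrivals, "expon", args=(0, scale))
print(f"KS statistic = {stat:.3f}, p-value = {p:.3f}")
# A small p-value casts doubt on the assumed distribution. (Strictly,
# estimating the scale from the same data biases this p-value; a
# Lilliefors-style correction would be needed in careful work.)
```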
Data recorded while observing the system must be available in order to perform the input-output validation test.[3] The model output that is of primary interest should be used as the measure of performance.[1] For example, if the system under consideration is a fast-food drive-through, where the input to the model is the customer arrival times and the output measure of performance is the average customer time in line, then the actual arrival times and the times spent in line for customers at the drive-through would be recorded. The model would be run with the actual arrival times, and the model's average time in line would be compared with the actual average time spent in line using one or more tests. Hypothesis testing Statistical hypothesis testing using the t-test can be used as a basis to accept the model as valid or reject it as invalid. The hypothesis to be tested is H0: the model measure of performance = the system measure of performance, versus H1: the model measure of performance ≠ the system measure of performance. The test is conducted for a given sample size and level of significance α. To perform the test, a number n of statistically independent runs of the model are conducted, and an average or expected value, E(Y), for the variable of interest is produced, along with its sample standard deviation S. Then the test statistic t0 is computed for the given α, n, E(Y) and the observed value for the system μ0: $t_{0}=(E(Y)-\mu _{0})/(S/{\sqrt {n}})$ and the critical value $t_{\alpha /2,n-1}$ for α and n−1 degrees of freedom is calculated. If $\left\vert t_{0}\right\vert >t_{\alpha /2,n-1}$, reject H0: the model needs adjustment (a numerical sketch of this test is given below). There are two types of error that can occur using hypothesis testing: rejecting a valid model, called type I error or "model builder's risk", and accepting an invalid model, called type II error, β, or "model user's risk".[3] The level of significance α is equal to the probability of a type I error.[3] If α is small then rejecting the null hypothesis is a strong conclusion.[1] For example, if α = 0.05 and the null hypothesis is rejected, there is only a 0.05 probability of rejecting a model that is valid. Decreasing the probability of a type II error is very important.[1][3] The probability of correctly detecting an invalid model is 1 − β. The probability of a type II error depends on the sample size and on the actual difference between the sample value and the observed value. Increasing the sample size decreases the risk of a type II error. Model accuracy as a range A statistical technique in which the amount of model accuracy is specified as a range has recently been developed. The technique uses hypothesis testing to accept a model if the difference between a model's variable of interest and a system's variable of interest is within a specified range of accuracy.[7] A requirement is that both the system data and the model data be approximately normally, independently and identically distributed (NIID). The t-test statistic is used in this technique. If the mean of the model is μm and the mean of the system is μs, then the difference between the model and the system is D = μm − μs. The hypothesis to be tested is whether D is within the acceptable range of accuracy. Let L = the lower limit for accuracy and U = the upper limit for accuracy. Then H0: L ≤ D ≤ U versus H1: D < L or D > U is to be tested. The operating characteristic (OC) curve is the probability that the null hypothesis is accepted when it is true. The OC curve characterizes the probabilities of both type I and type II errors. Risk curves for model builder's risk and model user's risk can be developed from the OC curves.
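The promised sketch of the two-sided t-test (Python with SciPy; the model-run outputs and the system value μ0 below are invented for illustration, not taken from any real study):

```python
import math
from statistics import mean, stdev
from scipy import stats

# Hypothetical outputs (average time in line, minutes) from n independent
# model runs, and an observed system value mu0; all numbers are invented.
runs = [4.4, 5.1, 4.8, 5.6, 4.9, 5.3, 4.7, 5.0, 5.2, 4.6]
mu0 = 5.5
alpha = 0.05

n = len(runs)
e_y = mean(runs)                        # E(Y), the model's mean response
s = stdev(runs)                         # sample standard deviation S
t0 = (e_y - mu0) / (s / math.sqrt(n))   # test statistic
t_crit = stats.t.ppf(1 - alpha / 2, n - 1)

print(f"t0 = {t0:.3f}, critical value = {t_crit:.3f}")
if abs(t0) > t_crit:
    print("Reject H0: the model needs adjustment.")
else:
    print("Fail to reject H0: the mean responses are consistent.")
```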
By comparing risk curves at a fixed sample size, tradeoffs between model builder's risk and model user's risk can be seen easily.[7] If the model builder's risk, the model user's risk, and the upper and lower limits for the range of accuracy are all specified, then the sample size needed can be calculated.[7] Confidence intervals Confidence intervals can be used to evaluate whether a model is "close enough"[1] to a system for some variable of interest. The difference between the model's true mean, μ, and the known system value, μ0, is checked to see whether it is less than a value small enough that the model is valid with respect to that variable of interest. The value is denoted by the symbol ε. To perform the test, a number n of statistically independent runs of the model are conducted, and a mean or expected value, E(Y) or μ for the simulation output variable of interest Y, with a standard deviation S, is produced. A confidence level 100(1−α) is selected. An interval [a, b] is constructed by $a=E(Y)-t_{\alpha /2,n-1}S/{\sqrt {n}}\quad {\text{and}}\quad b=E(Y)+t_{\alpha /2,n-1}S/{\sqrt {n}},$ where $t_{\alpha /2,n-1}$ is the critical value from the t-distribution for the given level of significance and n−1 degrees of freedom. If |a−μ0| > ε and |b−μ0| > ε then the model needs to be calibrated, since in both cases the difference is larger than acceptable. If |a−μ0| < ε and |b−μ0| < ε then the model is acceptable, as in both cases the error is close enough. If |a−μ0| < ε and |b−μ0| > ε, or vice versa, then additional runs of the model are needed to shrink the interval. Graphical comparisons If the statistical assumptions cannot be satisfied, or there is insufficient data for the system, a graphical comparison of model outputs to system outputs can be used to make a subjective decision; however, other objective tests are preferable.[3] ASME Standards Documents and standards involving verification and validation of computational modeling and simulation are developed by the American Society of Mechanical Engineers (ASME) Verification and Validation (V&V) Committee. ASME V&V 10 provides guidance in assessing and increasing the credibility of computational solid mechanics models through the processes of verification, validation, and uncertainty quantification.[8] ASME V&V 10.1 provides a detailed example to illustrate the concepts described in ASME V&V 10.[9] ASME V&V 20 provides a detailed methodology for validating computational simulations as applied to fluid dynamics and heat transfer.[10] ASME V&V 40 provides a framework for establishing model credibility requirements for computational modeling, and presents examples specific to the medical device industry.[11] See also • Verification and validation • Software verification and validation References 1. Banks, Jerry; Carson, John S.; Nelson, Barry L.; Nicol, David M. Discrete-Event System Simulation, Fifth Edition, Upper Saddle River, Pearson Education, Inc. 2010 ISBN 0136062121 2. Schlesinger, S.; et al. (1979). "Terminology for model credibility". Simulation. 32 (3): 103–104. doi:10.1177/003754977903200304. 3. Sargent, Robert G. "Verification and Validation of Simulation Models". Proceedings of the 2011 Winter Simulation Conference. 4. Carson, John, "Model Verification and Validation". Proceedings of the 2002 Winter Simulation Conference. 5. Naylor, T. H., and J. M. Finger [1967], "Verification of Computer Simulation Models", Management Science, Vol. 2, pp. B92–B101., cited in Banks, Jerry; Carson, John S.; Nelson, Barry L.; Nicol, David M.
Discrete-Event System Simulation, Fifth Edition, Upper Saddle River, Pearson Education, Inc. 2010 p. 396. ISBN 0136062121 6. Fonseca, P. "Simulation hypotheses". In Proceedings of SIMUL 2011; 2011; pp. 114–119. https://www.researchgate.net/publication/262187532_Simulation_hypotheses_A_proposed_taxonomy_for_the_hypotheses_used_in_a_simulation_model 7. Sargent, R. G. 2010. "A New Statistical Procedure for Validation of Simulation and Stochastic Models." Technical Report SYR-EECS-2010-06, Department of Electrical Engineering and Computer Science, Syracuse University, Syracuse, New York. 8. "V&V 10 – 2006 Guide for Verification and Validation in Computational Solid Mechanics". Standards. ASME. Retrieved 2 September 2018. 9. "V&V 10.1 – 2012 An Illustration of the Concepts of Verification and Validation in Computational Solid Mechanics". Standards. ASME. Retrieved 2 September 2018. 10. "V&V 20 – 2009 Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer". Standards. ASME. Retrieved 2 September 2018. 11. "V&V 40 Industry Day". Verification and Validation Symposium. ASME. Retrieved 2 September 2018.
Verlinde algebra In mathematics, a Verlinde algebra is a finite-dimensional associative algebra introduced by Erik Verlinde (1988), with a basis of elements φλ corresponding to primary fields of a rational two-dimensional conformal field theory, whose structure constants $N_{\lambda \mu }^{\nu }$ describe fusion of primary fields. Verlinde formula In terms of the modular S-matrix, the fusion coefficients are given by[1] $N_{\lambda \mu }^{\nu }=\sum _{\sigma }{\frac {S_{\lambda \sigma }S_{\mu \sigma }S_{\sigma \nu }^{*}}{S_{0\sigma }}}$ where $S^{*}$ is the component-wise complex conjugate of $S$. Twisted equivariant K-theory If G is a compact Lie group, there is a rational conformal field theory whose primary fields correspond to the representations λ of the loop group of G at some fixed level. For this special case Freed, Hopkins & Teleman (2001) showed that the Verlinde algebra can be identified with the twisted equivariant K-theory of G. See also • Fusion rules Notes 1. Blumenhagen, Ralph (2009). Introduction to Conformal Field Theory. Plauschinn, Erik. Dordrecht: Springer. p. 143. ISBN 9783642004490. OCLC 437345787. References • Beauville, Arnaud (1996), "Conformal blocks, fusion rules and the Verlinde formula" (PDF), in Teicher, Mina (ed.), Proceedings of the Hirzebruch 65 Conference on Algebraic Geometry (Ramat Gan, 1993), Israel Math. Conf. Proc., vol. 9, Ramat Gan: Bar-Ilan Univ., pp. 75–96, arXiv:alg-geom/9405001, MR 1360497 • Bott, Raoul (1991), "On E. Verlinde's formula in the context of stable bundles", International Journal of Modern Physics A, 6 (16): 2847–2858, Bibcode:1991IJMPA...6.2847B, doi:10.1142/S0217751X91001404, ISSN 0217-751X, MR 1117752 • Faltings, Gerd (1994), "A proof for the Verlinde formula", Journal of Algebraic Geometry, 3 (2): 347–374, ISSN 1056-3911, MR 1257326 • Freed, Daniel S.; Hopkins, M.; Teleman, C. (2001), "The Verlinde algebra is twisted equivariant K-theory", Turkish Journal of Mathematics, 25 (1): 159–167, arXiv:math/0101038, Bibcode:2001math......1038F, ISSN 1300-0098, MR 1829086 • Verlinde, Erik (1988), "Fusion rules and modular transformations in 2D conformal field theory", Nuclear Physics B, 300 (3): 360–376, Bibcode:1988NuPhB.300..360V, doi:10.1016/0550-3213(88)90603-7, ISSN 0550-3213, MR 0954762 • Witten, Edward (1995), "The Verlinde algebra and the cohomology of the Grassmannian", Geometry, topology, & physics, Conf. Proc. Lecture Notes Geom. Topology, IV, Int. Press, Cambridge, MA, pp. 357–422, arXiv:hep-th/9312104, Bibcode:1993hep.th...12104W, MR 1358625 • MathOverflow discussion with a number of references.
Vermeil's theorem In differential geometry, Vermeil's theorem essentially states that the scalar curvature is the only (non-trivial) absolute invariant among those of prescribed type suitable for Albert Einstein's theory of general relativity. The theorem was proved by the German mathematician Hermann Vermeil in 1917. Standard version of the theorem The theorem states that the Ricci scalar $R$[1] is the only scalar invariant (or absolute invariant) linear in the second derivatives of the metric tensor $g_{\mu \nu }$. See also • Scalar curvature • Differential invariant • Einstein–Hilbert action • Lovelock's theorem Notes 1. Let us recall that the Ricci scalar $R$ is linear in the second derivatives of the metric tensor $g_{\mu \nu }$, quadratic in the first derivatives, and contains the inverse matrix $g^{\mu \nu },$ which is a rational function of the components $g_{\mu \nu }$. References • Vermeil, H. (1917). "Notiz über das mittlere Krümmungsmaß einer n-fach ausgedehnten Riemann'schen Mannigfaltigkeit". Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen. Mathematisch-Physikalische Klasse. 21: 334–344. • Weyl, Hermann (1922). Space, time, matter. Translated by Brose, Henry L. Courier Corporation. ISBN 0-486-60267-2. JFM 48.1059.12.
Veronese surface In mathematics, the Veronese surface is an algebraic surface in five-dimensional projective space, realized by the Veronese embedding, the embedding of the projective plane given by the complete linear system of conics. It is named after Giuseppe Veronese (1854–1917). Its generalization to higher dimension is known as the Veronese variety. The surface admits an embedding in four-dimensional projective space defined by projection from a general point in the five-dimensional space. Its general projection to three-dimensional projective space is called a Steiner surface. Definition The Veronese surface is the image of the mapping $\nu :\mathbb {P} ^{2}\to \mathbb {P} ^{5}$ given by $\nu :[x:y:z]\mapsto [x^{2}:y^{2}:z^{2}:yz:xz:xy]$ where $[x:\cdots ]$ denotes homogeneous coordinates. The map $\nu $ is known as the Veronese embedding. Motivation The Veronese surface arises naturally in the study of conics. A conic is a degree-2 plane curve, thus defined by an equation: $Ax^{2}+Bxy+Cy^{2}+Dxz+Eyz+Fz^{2}=0.$ The pairing between coefficients $(A,B,C,D,E,F)$ and variables $(x,y,z)$ is linear in the coefficients and quadratic in the variables; the Veronese map makes it linear in the coefficients and linear in the monomials. Thus for a fixed point $[x:y:z],$ the condition that a conic contains the point is a linear equation in the coefficients, which formalizes the statement that "passing through a point imposes a linear condition on conics". Veronese map The Veronese map or Veronese variety generalizes this idea to mappings of general degree d in n+1 variables. That is, the Veronese map of degree d is the map $\nu _{d}\colon \mathbb {P} ^{n}\to \mathbb {P} ^{m}$ with m given by the multiset coefficient, or more familiarly the binomial coefficient, as: $m=\left(\!\!{n+1 \choose d}\!\!\right)-1={n+d \choose d}-1.$ The map sends $[x_{0}:\ldots :x_{n}]$ to all possible monomials of total degree d (of which there are $m+1$); we have $n+1$ since there are $n+1$ variables $x_{0},\ldots ,x_{n}$ to choose from; and we subtract $1$ since the projective space $\mathbb {P} ^{m}$ has $m+1$ coordinates. The second equality shows that for fixed source dimension n, the target dimension is a polynomial in d of degree n and leading coefficient $1/n!.$ For low degree, $d=0$ is the trivial constant map to $\mathbf {P} ^{0},$ and $d=1$ is the identity map on $\mathbf {P} ^{n},$ so d is generally taken to be 2 or more. One may define the Veronese map in a coordinate-free way, as $\nu _{d}:\mathbb {P} (V)\ni [v]\mapsto [v^{d}]\in \mathbb {P} (\mathrm {Sym} ^{d}V)$ where V is any vector space of finite dimension, and $\mathrm {Sym} ^{d}V$ is its d-th symmetric power. This is homogeneous of degree d under scalar multiplication on V, and therefore passes to a mapping on the underlying projective spaces. If the vector space V is defined over a field K which does not have characteristic zero, then the definition must be altered to be understood as a mapping to the dual space of polynomials on V. This is because for fields with finite characteristic p, the pth powers of elements of V are not rational normal curves, but are of course a line. (See, for example, additive polynomial for a treatment of polynomials over a field of finite characteristic.)
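The coordinate description of the Veronese map is easy to make concrete: the image of a point is the list of all monomials of total degree d in its n+1 homogeneous coordinates. A minimal sketch (Python; the lexicographic ordering of monomials below is a choice, differing from the ordering $[x^{2}:y^{2}:z^{2}:yz:xz:xy]$ above only by a permutation of the coordinates of $\mathbb {P} ^{m}$):

```python
from itertools import combinations_with_replacement
from math import comb

def veronese(point, d):
    """Degree-d Veronese map: evaluate all monomials of total degree d
    in the homogeneous coordinates of a point of P^n."""
    n = len(point) - 1
    image = []
    for combo in combinations_with_replacement(range(n + 1), d):
        value = 1
        for i in combo:
            value *= point[i]
        image.append(value)
    assert len(image) == comb(n + d, d)  # m + 1 coordinates on P^m
    return image

# The classical Veronese surface: P^2 -> P^5 via degree-2 monomials.
# At (x:y:z) = (1:2:3) this lists x^2, xy, xz, y^2, yz, z^2.
print(veronese([1, 2, 3], 2))  # [1, 2, 3, 4, 6, 9]
```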
Rational normal curve Further information: Rational normal curve For $n=1,$ the Veronese variety is known as the rational normal curve, of which the lower-degree examples are familiar. • For $n=1,d=1$ the Veronese map is simply the identity map on the projective line. • For $n=1,d=2,$ the Veronese variety is the standard parabola $[x^{2}:xy:y^{2}],$ in affine coordinates $(x,x^{2}).$ • For $n=1,d=3,$ the Veronese variety is the twisted cubic, $[x^{3}:x^{2}y:xy^{2}:y^{3}],$ in affine coordinates $(x,x^{2},x^{3}).$ Biregular The image of a variety under the Veronese map is again a variety, rather than simply a constructible set; furthermore, these are isomorphic in the sense that the inverse map exists and is regular – the Veronese map is biregular. More precisely, the images of open sets in the Zariski topology are again open. See also • The Veronese surface is the only Severi variety of dimension 2 References • Joe Harris, Algebraic Geometry, A First Course, (1992) Springer-Verlag, New York. ISBN 0-387-97716-3
Verschiebung operator In mathematics, the Verschiebung or Verschiebung operator V is a homomorphism between affine commutative group schemes over a field of nonzero characteristic p. For finite group schemes it is the Cartier dual of the Frobenius homomorphism. It was introduced by Witt (1937) as the shift operator on Witt vectors taking (a0, a1, a2, ...) to (0, a0, a1, ...). ("Verschiebung" is German for "shift", but the term "Verschiebung" is often used for this operator even in other languages.) The Verschiebung operator V and the Frobenius operator F are related by FV = VF = [p], where [p] is the pth power homomorphism of an abelian group scheme. Examples • If G is the discrete group with n elements over the finite field Fp of order p, then the Frobenius homomorphism F is the identity homomorphism and the Verschiebung V is the homomorphism [p] (multiplication by p in the group). Its dual is the group scheme of nth roots of unity, whose Frobenius homomorphism is [p] and whose Verschiebung is the identity homomorphism. • For Witt vectors the Verschiebung takes (a0, a1, a2, ...) to (0, a0, a1, ...). • On the Hopf algebra of symmetric functions the Verschiebung Vn is the algebra endomorphism that takes the complete symmetric function hr to hr/n if n divides r and to 0 otherwise. See also • Dieudonné module References • Demazure, Michel (1972), Lectures on p-divisible groups, Lecture Notes in Mathematics, vol. 302, Berlin, New York: Springer-Verlag, doi:10.1007/BFb0060741, ISBN 978-3-540-06092-5, MR 0344261 • Witt, Ernst (1937), "Zyklische Körper und Algebren der Characteristik p vom Grad pn. Struktur diskret bewerteter perfekter Körper mit vollkommenem Restklassenkörper der Charakteristik pn", Journal für die Reine und Angewandte Mathematik (in German), 176: 126–140, doi:10.1515/crll.1937.176.126
Crossed product In mathematics, and more specifically in the theory of von Neumann algebras, a crossed product is a basic method of constructing a new von Neumann algebra from a von Neumann algebra acted on by a group. It is related to the semidirect product construction for groups. (Roughly speaking, the crossed product is the expected structure for a group ring of a semidirect product group. Therefore crossed products also have a ring-theoretic aspect. This article concentrates on an important case, where they appear in functional analysis.) Not to be confused with cross product. Motivation Recall that if we have two finite groups $G$ and N with an action of G on N, we can form the semidirect product $N\rtimes G$. This contains N as a normal subgroup, and the action of G on N is given by conjugation in the semidirect product. We can replace N by its complex group algebra C[N], and again form a product $C[N]\rtimes G$ in a similar way; this algebra is a sum of subspaces gC[N] as g runs through the elements of G, and is the group algebra of $N\rtimes G$. We can generalize this construction further by replacing C[N] by any algebra A acted on by G to get a crossed product $A\rtimes G$, which is the sum of subspaces gA and where the action of G on A is given by conjugation in the crossed product. The crossed product of a von Neumann algebra by a group G acting on it is similar, except that we have to be more careful about topologies, and need to construct a Hilbert space acted on by the crossed product. (Note that the von Neumann algebra crossed product is usually larger than the algebraic crossed product discussed above; in fact it is some sort of completion of the algebraic crossed product.) In physics, this structure appears in the presence of the so-called gauge group of the first kind. G is the gauge group, and N the "field" algebra. The observables are then defined as the fixed points of N under the action of G. A result by Doplicher, Haag and Roberts says that under some assumptions the crossed product can be recovered from the algebra of observables. Construction Suppose that A is a von Neumann algebra of operators acting on a Hilbert space H and G is a discrete group acting on A. We let K be the Hilbert space of all square-summable H-valued functions on G. There is an action of A on K given by • a(k)(g) = g−1(a)k(g) for k in K, g in G, and a in A, and there is an action of G on K given by • g(k)(h) = k(g−1h). The crossed product $A\rtimes G$ is the von Neumann algebra acting on K generated by the actions of A and G on K. It does not depend (up to isomorphism) on the choice of the Hilbert space H. This construction can be extended to work for any locally compact group G acting on any von Neumann algebra A. When $A$ is an abelian von Neumann algebra, this is the original group-measure space construction of Murray and von Neumann. Properties We let G be an infinite countable discrete group acting on the abelian von Neumann algebra A. The action is called free if A has no non-zero projections p such that some nontrivial g fixes all elements of pAp. The action is called ergodic if the only invariant projections are 0 and 1. Usually A can be identified as the abelian von Neumann algebra $L^{\infty }(X)$ of essentially bounded functions on a measure space X acted on by G, and then the action of G on X is ergodic (for any measurable invariant subset, either the subset or its complement has measure 0) if and only if the action of G on A is ergodic.
If the action of G on A is free and ergodic then the crossed product $A\rtimes G$ is a factor. Moreover: • The factor is of type I if A has a minimal projection such that 1 is the sum of the G conjugates of this projection. This corresponds to the action of G on X being transitive. Example: X is the integers, and G is the group of integers acting by translations. • The factor has type II1 if A has a faithful finite normal G-invariant trace. This corresponds to X having a finite G-invariant measure, absolutely continuous with respect to the measure on X. Example: X is the unit circle in the complex plane, and G is the group of all roots of unity. • The factor has type II∞ if it is not of types I or II1 and has a faithful semifinite normal G-invariant trace. This corresponds to X having an infinite G-invariant measure without atoms, absolutely continuous with respect to the measure on X. Example: X is the real line, and G is the group of rationals acting by translations. • The factor has type III if A has no faithful semifinite normal G-invariant trace. This corresponds to X having no non-zero absolutely continuous G-invariant measure. Example: X is the real line, and G is the group of all transformations ax+b for a and b rational, a non-zero. In particular one can construct examples of all the different types of factors as crossed products. Duality If $A$ is a von Neumann algebra on which a locally compact Abelian group $G$ acts, then $\Gamma $, the dual group of characters $\chi $ of $G$, acts by unitaries on $K$: • $(\chi \cdot k)(h)=\chi (h)k(h)$ These unitaries normalise the crossed product, defining the dual action of $\Gamma $. Together with the crossed product, they generate $A\otimes B(L^{2}(G))$, which can be identified with the iterated crossed product by the dual action, $(A\rtimes G)\rtimes \Gamma $. Under this identification, the double dual action of $G$ (the dual group of $\Gamma $) corresponds to the tensor product of the original action on $A$ and conjugation by the following unitaries on $L^{2}(G)$: • $(g\cdot f)(h)=f(hg)$ The crossed product may be identified with the fixed point algebra of the double dual action. More generally, $A$ is the fixed point algebra of $\Gamma $ in the crossed product. Similar statements hold when $G$ is replaced by a non-Abelian locally compact group or, more generally, a locally compact quantum group, a class of Hopf algebras related to von Neumann algebras. An analogous theory has also been developed for actions on C*-algebras and their crossed products. Duality first appeared for actions of the reals in the work of Connes and Takesaki on the classification of type III factors. According to Tomita–Takesaki theory, every vector which is cyclic for the factor and its commutant gives rise to a 1-parameter modular automorphism group. The corresponding crossed product is a type $II_{\infty }$ von Neumann algebra and the corresponding dual action restricts to an ergodic action of the reals on its centre, an Abelian von Neumann algebra. This ergodic flow is called the flow of weights; it is independent of the choice of cyclic vector. The Connes spectrum, a closed subgroup of the positive reals ℝ+, is obtained by applying the exponential to the kernel of this flow. • When the kernel is the whole of $R$, the factor is type $III_{1}$. • When the kernel is $(\log \lambda )Z$ for $\lambda $ in (0,1), the factor is type $III_{\lambda }$. • When the kernel is trivial, the factor is type $III_{0}$.
Connes and Haagerup proved that the Connes spectrum and the flow of weights are complete invariants of hyperfinite Type III factors. From this classification and results in ergodic theory, it is known that every infinite-dimensional hyperfinite factor has the form $L^{\infty }(X)\rtimes Z$ for some free ergodic action of $Z$. Examples • If we take $C$ to be the complex numbers, then the crossed product $C\rtimes G$ is called the von Neumann group algebra of G. • If $G$ is an infinite discrete group such that every conjugacy class has infinite order then the von Neumann group algebra is a factor of type II1. Moreover if every finite set of elements of $G$ generates a finite subgroup (or more generally if G is amenable) then the factor is the hyperfinite factor of type II1. See also • Crossed product algebra References • Takesaki, Masamichi (2002), Theory of Operator Algebras I, II, III, Berlin, New York: Springer-Verlag, ISBN 978-3-540-42248-8, ISBN 3-540-42914-X (II), ISBN 3-540-42913-1 (III) • Connes, Alain (1994), Non-commutative geometry, Boston, MA: Academic Press, ISBN 978-0-12-185860-5 • Pedersen, Gert Kjaergard (1979), C*-algebras and their automorphism groups, London Math. Soc. Monographs, vol. 14, Boston, MA: Academic Press, ISBN 978-0-12-549450-2
Isogonal figure In geometry, a polytope (e.g. a polygon or polyhedron) or a tiling is isogonal or vertex-transitive if all its vertices are equivalent under the symmetries of the figure. This implies that each vertex is surrounded by the same kinds of face in the same or reverse order, and with the same angles between corresponding faces. For graph theory, see vertex-transitive graph. Technically, one says that for any two vertices there exists a symmetry of the polytope mapping the first isometrically onto the second. Other ways of saying this are that the group of automorphisms of the polytope acts transitively on its vertices, or that the vertices lie within a single symmetry orbit. All vertices of a finite n-dimensional isogonal figure exist on an (n−1)-sphere. The term isogonal has long been used for polyhedra. Vertex-transitive is a synonym borrowed from modern ideas such as symmetry groups and graph theory. The pseudorhombicuboctahedron – which is not isogonal – demonstrates that simply asserting that "all vertices look the same" is not as restrictive as the definition used here, which involves the group of isometries preserving the polyhedron or tiling. Isogonal polygons and apeirogons All regular polygons, apeirogons and regular star polygons are isogonal. The dual of an isogonal polygon is an isotoxal polygon. Some even-sided polygons and apeirogons which alternate two edge lengths, for example a rectangle, are isogonal; isogonal apeirogons may also be skew. All planar isogonal 2n-gons have dihedral symmetry (Dn, n = 2, 3, ...; for example D2, D3, D4, D7) with reflection lines across the mid-edge points. (Illustrated examples include isogonal rectangles and crossed rectangles sharing the same vertex arrangement; an isogonal hexagram with 6 identical vertices and 2 edge lengths;[1] an isogonal convex octagon with two alternating sets of radial reflection lines; and an isogonal "star" tetradecagon with one vertex type and two edge types.[2]) Isogonal polyhedra and 2D tilings An isogonal polyhedron or 2D tiling has a single kind of vertex. An isogonal polyhedron with all regular faces is also a uniform polyhedron and can be represented by a vertex configuration notation sequencing the faces around each vertex. Geometrically distorted variations of uniform polyhedra and tilings, such as a distorted square tiling or a distorted truncated square tiling, can also be given the vertex configuration. (Examples of isogonal polyhedra, with symmetry group and vertex configuration: a distorted hexagonal prism (ditrigonal trapezoprism), D3d, order 12, 4.4.6; a distorted rhombicuboctahedron, Th, order 24, 3.4.4.4; a shallow truncated cuboctahedron, Oh, order 48, 4.6.8; and a hyper-truncated cube, Oh, order 48, 3.8.8.) Isogonal polyhedra and 2D tilings may be further classified: • Regular if it is also isohedral (face-transitive) and isotoxal (edge-transitive); this implies that every face is the same kind of regular polygon. • Quasi-regular if it is also isotoxal (edge-transitive) but not isohedral (face-transitive). • Semi-regular if every face is a regular polygon but it is not isohedral (face-transitive) or isotoxal (edge-transitive). (Definitions vary among authors; e.g. some exclude solids with dihedral symmetry, or nonconvex solids.) • Uniform if every face is a regular polygon, i.e. it is regular, quasiregular or semi-regular. • Semi-uniform if its elements are also isogonal. • Scaliform if all the edges are the same length. • Noble if it is also isohedral (face-transitive). N dimensions: Isogonal polytopes and tessellations These definitions can be extended to higher-dimensional polytopes and tessellations.
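Vertex-transitivity lends itself to direct computation in any dimension: a figure is isogonal precisely when its symmetry group has a single orbit on the vertex set. A minimal sketch for the cube (Python; for brevity the group is generated by just two rotations, which already act transitively on the cube's vertices; the full symmetry group of order 48 would only enlarge the group, not the orbit):

```python
from itertools import product

# Vertices of the cube [-1, 1]^3; the cube is isogonal, so they should
# form a single orbit under its symmetry group.
vertices = set(product((-1, 1), repeat=3))

def rot_z(p):
    """Quarter turn about the z-axis."""
    x, y, z = p
    return (-y, x, z)

def cycle(p):
    """Cyclic permutation of the coordinate axes (a rotation)."""
    x, y, z = p
    return (y, z, x)

# Closure computation: the orbit of one vertex under the generated group.
orbit, frontier = {(1, 1, 1)}, [(1, 1, 1)]
while frontier:
    p = frontier.pop()
    for g in (rot_z, cycle):
        q = g(p)
        if q not in orbit:
            orbit.add(q)
            frontier.append(q)

print(orbit == vertices)  # True: a single vertex orbit
```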
All uniform polytopes are isogonal; examples include the uniform 4-polytopes and convex uniform honeycombs. The dual of an isogonal polytope is an isohedral figure, which is transitive on its facets. k-isogonal and k-uniform figures A polytope or tiling may be called k-isogonal if its vertices form k transitivity classes. A more restrictive term, k-uniform, is defined as a k-isogonal figure constructed only from regular polygons. They can be represented visually with colors by different uniform colorings. For example, a truncated rhombic dodecahedron, made of squares and flattened hexagons, is 2-isogonal because it contains two transitivity classes of vertices; a demiregular tiling made of equilateral triangle and regular hexagonal faces is also 2-isogonal (and 2-uniform); and the 9/4 enneagram, the face of the final stellation of the icosahedron, is 2-isogonal as well. See also • Edge-transitive (Isotoxal figure) • Face-transitive (Isohedral figure) References 1. Coxeter, The Densities of the Regular Polytopes II, pp. 54–55, "hexagram" vertex figure of h{5/2,5}. 2. The Lighter Side of Mathematics: Proceedings of the Eugène Strens Memorial Conference on Recreational Mathematics and its History, (1994), Metamorphoses of polygons, Branko Grünbaum, Figure 1. Parameter t=2.0 • Peter R. Cromwell, Polyhedra, Cambridge University Press 1997, ISBN 0-521-55432-2, p. 369 Transitivity • Grünbaum, Branko; Shephard, G. C. (1987). Tilings and Patterns. W. H. Freeman and Company. ISBN 0-7167-1193-1. (p. 33 k-isogonal tiling, p. 65 k-uniform tilings) External links • Weisstein, Eric W. "Vertex-transitive graph". MathWorld. • Isogonal Kaleidoscopical Polyhedra Vladimir L. Bulatov, Physics Department, Oregon State University, Corvallis, Presented at Mosaic2000, Millennial Open Symposium on the Arts and Interdisciplinary Computing, 21–24 August 2000, Seattle, WA VRML models • Steven Dutch uses the term k-uniform for enumerating k-isogonal tilings • List of n-uniform tilings • Weisstein, Eric W. "Demiregular tessellations". MathWorld. (Also uses term k-uniform for k-isogonal)
Vertex (curve) In the geometry of plane curves, a vertex is a point where the first derivative of curvature is zero.[1] This is typically a local maximum or minimum of curvature,[2] and some authors define a vertex to be more specifically a local extremum of curvature.[3] However, other special cases may occur, for instance when the second derivative is also zero, or when the curvature is constant. For space curves, on the other hand, a vertex is a point where the torsion vanishes. Not to be confused with Vertex (geometry). Examples A hyperbola has two vertices, one on each branch; they are the closest of any two points lying on opposite branches of the hyperbola, and they lie on the principal axis. On a parabola, the sole vertex lies on the axis of symmetry; in a quadratic of the form $ax^{2}+bx+c$ it can be found by completing the square or by differentiation.[2] On an ellipse, two of the four vertices lie on the major axis and two lie on the minor axis.[4] For a circle, which has constant curvature, every point is a vertex. Cusps and osculation Vertices are points where the curve has 4-point contact with the osculating circle at that point.[5][6] In contrast, generic points on a curve typically only have 3-point contact with their osculating circle. The evolute of a curve will generically have a cusp when the curve has a vertex;[6] other, more degenerate and non-stable singularities may occur at higher-order vertices, at which the osculating circle has contact of higher order than four.[5] Although a single generic curve will not have any higher-order vertices, they will generically occur within a one-parameter family of curves, at the curve in the family for which two ordinary vertices coalesce to form a higher vertex and then annihilate. The symmetry set of a curve has endpoints at the cusps corresponding to the vertices, and the medial axis, a subset of the symmetry set, also has its endpoints in the cusps. Other properties According to the classical four-vertex theorem, every simple closed planar smooth curve must have at least four vertices.[7] A more general fact is that every simple closed space curve which lies on the boundary of a convex body, or even bounds a locally convex disk, must have four vertices.[8] Every curve of constant width must have at least six vertices.[9] If a planar curve is bilaterally symmetric, it will have a vertex at the point or points where the axis of symmetry crosses the curve. Thus, the notion of a vertex for a curve is closely related to that of an optical vertex, the point where an optical axis crosses a lens surface. Notes 1. Agoston (2005), p. 570; Gibson (2001), p. 126. 2. Gibson (2001), p. 127. 3. Fuchs & Tabachnikov (2007), p. 141. 4. Agoston (2005), p. 570; Gibson (2001), p. 127. 5. Gibson (2001), p. 126. 6. Fuchs & Tabachnikov (2007), p. 142. 7. Agoston (2005), Theorem 9.3.9, p. 570; Gibson (2001), Section 9.3, "The Four Vertex Theorem", pp. 133–136; Fuchs & Tabachnikov (2007), Theorem 10.3, p. 149. 8. Sedykh (1994); Ghomi (2015) 9. Martinez-Maure (1996); Craizer, Teixeira & Balestro (2018) References • Agoston, Max K. (2005), Computer Graphics and Geometric Modelling: Mathematics, Springer, ISBN 9781852338176. • Craizer, Marcos; Teixeira, Ralph; Balestro, Vitor (2018), "Closed cycloids in a normed plane", Monatshefte für Mathematik, 185 (1): 43–60, arXiv:1608.01651, doi:10.1007/s00605-017-1030-5, MR 3745700, S2CID 254062096. • Fuchs, D.
B.; Tabachnikov, Serge (2007), Mathematical Omnibus: Thirty Lectures on Classic Mathematics, American Mathematical Society, ISBN 9780821843161 • Ghomi, Mohammad (2015), Boundary torsion and convex caps of locally convex surfaces, arXiv:1501.07626, Bibcode:2015arXiv150107626G • Gibson, C. G. (2001), Elementary Geometry of Differentiable Curves: An Undergraduate Introduction, Cambridge University Press, ISBN 9780521011075. • Martinez-Maure, Yves (1996), "A note on the tennis ball theorem", American Mathematical Monthly, 103 (4): 338–340, doi:10.2307/2975192, JSTOR 2975192, MR 1383672. • Sedykh, V.D. (1994), "Four vertices of a convex space curve", Bull. London Math. Soc., 26 (2): 177–180, doi:10.1112/blms/26.2.177
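The defining condition, a vanishing first derivative of curvature, can be checked symbolically for the ellipse example discussed above. The following is a minimal sketch, assuming SymPy is available; the parametrization and all names are illustrative and not taken from the sources cited above.

```python
import sympy as sp

t = sp.symbols('t', real=True)
# Parametrize an ellipse with semi-axes 2 and 1.
x = 2 * sp.cos(t)
y = sp.sin(t)

# Curvature of a plane parametric curve:
# kappa = (x'*y'' - y'*x'') / (x'^2 + y'^2)^(3/2)
xp, yp = sp.diff(x, t), sp.diff(y, t)
xpp, ypp = sp.diff(xp, t), sp.diff(yp, t)
kappa = (xp * ypp - yp * xpp) / (xp**2 + yp**2)**sp.Rational(3, 2)

# Vertices occur where the first derivative of curvature vanishes.
critical = sp.solve(sp.Eq(sp.diff(kappa, t), 0), t)
print(critical)  # multiples of pi/2: the four vertices, on the major and minor axes
```

As expected from the Examples section, the critical points land on the two axes of the ellipse, two vertices on each.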
Vertex arrangement

In geometry, a vertex arrangement is a set of points in space described by their relative positions. They can be described by their use in polytopes. For the local description of faces around a vertex of a polyhedron or tiling, see vertex figure.

For example, a square vertex arrangement is understood to mean four points in a plane at equal distances and angles from a center point. Two polytopes share the same vertex arrangement if they share the same 0-skeleton. A group of polytopes that shares a vertex arrangement is called an army.

Vertex arrangement

The same set of vertices can be connected by edges in different ways. For example, the pentagon and pentagram have the same vertex arrangement, while the second connects alternate vertices.

Two polygons with the same vertex arrangement: pentagon, pentagram

A vertex arrangement is often described by the convex hull polytope which contains it. For example, the regular pentagram can be said to have a (regular) pentagonal vertex arrangement.

ABCD is a concave quadrilateral (green). Its vertex arrangement is the set {A, B, C, D}. Its convex hull is the triangle ABC (blue). The vertex arrangement of the convex hull is the set {A, B, C}, which is not the same as that of the quadrilateral; so here, the convex hull is not a way to describe the vertex arrangement.

Infinite tilings can also share common vertex arrangements. For example, this triangular lattice of points can be connected to form either isosceles triangles or rhombic faces.

Four tilings with the same vertex arrangement: lattice points, triangular tiling, rhombic tiling, zig-zag rhombic tiling, rhombille tiling

Edge arrangement

Polyhedra can also share an edge arrangement while differing in their faces. For example, the self-intersecting great dodecahedron shares its edge arrangement with the convex icosahedron:

Two polyhedra with the same edge arrangement: icosahedron (20 triangles), great dodecahedron (12 intersecting pentagons)

A group of polytopes that shares both a vertex arrangement and an edge arrangement is called a regiment.

Face arrangement

4-polytopes can also have the same face arrangement, which means they share vertex, edge, and face arrangements, but may differ in their cells. For example, of the ten nonconvex regular Schläfli-Hess polychora, there are only 7 unique face arrangements. For example, the grand stellated 120-cell and great stellated 120-cell, both with pentagrammic faces, appear visually indistinguishable without a representation of their cells:

Two (projected) polychora with the same face arrangement: grand stellated 120-cell (120 small stellated dodecahedra), great stellated 120-cell (120 great stellated dodecahedra)

Classes of similar polytopes

George Olshevsky advocates the term regiment for a set of polytopes that share an edge arrangement, and more generally n-regiment for a set of polytopes that share elements up to dimension n. Synonyms for special cases include company for a 2-regiment (sharing faces) and army for a 0-regiment (sharing vertices).

See also
• n-skeleton - a set of elements of dimension n and lower in a higher polytope.
• Vertex figure - a local arrangement of faces in a polyhedron (or arrangement of cells in a polychoron) around a single vertex.

External links
• Olshevsky, George. "Army". Glossary for Hyperspace. Archived from the original on 4 February 2007. (Same vertex arrangement)
• Olshevsky, George. "Regiment". Glossary for Hyperspace. Archived from the original on 4 February 2007. (Same vertex and edge arrangement)
• Olshevsky, George. 
"Company". Glossary for Hyperspace. Archived from the original on 4 February 2007. (Same vertex, edge and face arrangement)
Vertex configuration

In geometry, a vertex configuration[1][2][3][4] is a shorthand notation for representing the vertex figure of a polyhedron or tiling as the sequence of faces around a vertex. For uniform polyhedra there is only one vertex type and therefore the vertex configuration fully defines the polyhedron. (Chiral polyhedra exist in mirror-image pairs with the same vertex configuration.)

Icosidodecahedron: vertex figure represented as 3.5.3.5 or $(3.5)^{2}$

A vertex configuration is given as a sequence of numbers representing the number of sides of the faces going around the vertex. The notation "a.b.c" describes a vertex that has 3 faces around it, faces with a, b, and c sides. For example, "3.5.3.5" indicates a vertex belonging to 4 faces, alternating triangles and pentagons. This vertex configuration defines the vertex-transitive icosidodecahedron. The notation is cyclic and therefore is equivalent with different starting points, so 3.5.3.5 is the same as 5.3.5.3. The order is important, so 3.3.5.5 is different from 3.5.3.5 (the first has two triangles followed by two pentagons). Repeated elements can be collected as exponents, so this example is also represented as $(3.5)^{2}$.

It has variously been called a vertex description,[5][6][7] vertex type,[8][9] vertex symbol,[10][11] vertex arrangement,[12] vertex pattern,[13] or face-vector.[14] It is also called a Cundy and Rollett symbol for its usage for the Archimedean solids in their 1952 book Mathematical Models.[15][16][17]

Vertex figures

A vertex configuration can also be represented as a polygonal vertex figure showing the faces around the vertex. This vertex figure has a 3-dimensional structure since the faces are not in the same plane for polyhedra, but for vertex-uniform polyhedra all the neighboring vertices are in the same plane and so this plane projection can be used to visually represent the vertex configuration.

Variations and uses

Regular vertex figure nets, {p,q} = $p^{q}$: {3,3} = $3^{3}$, defect 180°; {3,4} = $3^{4}$, defect 120°; {3,5} = $3^{5}$, defect 60°; {3,6} = $3^{6}$, defect 0°; {4,3} = $4^{3}$, defect 90°; {4,4} = $4^{4}$, defect 0°; {5,3} = $5^{3}$, defect 36°; {6,3} = $6^{3}$, defect 0°

A vertex needs at least 3 faces, and an angle defect. A 0° angle defect will fill the Euclidean plane with a regular tiling. By Descartes' theorem, the number of vertices is 720°/defect (4π radians/defect).

Different notations are used, sometimes with a comma (,) and sometimes a period (.) separator. The period operator is useful because it looks like a product and an exponent notation can be used. For example, 3.5.3.5 is sometimes written as $(3.5)^{2}$.

The notation can also be considered an expansive form of the simple Schläfli symbol for regular polyhedra. The Schläfli notation {p,q} means q p-gons around each vertex. So {p,q} can be written as p.p.p... (q times) or $p^{q}$. For example, an icosahedron is {3,5} = 3.3.3.3.3 or $3^{5}$.

This notation applies to polygonal tilings as well as polyhedra. A planar vertex configuration denotes a uniform tiling just like a nonplanar vertex configuration denotes a uniform polyhedron.

The notation is ambiguous for chiral forms. For example, the snub cube has clockwise and counterclockwise forms which are identical across mirror images. Both have a 3.3.3.3.4 vertex configuration.

Star polygons

The notation also applies for nonconvex regular faces, the star polygons. For example, a pentagram has the symbol {5/2}, meaning it has 5 sides going around the centre twice.

For example, there are 4 regular star polyhedra with regular polygon or star polygon vertex figures. 
The small stellated dodecahedron has the Schläfli symbol {5/2,5}, which expands to an explicit vertex configuration 5/2.5/2.5/2.5/2.5/2, or combined as $(5/2)^{5}$. The great stellated dodecahedron, {5/2,3}, has a triangular vertex figure and configuration (5/2.5/2.5/2) or $(5/2)^{3}$. The great dodecahedron, {5,5/2}, has a pentagrammic vertex figure, with vertex configuration (5.5.5.5.5)/2 or $(5^{5})/2$. A great icosahedron, {3,5/2}, also has a pentagrammic vertex figure, with vertex configuration (3.3.3.3.3)/2 or $(3^{5})/2$.

{5/2,5} = $(5/2)^{5}$; {5/2,3} = $(5/2)^{3}$; $3^{4}.5/2$; $3^{4}.5/3$; $(3^{4}.5/2)/2$; {5,5/2} = $(5^{5})/2$; {3,5/2} = $(3^{5})/2$; V.$3^{4}.5/2$; V$3^{4}.5/3$; V$(3^{4}.5/2)/2$

Inverted polygons

Faces on a vertex figure are considered to progress in one direction. Some uniform polyhedra have vertex figures with inversions where the faces progress retrograde. A vertex figure represents this in the star polygon notation of sides p/q such that p<2q, where p is the number of sides and q the number of turns around a circle. For example, "3/2" means a triangle that has vertices that go around twice, which is the same as backwards once. Similarly, "5/3" is a backwards pentagram 5/2.

All uniform vertex configurations of regular convex polygons

See also: Archimedean solid § Classification, Tiling by regular polygons § Combinations of regular polygons that can meet at a vertex, and Uniform tiling § Expanded lists of uniform tilings

Semiregular polyhedra have vertex configurations with positive angle defect.

NOTE: The vertex figure can represent a regular or semiregular tiling on the plane if its defect is zero. It can represent a tiling of the hyperbolic plane if its defect is negative.

For uniform polyhedra, the angle defect can be used to compute the number of vertices. Descartes' theorem states that all the angle defects in a topological sphere must sum to 4π radians or 720 degrees. Since uniform polyhedra have all identical vertices, this relation allows us to compute the number of vertices, which is 4π/defect or 720/defect. Example: A truncated cube 3.8.8 has an angle defect of 30 degrees. Therefore, it has 720/30 = 24 vertices. (This arithmetic is mechanized in the short sketch at the end of this entry.) In particular it follows that {a,b} has 4 / (2 − b(1 − 2/a)) vertices.

Every enumerated vertex configuration potentially uniquely defines a semiregular polyhedron. However, not all configurations are possible. Topological requirements limit existence. Specifically, p.q.r implies that a p-gon is surrounded by alternating q-gons and r-gons, so either p is even or q equals r. Similarly, q is even or p equals r, and r is even or p equals q. Therefore, potentially possible triples are 3.3.3, 3.4.4, 3.6.6, 3.8.8, 3.10.10, 3.12.12, 4.4.n (for any n>2), 4.6.6, 4.6.8, 4.6.10, 4.6.12, 4.8.8, 5.5.5, 5.6.6, 6.6.6. In fact, all these configurations with three faces meeting at each vertex turn out to exist.

The number in parentheses is the number of vertices, determined by the angle defect.

Triples
• Platonic solids 3.3.3 (4), 4.4.4 (8), 5.5.5 (20)
• prisms 3.4.4 (6), 4.4.4 (8; also listed above), 4.4.n (2n)
• Archimedean solids 3.6.6 (12), 3.8.8 (24), 3.10.10 (60), 4.6.6 (24), 4.6.8 (48), 4.6.10 (120), 5.6.6 (60). 
• regular tiling 6.6.6
• semiregular tilings 3.12.12, 4.6.12, 4.8.8

Quadruples
• Platonic solid 3.3.3.3 (6)
• antiprisms 3.3.3.3 (6; also listed above), 3.3.3.n (2n)
• Archimedean solids 3.4.3.4 (12), 3.5.3.5 (30), 3.4.4.4 (24), 3.4.5.4 (60)
• regular tiling 4.4.4.4
• semiregular tilings 3.6.3.6, 3.4.6.4

Quintuples
• Platonic solid 3.3.3.3.3 (12)
• Archimedean solids 3.3.3.3.4 (24), 3.3.3.3.5 (60) (both chiral)
• semiregular tilings 3.3.3.3.6 (chiral), 3.3.3.4.4, 3.3.4.3.4 (note that the two different orders of the same numbers give two different patterns)

Sextuples
• regular tiling 3.3.3.3.3.3

Face configuration

The uniform dual or Catalan solids, including the bipyramids and trapezohedra, are vertically-regular (face-transitive) and so they can be identified by a similar notation which is sometimes called face configuration.[3] Cundy and Rollett prefixed these dual symbols by a V. In contrast, Tilings and Patterns uses square brackets around the symbol for isohedral tilings. This notation represents a sequential count of the number of faces that exist at each vertex around a face.[18] For example, V3.4.3.4 or V$(3.4)^{2}$ represents the rhombic dodecahedron which is face-transitive: every face is a rhombus, and alternating vertices of the rhombus contain 3 or 4 faces each.

Notes
1. Uniform Solution for Uniform Polyhedra Archived 2015-11-27 at the Wayback Machine (1993)
2. The Uniform Polyhedra Roman E. Maeder (1995)
3. Crystallography of Quasicrystals: Concepts, Methods and Structures by Walter Steurer, Sofia Deloudi, (2009) pp. 18–20 and 51–53
4. Physical Metallurgy: 3-Volume Set, Volume 1 edited by David E. Laughlin, (2014) pp. 16–20
5. Archimedean Polyhedra Steven Dutch
6. Uniform Polyhedra Jim McNeill
7. Uniform Polyhedra and their Duals Robert Webb
8. Symmetry-type graphs of Platonic and Archimedean solids, Jurij Kovič, (2011)
9. 3. General Theorems: Regular and Semi-Regular Tilings Kevin Mitchell, 1995
10. Resources for Teaching Discrete Mathematics: Classroom Projects, History, modules, and articles, edited by Brian Hopkins
11. Vertex Symbol Robert Whittaker
12. Structure and Form in Design: Critical Ideas for Creative Practice By Michael Hann
13. Symmetry-type graphs of Platonic and Archimedean solids Jurij Kovič
14. Deza, Michel; Shtogrin, Mikhail (1999). "Uniform Partitions of 3-space, their Relatives and Embedding". arXiv:math/9906034. Bibcode:1999math......6034D.
15. Weisstein, Eric W. "Archimedean solid". MathWorld.
16. Divided Spheres: Geodesics and the Orderly Subdivision of the Sphere 6.4.1 Cundy-Rollett symbol, p. 164
17. Laughlin (2014), p. 16
18. Cundy and Rollett (1952)

References
• Cundy, H. and Rollett, A., Mathematical Models (1952), (3rd edition, 1989, Stradbroke, England: Tarquin Pub.), 3.7 The Archimedean Polyhedra. Pp. 101–115, pp. 118–119 Table I, Nets of Archimedean Duals, V.a.b.c... as vertically-regular symbols.
• Peter Cromwell, Polyhedra, Cambridge University Press (1977) The Archimedean solids. Pp. 156–167.
• Williams, Robert (1979). The Geometrical Foundation of Natural Structure: A Source Book of Design. Dover Publications, Inc. ISBN 0-486-23729-X. Uses Cundy-Rollett symbol.
• Grünbaum, Branko; Shephard, G. C. (1987). Tilings and Patterns. W. H. Freeman and Company. ISBN 0-7167-1193-1. Pp. 58–64, Tilings of regular polygons a.b.c.... (Tilings by regular polygons and star polygons) pp. 95–97, 176, 283, 614–620, Monohedral tiling symbol [v1.v2. ... .vr]. pp. 632–642 hollow tilings. 
• The Symmetries of Things 2008, John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, ISBN 978-1-56881-220-5 (p. 289 Vertex figures, uses comma separator, for Archimedean solids and tilings).

External links
• Consistent Vertex Descriptions Stella (software), Robert Webb
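The angle-defect arithmetic used in the vertex-count example above (Descartes' theorem: 720°/defect vertices) is easy to mechanize. A minimal sketch in plain Python with exact rational arithmetic; the helper names are illustrative, and the computation applies only to configurations of regular convex faces.

```python
from fractions import Fraction

def interior_angle(p):
    """Interior angle of a regular p-gon, in degrees."""
    return Fraction(180) * (p - 2) / p

def angle_defect(config):
    """Angle defect, in degrees, of a vertex configuration such as [3, 8, 8]."""
    return Fraction(360) - sum(interior_angle(p) for p in config)

def vertex_count(config):
    """Number of vertices of the corresponding polyhedron: 720 / defect."""
    return Fraction(720) / angle_defect(config)

print(angle_defect([3, 8, 8]))   # 30: the truncated cube 3.8.8
print(vertex_count([3, 8, 8]))   # 24, matching the example above
print(vertex_count([4, 4, 4]))   # 8: the cube {4,3}
print(angle_defect([6, 6, 6]))   # 0: the regular hexagonal tiling of the plane
```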
Vertex cover

In graph theory, a vertex cover (sometimes node cover) of a graph is a set of vertices that includes at least one endpoint of every edge of the graph.

In computer science, the problem of finding a minimum vertex cover is a classical optimization problem. It is NP-hard, so it cannot be solved by a polynomial-time algorithm if P ≠ NP. Moreover, it is hard to approximate – it cannot be approximated up to a factor smaller than 2 if the unique games conjecture is true. On the other hand, it has several simple 2-factor approximations. It is a typical example of an NP-hard optimization problem that has an approximation algorithm. Its decision version, the vertex cover problem, was one of Karp's 21 NP-complete problems and is therefore a classical NP-complete problem in computational complexity theory. Furthermore, the vertex cover problem is fixed-parameter tractable and a central problem in parameterized complexity theory.

The minimum vertex cover problem can be formulated as a half-integral linear program whose dual linear program is the maximum matching problem. Vertex cover problems have been generalized to hypergraphs; see Vertex cover in hypergraphs.

Covering/packing-problem pairs:
• Minimum set cover / Maximum set packing
• Minimum edge cover / Maximum matching
• Minimum vertex cover / Maximum independent set
• Bin covering / Bin packing
• Polygon covering / Rectangle packing

Definition

Formally, a vertex cover $V'$ of an undirected graph $G=(V,E)$ is a subset of $V$ such that $uv\in E\Rightarrow u\in V'\lor v\in V'$, that is to say, it is a set of vertices $V'$ where every edge has at least one endpoint in the vertex cover $V'$. Such a set is said to cover the edges of $G$. The upper figure shows two examples of vertex covers, with some vertex cover $V'$ marked in red.

A minimum vertex cover is a vertex cover of smallest possible size. The vertex cover number $\tau $ is the size of a minimum vertex cover, i.e. $\tau =|V'|$. The lower figure shows examples of minimum vertex covers in the previous graphs.

Examples
• The set of all vertices is a vertex cover.
• The endpoints of any maximal matching form a vertex cover.
• The complete bipartite graph $K_{m,n}$ has a minimum vertex cover of size $\tau (K_{m,n})=\min\{\,m,n\,\}$.

Properties
• A set of vertices is a vertex cover if and only if its complement is an independent set.
• Consequently, the number of vertices of a graph is equal to its minimum vertex cover number plus the size of a maximum independent set (Gallai 1959).

Computational problem

The minimum vertex cover problem is the optimization problem of finding a smallest vertex cover in a given graph.
INSTANCE: Graph $G$
OUTPUT: Smallest number $k$ such that $G$ has a vertex cover of size $k$.

If the problem is stated as a decision problem, it is called the vertex cover problem:
INSTANCE: Graph $G$ and positive integer $k$.
QUESTION: Does $G$ have a vertex cover of size at most $k$?

The vertex cover problem is an NP-complete problem: it was one of Karp's 21 NP-complete problems. It is often used in computational complexity theory as a starting point for NP-hardness proofs.

ILP formulation

Assume that every vertex has an associated cost of $c(v)\geq 0$. The (weighted) minimum vertex cover problem can be formulated as the following integer linear program (ILP).[1]
minimize $\textstyle \sum _{v\in V}c(v)x_{v}$ (minimize the total cost)
subject to $x_{u}+x_{v}\geq 1$ for all $\{u,v\}\in E$ (cover every edge of the graph),
$x_{v}\in \{0,1\}$ for all $v\in V$. 
(every vertex is either in the vertex cover or not)

This ILP belongs to the more general class of ILPs for covering problems. The integrality gap of this ILP is $2$, so its relaxation (allowing each variable to be in the interval from 0 to 1, rather than requiring the variables to be only 0 or 1) gives a factor-$2$ approximation algorithm for the minimum vertex cover problem. Furthermore, the linear programming relaxation of that ILP is half-integral, that is, there exists an optimal solution for which each entry $x_{v}$ is either 0, 1/2, or 1. A 2-approximate vertex cover can be obtained from this fractional solution by selecting the subset of vertices whose variables are nonzero.

Exact evaluation

The decision variant of the vertex cover problem is NP-complete, which means it is unlikely that there is an efficient algorithm to solve it exactly for arbitrary graphs. NP-completeness can be proven by reduction from 3-satisfiability or, as Karp did, by reduction from the clique problem. Vertex cover remains NP-complete even in cubic graphs[2] and even in planar graphs of degree at most 3.[3]

For bipartite graphs, the equivalence between vertex cover and maximum matching described by Kőnig's theorem allows the bipartite vertex cover problem to be solved in polynomial time.

For tree graphs, an algorithm finds a minimum vertex cover in polynomial time by finding the first leaf in the tree and adding its parent to the cover, then deleting the leaf and parent and all associated edges and continuing repeatedly until no edges remain in the tree.

Fixed-parameter tractability

An exhaustive search algorithm can solve the problem in time $2^{k}n^{O(1)}$, where k is the size of the vertex cover. Vertex cover is therefore fixed-parameter tractable, and if we are only interested in small k, we can solve the problem in polynomial time. One algorithmic technique that works here is called the bounded search tree algorithm, and its idea is to repeatedly choose some vertex and recursively branch, with two cases at each step: place either the current vertex or all its neighbours into the vertex cover. The algorithm for solving vertex cover that achieves the best asymptotic dependence on the parameter runs in time $O(1.2738^{k}+(k\cdot n))$.[4] The klam value of this time bound (an estimate for the largest parameter value that could be solved in a reasonable amount of time) is approximately 190. That is, unless additional algorithmic improvements can be found, this algorithm is suitable only for instances whose vertex cover number is 190 or less. Under reasonable complexity-theoretic assumptions, namely the exponential time hypothesis, the running time cannot be improved to $2^{o(k)}$, even when $n$ is $O(k)$.

However, for planar graphs, and more generally, for graphs excluding some fixed graph as a minor, a vertex cover of size k can be found in time $2^{O({\sqrt {k}})}n^{O(1)}$, i.e., the problem is subexponential fixed-parameter tractable.[5] This algorithm is again optimal, in the sense that, under the exponential time hypothesis, no algorithm can solve vertex cover on planar graphs in time $2^{o({\sqrt {k}})}n^{O(1)}$.[6]

Approximate evaluation

One can find a factor-2 approximation by repeatedly taking both endpoints of an edge into the vertex cover, then removing them from the graph. Put otherwise, we find a maximal matching M with a greedy algorithm and construct a vertex cover C that consists of all endpoints of the edges in M. 
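This matching-based construction translates directly into code. The sketch below is a minimal plain-Python version (a graph is given simply as a list of edges; all names are illustrative): one scan over the edges builds a maximal matching greedily, and the cover consists of the matched endpoints.

```python
def approx_vertex_cover(edges):
    """2-approximate vertex cover: endpoints of a greedily built maximal matching."""
    cover = set()
    for u, v in edges:
        # Take the edge into the matching only if neither endpoint is matched yet.
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# A 5-cycle: the optimum cover has 3 vertices; the approximation returns 4.
cycle5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(approx_vertex_cover(cycle5))  # {0, 1, 2, 3}
```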
In the following figure, a maximal matching M is marked with red, and the vertex cover C is marked with blue.

The set C constructed this way is a vertex cover: suppose that an edge e is not covered by C; then M ∪ {e} is a matching and e ∉ M, contradicting the assumption that M is maximal. Furthermore, if e = {u, v} ∈ M, then any vertex cover – including an optimal vertex cover – must contain u or v (or both); otherwise the edge e is not covered. That is, an optimal cover contains at least one endpoint of each edge in M; in total, the set C is at most 2 times as large as the optimal vertex cover. This simple algorithm was discovered independently by Fanica Gavril and Mihalis Yannakakis.[7]

More involved techniques show that there are approximation algorithms with a slightly better approximation factor. For example, an approximation algorithm with an approximation factor of $ 2-\Theta \left(1/{\sqrt {\log |V|}}\right)$ is known.[8] The problem can be approximated with an approximation factor $2/(1+\delta )$ in $\delta$-dense graphs.[9]

Inapproximability

No better constant-factor approximation algorithm than the above one is known. The minimum vertex cover problem is APX-complete, that is, it cannot be approximated arbitrarily well unless P = NP. Using techniques from the PCP theorem, Dinur and Safra proved in 2005 that minimum vertex cover cannot be approximated within a factor of 1.3606 for any sufficiently large vertex degree unless P = NP.[10] Later, the factor was improved to ${\sqrt {2}}-\epsilon $ for any $\epsilon >0$.[11][12] Moreover, if the unique games conjecture is true then minimum vertex cover cannot be approximated within any constant factor better than 2.[13]

Although finding the minimum-size vertex cover is equivalent to finding the maximum-size independent set, as described above, the two problems are not equivalent in an approximation-preserving way: the Independent Set problem has no constant-factor approximation unless P = NP.

Pseudocode

APPROXIMATION-VERTEX-COVER(G)
  C = ∅
  E' = G.E
  while E' ≠ ∅:
    let (u, v) be an arbitrary edge of E'
    C = C ∪ {u, v}
    remove from E' every edge incident on either u or v
  return C
[14][15]

Applications

Vertex cover optimization serves as a model for many real-world and theoretical problems. For example, a commercial establishment interested in installing the fewest possible closed circuit cameras covering all hallways (edges) connecting all rooms (nodes) on a floor might model the objective as a vertex cover minimization problem. The problem has also been used to model the elimination of repetitive DNA sequences for synthetic biology and metabolic engineering applications.[16][17]

Notes
1. Vazirani 2003, pp. 121–122
2. Garey, Johnson & Stockmeyer 1974
3. Garey & Johnson 1977; Garey & Johnson 1979, pp. 190 and 195.
4. Chen, Kanj & Xia 2006
5. Demaine et al. 2005
6. Flum & Grohe (2006, p. 437)
7. Papadimitriou & Steiglitz 1998, p. 432, mentions both Gavril and Yannakakis. Garey & Johnson 1979, p. 134, cites Gavril.
8. Karakostas 2009
9. Karpinski & Zelikovsky 1998
10. Dinur & Safra 2005
11. Khot, Minzer & Safra 2017
12. Dinur et al. 2018
13. Khot & Regev 2008
14. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001) [1990]. "Section 35.1: The vertex-cover problem". Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp. 1024–1027. ISBN 0-262-03293-7.
15. 
Chakrabarti, Amit (Winter 2005). "Approximation Algorithms: Vertex Cover" (PDF). Computer Science 105. Dartmouth College. Retrieved 21 February 2005. 16. Hossain, Ayaan; Lopez, Eriberto; Halper, Sean M.; Cetnar, Daniel P.; Reis, Alexander C.; Strickland, Devin; Klavins, Eric; Salis, Howard M. (2020-07-13). "Automated design of thousands of nonrepetitive parts for engineering stable genetic systems". Nature Biotechnology. 38 (12): 1466–1475. doi:10.1038/s41587-020-0584-2. ISSN 1087-0156. PMID 32661437. S2CID 220506228. 17. Reis, Alexander C.; Halper, Sean M.; Vezeau, Grace E.; Cetnar, Daniel P.; Hossain, Ayaan; Clauer, Phillip R.; Salis, Howard M. (November 2019). "Simultaneous repression of multiple bacterial genes using nonrepetitive extra-long sgRNA arrays". Nature Biotechnology. 37 (11): 1294–1301. doi:10.1038/s41587-019-0286-9. ISSN 1546-1696. OSTI 1569832. PMID 31591552. S2CID 203852115. References • Chen, Jianer; Kanj, Iyad A.; Xia, Ge (2006). "Improved Parameterized Upper Bounds for Vertex Cover". Mathematical Foundations of Computer Science 2006: 31st International Symposium, MFCS 2006, Stará Lesná, Slovakia, August 28-September 1, 2006, Proceedings (PDF). Lecture Notes in Computer Science. Vol. 4162. Springer-Verlag. pp. 238–249. doi:10.1007/11821069_21. ISBN 978-3-540-37791-7. • Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). Introduction to Algorithms. Cambridge, Mass.: MIT Press and McGraw-Hill. pp. 1024–1027. ISBN 0-262-03293-7. • Demaine, Erik; Fomin, Fedor V.; Hajiaghayi, Mohammad Taghi; Thilikos, Dimitrios M. (2005). "Subexponential parameterized algorithms on bounded-genus graphs and H-minor-free graphs". Journal of the ACM. 52 (6): 866–893. doi:10.1145/1101821.1101823. S2CID 6238832. Retrieved 2010-03-05. • Dinur, Irit; Safra, Samuel (2005). "On the hardness of approximating minimum vertex cover". Annals of Mathematics. 162 (1): 439–485. CiteSeerX 10.1.1.125.334. doi:10.4007/annals.2005.162.439. • Flum, Jörg; Grohe, Martin (2006). Parameterized Complexity Theory. Springer. ISBN 978-3-540-29952-3. Retrieved 2010-03-05. • Garey, Michael R.; Johnson, David S. (1977). "The rectilinear Steiner tree problem is NP-complete". SIAM Journal on Applied Mathematics. 32 (4): 826–834. doi:10.1137/0132071. • Garey, Michael R.; Johnson, David S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman. ISBN 0-7167-1045-5. A1.1: GT1, pg.190. • Garey, Michael R.; Johnson, David S.; Stockmeyer, Larry (1974). "Some simplified NP-complete problems". Proceedings of the Sixth Annual ACM Symposium on Theory of Computing. pp. 47–63. doi:10.1145/800119.803884. • Gallai, Tibor (1959). "Über extreme Punkt- und Kantenmengen". Ann. Univ. Sci. Budapest, Eötvös Sect. Math. 2: 133–138. • Karakostas, George (November 2009). "A better approximation ratio for the vertex cover problem" (PDF). ACM Transactions on Algorithms. 5 (4): 41:1–41:8. CiteSeerX 10.1.1.649.7407. doi:10.1145/1597036.1597045. S2CID 2525818. ECCC TR04-084. • Karpinski, Marek; Zelikovsky, Alexander (1998). "Approximating dense cases of covering problems". Proceedings of the DIMACS Workshop on Network Design: Connectivity and Facilities Location. DIMACS Series in Discrete Mathematics and Theoretical Computer Science. Vol. 40. American Mathematical Society. pp. 169–178. • Khot, Subhash; Regev, Oded (2008). "Vertex cover might be hard to approximate to within 2−ε". Journal of Computer and System Sciences. 74 (3): 335–349. doi:10.1016/j.jcss.2007.06.019. 
• Khot, Subhash; Minzer, Dor; Safra, Muli (2018), "Pseudorandom Sets in Grassmann Graph Have Near-Perfect Expansion", 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), pp. 592–601, doi:10.1109/FOCS.2018.00062, ISBN 978-1-5386-4230-6, S2CID 3688775 • O'Callahan, Robert; Choi, Jong-Deok (2003). "Hybrid dynamic data race detection". ACM SIGPLAN Notices. 38 (10): 167–178. doi:10.1145/966049.781528. • Papadimitriou, Christos H.; Steiglitz, Kenneth (1998). Combinatorial Optimization: Algorithms and Complexity. Dover. • Vazirani, Vijay V. (2003). Approximation Algorithms. Springer-Verlag. ISBN 978-3-662-04565-7. External links Wikimedia Commons has media related to Vertex cover problem. • Weisstein, Eric W. "Vertex Cover". MathWorld. • Weisstein, Eric W. "Minimum Vertex Cover". MathWorld. • Weisstein, Eric W. "Vertex Cover Number". MathWorld. • River Crossings (and Alcuin Numbers) – Numberphile
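The bounded-search-tree idea from the fixed-parameter tractability section can also be sketched compactly. The version below branches on the two endpoints of an uncovered edge, a simpler rule than the vertex-or-all-neighbours branching described above but with the same $2^{k}$ behaviour; plain Python, all names illustrative.

```python
def has_vertex_cover(edges, k):
    """Decide whether the graph given as an edge list has a vertex cover of size <= k.

    Classic 2^k branching: any cover must contain u or v for each edge (u, v),
    so pick one uncovered edge and recurse on both choices with budget k - 1.
    """
    if not edges:
        return True   # no edges left: the chosen vertices cover everything
    if k == 0:
        return False  # edges remain but the budget is spent
    u, v = edges[0]
    for w in (u, v):
        remaining = [(a, b) for (a, b) in edges if w not in (a, b)]
        if has_vertex_cover(remaining, k - 1):
            return True
    return False

cycle5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(has_vertex_cover(cycle5, 2))  # False: a 5-cycle needs 3 vertices
print(has_vertex_cover(cycle5, 3))  # True
```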
Vertex enumeration problem

In mathematics, the vertex enumeration problem for a polytope, a polyhedral cell complex, a hyperplane arrangement, or some other object of discrete geometry, is the problem of determining the object's vertices given some formal representation of the object. A classical example is the problem of enumerating the vertices of a convex polytope specified by a set of linear inequalities:[1] $Ax\leq b$ where A is an m×n matrix, x is an n×1 column vector of variables, and b is an m×1 column vector of constants. The inverse (dual) problem of finding the bounding inequalities given the vertices is called facet enumeration (see convex hull algorithms).

Computational complexity

The computational complexity of the problem is a subject of research in computer science. For unbounded polyhedra, the problem is known to be NP-hard; more precisely, there is no algorithm that runs in polynomial time in the combined input-output size, unless P=NP.[2]

A 1992 article by David Avis and Komei Fukuda[3] presents a reverse-search algorithm which finds the v vertices of a polytope defined by a nondegenerate system of n inequalities in d dimensions (or, dually, the v facets of the convex hull of n points in d dimensions, where each facet contains exactly d given points) in time O(ndv) and space O(nd). The v vertices in a simple arrangement of n hyperplanes in d dimensions can be found in O(n²dv) time and O(nd) space complexity. The Avis–Fukuda algorithm adapted the criss-cross algorithm for oriented matroids.

Notes
1. Eric W. Weisstein CRC Concise Encyclopedia of Mathematics, 2002, ISBN 1-58488-347-2, p. 3154, article "vertex enumeration"
2. Leonid Khachiyan; Endre Boros; Konrad Borys; Khaled Elbassioni; Vladimir Gurvich (March 2008). "Generating All Vertices of a Polyhedron Is Hard". Discrete and Computational Geometry. 39 (1–3): 174–190. doi:10.1007/s00454-008-9050-5.
3. David Avis; Komei Fukuda (December 1992). "A pivoting algorithm for convex hulls and vertex enumeration of arrangements and polyhedra". Discrete and Computational Geometry. 8 (1): 295–313. doi:10.1007/BF02293050.

References
• Avis, David; Fukuda, Komei (December 1992). "A pivoting algorithm for convex hulls and vertex enumeration of arrangements and polyhedra". Discrete and Computational Geometry. 8 (1): 295–313. doi:10.1007/BF02293050. MR 1174359.
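For small instances, the inequality description itself suggests a brute-force enumeration: every vertex of $\{x:Ax\leq b\}$ is the unique solution of some d of the m inequalities taken with equality. The sketch below (assuming NumPy; exponential in m, so purely illustrative next to the reverse-search algorithm described above) recovers the corners of the unit square.

```python
from itertools import combinations
import numpy as np

def enumerate_vertices(A, b, tol=1e-9):
    """Naive vertex enumeration for {x : Ax <= b}: solve each d x d subsystem
    of tight constraints and keep the feasible solutions. Exponentially many
    subsystems; degenerate polytopes may yield repeated vertices."""
    m, d = A.shape
    vertices = []
    for rows in combinations(range(m), d):
        sub_A, sub_b = A[list(rows)], b[list(rows)]
        try:
            x = np.linalg.solve(sub_A, sub_b)
        except np.linalg.LinAlgError:
            continue  # the chosen hyperplanes are not in general position
        if np.all(A @ x <= b + tol):
            vertices.append(x)
    return vertices

# Unit square: 0 <= x <= 1, 0 <= y <= 1, written as Ax <= b.
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
b = np.array([1, 0, 1, 0], dtype=float)
print(enumerate_vertices(A, b))  # the four corners of the square
```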
Vertex figure

In geometry, a vertex figure, broadly speaking, is the figure exposed when a corner of a polyhedron or polytope is sliced off.

Definitions

Take some corner or vertex of a polyhedron. Mark a point somewhere along each connected edge. Draw lines across the connected faces, joining adjacent points around the face. When done, these lines form a complete circuit, i.e. a polygon, around the vertex. This polygon is the vertex figure.

More precise formal definitions can vary quite widely, according to circumstance. For example, Coxeter (e.g. 1948, 1954) varies his definition as convenient for the current area of discussion. Most of the following definitions of a vertex figure apply equally well to infinite tilings or, by extension, to space-filling tessellations with polytope cells and other higher-dimensional polytopes.

As a flat slice

Make a slice through the corner of the polyhedron, cutting through all the edges connected to the vertex. The cut surface is the vertex figure. This is perhaps the most common approach, and the most easily understood. Different authors make the slice in different places. Wenninger (2003) cuts each edge a unit distance from the vertex, as does Coxeter (1948). For uniform polyhedra the Dorman Luke construction cuts each connected edge at its midpoint. Other authors make the cut through the vertex at the other end of each edge.[1][2]

For an irregular polyhedron, cutting all edges incident to a given vertex at equal distances from the vertex may produce a figure that does not lie in a plane. A more general approach, valid for arbitrary convex polyhedra, is to make the cut along any plane which separates the given vertex from all the other vertices, but is otherwise arbitrary. This construction determines the combinatorial structure of the vertex figure, similar to a set of connected vertices (see below), but not its precise geometry; it may be generalized to convex polytopes in any dimension. However, for non-convex polyhedra, there may not exist a plane near the vertex that cuts all of the faces incident to the vertex.

As a spherical polygon

Cromwell (1999) forms the vertex figure by intersecting the polyhedron with a sphere centered at the vertex, small enough that it intersects only edges and faces incident to the vertex. This can be visualized as making a spherical cut or scoop, centered on the vertex. The cut surface or vertex figure is thus a spherical polygon marked on this sphere. One advantage of this method is that the shape of the vertex figure is fixed (up to the scale of the sphere), whereas the method of intersecting with a plane can produce different shapes depending on the angle of the plane. Additionally, this method works for non-convex polyhedra.

As the set of connected vertices

Many combinatorial and computational approaches (e.g. Skilling, 1975) treat a vertex figure as the ordered (or partially ordered) set of points of all the neighboring (connected via an edge) vertices to the given vertex.

Abstract definition

In the theory of abstract polytopes, the vertex figure at a given vertex V comprises all the elements which are incident on the vertex: edges, faces, etc. More formally, it is the (n−1)-section $F_{n}/V$, where $F_{n}$ is the greatest face. This set of elements is elsewhere known as a vertex star. The geometrical vertex figure and the vertex star may be understood as distinct realizations of the same abstract section.

General properties

A vertex figure of an n-polytope is an (n−1)-polytope. 
For example, a vertex figure of a polyhedron is a polygon, and the vertex figure for a 4-polytope is a polyhedron. In general a vertex figure need not be planar. For nonconvex polyhedra, the vertex figure may also be nonconvex. Uniform polytopes, for instance, can have star polygons for faces and/or for vertex figures.

Isogonal figures

Vertex figures are especially significant for uniform polytopes and other isogonal (vertex-transitive) polytopes because one vertex figure can define the entire polytope. For polyhedra with regular faces, a vertex figure can be represented in vertex configuration notation, by listing the faces in sequence around the vertex. For example, 3.4.4.4 is a vertex with one triangle and three squares, and it defines the uniform rhombicuboctahedron. If the polytope is isogonal, the vertex figure will exist in a hyperplane surface of the n-space.

Constructions

From the adjacent vertices

By considering the connectivity of these neighboring vertices, a vertex figure can be constructed for each vertex of a polytope:
• Each vertex of the vertex figure coincides with a vertex of the original polytope.
• Each edge of the vertex figure exists on or inside a face of the original polytope, connecting two alternate vertices from an original face.
• Each face of the vertex figure exists on or inside a cell of the original n-polytope (for n > 3).
• ... and so on to higher-order elements in higher-order polytopes.

Dorman Luke construction

For a uniform polyhedron, the face of the dual polyhedron may be found from the original polyhedron's vertex figure using the "Dorman Luke" construction.

Regular polytopes

If a polytope is regular, it can be represented by a Schläfli symbol and both the cell and the vertex figure can be trivially extracted from this notation. In general a regular polytope with Schläfli symbol {a,b,c,...,y,z} has cells as {a,b,c,...,y}, and vertex figures as {b,c,...,y,z}.
1. For a regular polyhedron {p,q}, the vertex figure is {q}, a q-gon.
• Example: the vertex figure for a cube {4,3} is the triangle {3}.
2. For a regular 4-polytope or space-filling tessellation {p,q,r}, the vertex figure is {q,r}.
• Example: the vertex figure for a hypercube {4,3,3} is a regular tetrahedron {3,3}.
• Also, the vertex figure for a cubic honeycomb {4,3,4} is a regular octahedron {3,4}.

Since the dual polytope of a regular polytope is also regular and represented by the Schläfli symbol indices reversed, it is easy to see that the dual of the vertex figure is the cell of the dual polytope. For regular polyhedra, this is a special case of the Dorman Luke construction.

An example vertex figure of a honeycomb

The vertex figure of a truncated cubic honeycomb is a nonuniform square pyramid. One octahedron and four truncated cubes meet at each vertex to form a space-filling tessellation.

Vertex figure: a nonuniform square pyramid, shown as a Schlegel diagram and in perspective; created as a square base from an octahedron (3.3.3.3) and four isosceles triangle sides from truncated cubes (3.8.8).

Edge figure

Related to the vertex figure, an edge figure is the vertex figure of a vertex figure.[3] Edge figures are useful for expressing relations between the elements within regular and uniform polytopes. An edge figure will be an (n−2)-polytope, representing the arrangement of facets around a given edge. Regular and single-ringed Coxeter diagram uniform polytopes will have a single edge type. 
In general, a uniform polytope can have as many edge types as active mirrors in the construction, since each active mirror produces one edge in the fundamental domain. Regular polytopes (and honeycombs) have a single edge figure which is also regular. For a regular polytope {p,q,r,s,...,z}, the edge figure is {r,s,...,z}.

In four dimensions, the edge figure of a 4-polytope or 3-honeycomb is a polygon representing the arrangement of a set of facets around an edge. For example, the edge figure for a regular cubic honeycomb {4,3,4} is a square, and for a regular 4-polytope {p,q,r} is the polygon {r}.

Less trivially, the truncated cubic honeycomb $t_{0,1}\{4,3,4\}$ has a square pyramid vertex figure, with truncated cube and octahedron cells. Here there are two types of edge figures. One is a square edge figure at the apex of the pyramid. This represents the four truncated cubes around an edge. The other four edge figures are isosceles triangles on the base vertices of the pyramid. These represent the arrangement of two truncated cubes and one octahedron around the other edges.

See also
• Simplicial link - an abstract concept related to vertex figure.
• List of regular polytopes

References

Notes
1. Coxeter, H. et al. (1954).
2. Skilling, J. (1975).
3. Klitzing: Vertex figures, etc.

Bibliography
• H. S. M. Coxeter, Regular Polytopes, Hbk (1948), ppbk (1973).
• H.S.M. Coxeter (et al.), Uniform Polyhedra, Phil. Trans. 246 A (1954) pp. 401–450.
• P. Cromwell, Polyhedra, CUP pbk. (1999).
• H.M. Cundy and A.P. Rollett, Mathematical Models, Oxford Univ. Press (1961).
• J. Skilling, The Complete Set of Uniform Polyhedra, Phil. Trans. 278 A (1975) pp. 111–135.
• M. Wenninger, Dual Models, CUP hbk (1983) ppbk (2003).
• The Symmetries of Things 2008, John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, ISBN 978-1-56881-220-5 (p. 289 Vertex figures)

External links

Wikimedia Commons has media related to Vertex figures.
• Weisstein, Eric W. "Vertex figure". MathWorld.
• Olshevsky, George. "Vertex figure". Glossary for Hyperspace. Archived from the original on 4 February 2007.
• Vertex Figures
• Consistent Vertex Descriptions
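The Schläfli-symbol rules collected above (cells {a,b,...,y}, vertex figure {b,c,...,z}, edge figure {r,s,...,z}) amount to slicing the symbol. A minimal sketch in plain Python; representing a symbol as a list of integers is an illustrative choice, not standard notation.

```python
def cell(symbol):
    """Cell of the regular polytope {a, b, ..., y, z}: drop the last entry."""
    return symbol[:-1]

def vertex_figure(symbol):
    """Vertex figure of {a, b, ..., z}: drop the first entry."""
    return symbol[1:]

def edge_figure(symbol):
    """Edge figure: the vertex figure of the vertex figure."""
    return symbol[2:]

cubic_honeycomb = [4, 3, 4]
print(vertex_figure(cubic_honeycomb))  # [3, 4]: the octahedron
print(edge_figure(cubic_honeycomb))    # [4]: a square, four cubes around an edge
```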
Completing the square

In elementary algebra, completing the square is a technique for converting a quadratic polynomial of the form $ax^{2}+bx+c$ to the form $a(x-h)^{2}+k$ for some values of h and k. In other words, completing the square places a perfect square trinomial inside a quadratic expression.

Completing the square is used in
• solving quadratic equations,
• deriving the quadratic formula,
• graphing quadratic functions,
• evaluating integrals in calculus, such as Gaussian integrals with a linear term in the exponent,[1]
• finding Laplace transforms.[2][3]
In mathematics, completing the square is often applied in any computation involving quadratic polynomials.

History

Further information: Algebra § History

The technique of completing the square was known in the Old Babylonian Empire.[4] Muhammad ibn Musa Al-Khwarizmi, a famous polymath who wrote the early algebraic treatise Al-Jabr, used the technique of completing the square to solve quadratic equations.[5]

Overview

Background

The formula in elementary algebra for computing the square of a binomial is: $(x+p)^{2}\,=\,x^{2}+2px+p^{2}.$ For example: ${\begin{alignedat}{2}(x+3)^{2}\,&=\,x^{2}+6x+9&&(p=3)\\[3pt](x-5)^{2}\,&=\,x^{2}-10x+25\qquad &&(p=-5).\end{alignedat}}$ In any perfect square, the coefficient of x is twice the number p, and the constant term is equal to $p^{2}$.

Basic example

Consider the following quadratic polynomial: $x^{2}+10x+28.$ This quadratic is not a perfect square, since 28 is not the square of 5: $(x+5)^{2}\,=\,x^{2}+10x+25.$ However, it is possible to write the original quadratic as the sum of this square and a constant: $x^{2}+10x+28\,=\,(x+5)^{2}+3.$ This is called completing the square.

General description

Given any monic quadratic $x^{2}+bx+c,$ it is possible to form a square that has the same first two terms: $\left(x+{\tfrac {1}{2}}b\right)^{2}\,=\,x^{2}+bx+{\tfrac {1}{4}}b^{2}.$ This square differs from the original quadratic only in the value of the constant term. Therefore, we can write $x^{2}+bx+c\,=\,\left(x+{\tfrac {1}{2}}b\right)^{2}+k,$ where $k=c-{\frac {b^{2}}{4}}$. This operation is known as completing the square. For example: ${\begin{alignedat}{1}x^{2}+6x+11\,&=\,(x+3)^{2}+2\\[3pt]x^{2}+14x+30\,&=\,(x+7)^{2}-19\\[3pt]x^{2}-2x+7\,&=\,(x-1)^{2}+6.\end{alignedat}}$

Non-monic case

Given a quadratic polynomial of the form $ax^{2}+bx+c$ it is possible to factor out the coefficient a, and then complete the square for the resulting monic polynomial. Example: ${\begin{aligned}3x^{2}+12x+27&=3[x^{2}+4x+9]\\&{}=3\left[(x+2)^{2}+5\right]\\&{}=3(x+2)^{2}+3(5)\\&{}=3(x+2)^{2}+15\end{aligned}}$ This process of factoring out the coefficient a can be simplified further by factoring it out of only the first two terms; the constant term at the end of the polynomial need not be included. Example: ${\begin{aligned}3x^{2}+12x+27&=3\left[x^{2}+4x\right]+27\\[1ex]&{}=3\left[(x+2)^{2}-4\right]+27\\[1ex]&{}=3(x+2)^{2}+3(-4)+27\\[1ex]&{}=3(x+2)^{2}-12+27\\[1ex]&{}=3(x+2)^{2}+15\end{aligned}}$ This allows the writing of any quadratic polynomial in the form $a(x-h)^{2}+k.$

Scalar case

The result of completing the square may be written as a formula. 
In the general case, one has[6] $ax^{2}+bx+c=a(x-h)^{2}+k,$ with $h=-{\frac {b}{2a}}\quad {\text{and}}\quad k=c-ah^{2}=c-{\frac {b^{2}}{4a}}.$ In particular, when a = 1, one has $x^{2}+bx+c=(x-h)^{2}+k,$ with $h=-{\frac {b}{2}}\quad {\text{and}}\quad k=c-h^{2}=c-{\frac {b^{2}}{4}}.$ By solving the equation $a(x-h)^{2}+k=0$ in terms of $x-h,$ and reorganizing the resulting expression, one gets the quadratic formula for the roots of the quadratic equation: $x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}.$

Matrix case

The matrix case looks very similar: $x^{\mathrm {T} }Ax+x^{\mathrm {T} }b+c=(x-h)^{\mathrm {T} }A(x-h)+k$ where $ h=-{\frac {1}{2}}A^{-1}b$ and $ k=c-{\frac {1}{4}}b^{\mathrm {T} }A^{-1}b$. Note that $A$ has to be symmetric. If $A$ is not symmetric the formulae for $h$ and $k$ have to be generalized to: $h=-(A+A^{\mathrm {T} })^{-1}b\quad {\text{and}}\quad k=c-h^{\mathrm {T} }Ah=c-b^{\mathrm {T} }(A+A^{\mathrm {T} })^{-1}A(A+A^{\mathrm {T} })^{-1}b$

Relation to the graph

Graphs of quadratic functions shifted to the right by h = 0, 5, 10, and 15. Graphs of quadratic functions shifted upward by k = 0, 5, 10, and 15. Graphs of quadratic functions shifted upward and to the right by 0, 5, 10, and 15.

In analytic geometry, the graph of any quadratic function is a parabola in the xy-plane. Given a quadratic polynomial of the form $a(x-h)^{2}+k$ the numbers h and k may be interpreted as the Cartesian coordinates of the vertex (or stationary point) of the parabola. That is, h is the x-coordinate of the axis of symmetry (i.e. the axis of symmetry has equation x = h), and k is the minimum value (or maximum value, if a < 0) of the quadratic function.

One way to see this is to note that the graph of the function f(x) = x² is a parabola whose vertex is at the origin (0, 0). Therefore, the graph of the function f(x − h) = (x − h)² is a parabola shifted to the right by h whose vertex is at (h, 0), as shown in the top figure. In contrast, the graph of the function f(x) + k = x² + k is a parabola shifted upward by k whose vertex is at (0, k), as shown in the center figure. Combining both horizontal and vertical shifts yields f(x − h) + k = (x − h)² + k, a parabola shifted to the right by h and upward by k whose vertex is at (h, k), as shown in the bottom figure.

Solving quadratic equations

Completing the square may be used to solve any quadratic equation. For example: $x^{2}+6x+5=0.$ The first step is to complete the square: $(x+3)^{2}-4=0.$ Next we solve for the squared term: $(x+3)^{2}=4.$ Then either $x+3=-2\quad {\text{or}}\quad x+3=2,$ and therefore $x=-5\quad {\text{or}}\quad x=-1.$ This can be applied to any quadratic equation. When x² has a coefficient other than 1, the first step is to divide the equation by this coefficient: for an example see the non-monic case below.

Irrational and complex roots

Unlike methods involving factoring the equation, which is reliable only if the roots are rational, completing the square will find the roots of a quadratic equation even when those roots are irrational or complex. For example, consider the equation $x^{2}-10x+18=0.$ Completing the square gives $(x-5)^{2}-7=0,$ so $(x-5)^{2}=7.$ Then either $x-5=-{\sqrt {7}}\quad {\text{or}}\quad x-5={\sqrt {7}}.$ More concisely: $x-5=\pm {\sqrt {7}},$ so $x=5\pm {\sqrt {7}}.$ Equations with complex roots can be handled in the same way. 
For example: ${\begin{aligned}x^{2}+4x+5&=0\\[6pt](x+2)^{2}+1&=0\\[6pt](x+2)^{2}&=-1\\[6pt]x+2&=\pm i\\[6pt]x&=-2\pm i.\end{aligned}}$

Non-monic case

For an equation involving a non-monic quadratic, the first step to solving it is to divide through by the coefficient of x². For example: ${\begin{array}{c}2x^{2}+7x+6\,=\,0\\[6pt]x^{2}+{\tfrac {7}{2}}x+3\,=\,0\\[6pt]\left(x+{\tfrac {7}{4}}\right)^{2}-{\tfrac {1}{16}}\,=\,0\\[6pt]\left(x+{\tfrac {7}{4}}\right)^{2}\,=\,{\tfrac {1}{16}}\\[6pt]x+{\tfrac {7}{4}}={\tfrac {1}{4}}\quad {\text{or}}\quad x+{\tfrac {7}{4}}=-{\tfrac {1}{4}}\\[6pt]x=-{\tfrac {3}{2}}\quad {\text{or}}\quad x=-2.\end{array}}$ Applying this procedure to the general form of a quadratic equation leads to the quadratic formula.

Other applications

Integration

Completing the square may be used to evaluate any integral of the form $\int {\frac {dx}{ax^{2}+bx+c}}$ using the basic integrals $\int {\frac {dx}{x^{2}-a^{2}}}={\frac {1}{2a}}\ln \left|{\frac {x-a}{x+a}}\right|+C\quad {\text{and}}\quad \int {\frac {dx}{x^{2}+a^{2}}}={\frac {1}{a}}\arctan \left({\frac {x}{a}}\right)+C.$ For example, consider the integral $\int {\frac {dx}{x^{2}+6x+13}}.$ Completing the square in the denominator gives: $\int {\frac {dx}{(x+3)^{2}+4}}\,=\,\int {\frac {dx}{(x+3)^{2}+2^{2}}}.$ This can now be evaluated by using the substitution u = x + 3, which yields $\int {\frac {dx}{(x+3)^{2}+4}}\,=\,{\frac {1}{2}}\arctan \left({\frac {x+3}{2}}\right)+C.$

Complex numbers

Consider the expression $|z|^{2}-b^{*}z-bz^{*}+c,$ where z and b are complex numbers, z* and b* are the complex conjugates of z and b, respectively, and c is a real number. Using the identity |u|² = uu* we can rewrite this as $|z-b|^{2}-|b|^{2}+c,$ which is clearly a real quantity. This is because ${\begin{aligned}|z-b|^{2}&{}=(z-b)(z-b)^{*}\\&{}=(z-b)(z^{*}-b^{*})\\&{}=zz^{*}-zb^{*}-bz^{*}+bb^{*}\\&{}=|z|^{2}-zb^{*}-bz^{*}+|b|^{2}.\end{aligned}}$

As another example, the expression $ax^{2}+by^{2}+c,$ where a, b, c, x, and y are real numbers, with a > 0 and b > 0, may be expressed in terms of the square of the absolute value of a complex number. Define $z={\sqrt {a}}\,x+i{\sqrt {b}}\,y.$ Then ${\begin{aligned}|z|^{2}&{}=zz^{*}\\[1ex]&{}=\left({\sqrt {a}}\,x+i{\sqrt {b}}\,y\right)\left({\sqrt {a}}\,x-i{\sqrt {b}}\,y\right)\\[1ex]&{}=ax^{2}-i{\sqrt {ab}}\,xy+i{\sqrt {ba}}\,yx-i^{2}by^{2}\\[1ex]&{}=ax^{2}+by^{2},\end{aligned}}$ so $ax^{2}+by^{2}+c=|z|^{2}+c.$

Idempotent matrix

A matrix M is idempotent when M² = M. Idempotent matrices generalize the idempotent properties of 0 and 1. The completing-the-square method of addressing the equation $a^{2}+b^{2}=a,$ shows that some idempotent 2×2 matrices are parametrized by a circle in the (a,b)-plane: The matrix ${\begin{pmatrix}a&b\\b&1-a\end{pmatrix}}$ will be idempotent provided $a^{2}+b^{2}=a,$ which, upon completing the square, becomes $(a-{\tfrac {1}{2}})^{2}+b^{2}={\tfrac {1}{4}}.$ In the (a,b)-plane, this is the equation of a circle with center (1/2, 0) and radius 1/2.

Geometric perspective

Consider completing the square for the equation $x^{2}+bx=a.$ Since x² represents the area of a square with side of length x, and bx represents the area of a rectangle with sides b and x, the process of completing the square can be viewed as visual manipulation of rectangles. Simple attempts to combine the x² and the bx rectangles into a larger square result in a missing corner. 
The term (b/2)² added to each side of the above equation is precisely the area of the missing corner, whence derives the terminology "completing the square".

A variation on the technique

As conventionally taught, completing the square consists of adding the third term, v², to $u^{2}+2uv$ to get a square. There are also cases in which one can add the middle term, either 2uv or −2uv, to $u^{2}+v^{2}$ to get a square.

Example: the sum of a positive number and its reciprocal

By writing ${\begin{aligned}x+{1 \over x}&{}=\left(x-2+{1 \over x}\right)+2\\&{}=\left({\sqrt {x}}-{1 \over {\sqrt {x}}}\right)^{2}+2\end{aligned}}$ we show that the sum of a positive number x and its reciprocal is always greater than or equal to 2. The square of a real expression is always greater than or equal to zero, which gives the stated bound; and here we achieve 2 just when x is 1, causing the square to vanish.

Example: factoring a simple quartic polynomial

Consider the problem of factoring the polynomial $x^{4}+324.$ This is $(x^{2})^{2}+(18)^{2},$ so the middle term is 2(x²)(18) = 36x². Thus we get ${\begin{aligned}x^{4}+324&{}=(x^{4}+36x^{2}+324)-36x^{2}\\&{}=(x^{2}+18)^{2}-(6x)^{2}={\text{a difference of two squares}}\\&{}=(x^{2}+18+6x)(x^{2}+18-6x)\\&{}=(x^{2}+6x+18)(x^{2}-6x+18)\end{aligned}}$ (the last line being added merely to follow the convention of decreasing degrees of terms). The same argument shows that $x^{4}+4a^{4}$ is always factorizable as $x^{4}+4a^{4}=\left(x^{2}+2ax+2a^{2}\right)\left(x^{2}-2ax+2a^{2}\right)$ (also known as Sophie Germain's identity).

Completing the cube

"Completing the square" consists of remarking that the first two terms of a quadratic polynomial are also the first two terms of the square of a linear polynomial, and of using this to express the quadratic polynomial as the sum of a square and a constant. Completing the cube is a similar technique that allows one to transform a cubic polynomial into a cubic polynomial without a term of degree two. More precisely, if $ax^{3}+bx^{2}+cx+d$ is a polynomial in x such that $a\neq 0,$ its first two terms are the first two terms of the expanded form of $a\left(x+{\frac {b}{3a}}\right)^{3}=ax^{3}+bx^{2}+x\,{\frac {b^{2}}{3a}}+{\frac {b^{3}}{27a^{2}}}.$ So, the change of variable $t=x+{\frac {b}{3a}}$ provides a cubic polynomial in $t$ without a term of degree two, which is called the depressed form of the original polynomial. This transformation is generally the first step of the methods for solving the general cubic equation. More generally, a similar transformation can be used for removing terms of degree $n-1$ in polynomials of degree $n.$

References
1. Dionissios T. Hristopulos (2020). Random Fields for Spatial Data Modeling: A Primer for Scientists and Engineers. Springer Nature. p. 267. ISBN 978-94-024-1918-4. Extract of page 267
2. James R. Brannan; William E. Boyce (2015). Differential Equations: An Introduction to Modern Methods and Applications (3rd ed.). John Wiley & Sons. p. 314. ISBN 978-1-118-98122-1. Extract of page 314
3. Stephen L. Campbell; Richard Haberman (2011). Introduction to Differential Equations with Dynamical Systems (illustrated ed.). Princeton University Press. p. 214. ISBN 978-1-4008-4132-5. Extract of page 214
4. Tony Phillips, "Completing the Square", American Mathematical Society Feature Column, 2020.
5. Hughes, Barnabas. "Completing the Square - Quadratics Using Addition". Math Association of America. Retrieved 2022-10-21.
6. Narasimhan, Revathi (2008). 
Precalculus: Building Concepts and Connections. Cengage Learning. pp. 133–134. ISBN 978-0-618-41301-0. Section: Formula for the Vertex of a Quadratic Function, pages 133–134, figure 2.4.8
• Algebra 1, Glencoe, ISBN 0-07-825083-8, pages 539–544
• Algebra 2, Saxon, ISBN 0-939798-62-X, pages 214–214, 241–242, 256–257, 398–401

External links

Wikimedia Commons has media related to Completing the square.
• Completing the square at PlanetMath.
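The scalar-case formula above (h = −b/2a, k = c − b²/4a) translates directly into code. A minimal sketch in plain Python with exact fractions; the function name is illustrative. It reproduces the worked non-monic example 3x² + 12x + 27 = 3(x + 2)² + 15.

```python
from fractions import Fraction

def complete_the_square(a, b, c):
    """Return (a, h, k) with a*x**2 + b*x + c == a*(x - h)**2 + k."""
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    h = -b / (2 * a)
    k = c - b * b / (4 * a)
    return a, h, k

a, h, k = complete_the_square(3, 12, 27)
print(a, h, k)  # 3, -2, 15: i.e. 3x^2 + 12x + 27 = 3(x + 2)^2 + 15

# Spot-check the identity at a few integer points.
for x in range(-3, 4):
    assert 3 * x * x + 12 * x + 27 == a * (x - h) ** 2 + k
```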
Vertex model

A vertex model is a type of statistical mechanics model in which the Boltzmann weights are associated with a vertex in the model (representing an atom or particle).[1][2] This contrasts with a nearest-neighbour model, such as the Ising model, in which the energy, and thus the Boltzmann weight of a statistical microstate, is attributed to the bonds connecting two neighbouring particles. The energy associated with a vertex in the lattice of particles is thus dependent on the state of the bonds which connect it to adjacent vertices. It turns out that every solution of the Yang–Baxter equation with spectral parameters in a tensor product of vector spaces $V\otimes V$ yields an exactly-solvable vertex model.

Although the model can be applied to various geometries in any number of dimensions, with any number of possible states for a given bond, the most fundamental examples occur for two-dimensional lattices, the simplest being a square lattice where each bond has two possible states. In this model, every particle is connected to four other particles, and each of the four bonds adjacent to the particle has two possible states, indicated by the direction of an arrow on the bond. In this model, each vertex can adopt $2^{4}$ possible configurations. The energy of a given vertex can be written as $\varepsilon _{ij}^{k\ell }$; a state of the lattice is an assignment of a state to each bond, and the total energy of a state is the sum of the vertex energies. As the energy is often divergent for an infinite lattice, the model is studied for a finite lattice as the lattice approaches infinite size. Periodic or domain wall[3] boundary conditions may be imposed on the model.

Discussion

For a given state of the lattice, the Boltzmann weight can be written as the product over the vertices of the Boltzmann weights of the corresponding vertex states $\exp(-\beta \varepsilon ({\mbox{state}}))=\prod _{\mbox{vertices}}\exp(-\beta \varepsilon _{ij}^{k\ell })$ where the Boltzmann weights for the vertices are written $R_{ij}^{k\ell }=\exp(-\beta \varepsilon _{ij}^{k\ell })$, and the i, j, k, ℓ range over the possible states of each of the four edges attached to the vertex. The vertex states of adjacent vertices must satisfy compatibility conditions along the connecting edges (bonds) in order for the state to be admissible.

The probability of the system being in any given state at a particular time, and hence the properties of the system, are determined by the partition function, for which an analytic form is desired. $\mathbb {Z} =\sum _{\mbox{states}}\exp(-\beta \varepsilon ({\mbox{state}}))$ where β=1/kT, T is temperature and k is Boltzmann's constant. The probability that the system is in any given state (microstate) is given by ${\frac {\exp(-\beta \varepsilon ({\mbox{state}}))}{\mathbb {Z} }}$ so that the average value of the energy of the system is given by $\langle \varepsilon \rangle ={\frac {\sum _{\mbox{states}}\varepsilon \exp(-\beta \varepsilon )}{\sum _{\mbox{states}}\exp(-\beta \varepsilon )}}=kT^{2}{\frac {\partial }{\partial T}}\ln \mathbb {Z} $

In order to evaluate the partition function, first examine the states of a row of vertices. The external edges are free variables, with summation over the internal bonds. 
Hence, form the row partition function $T_{i_{1}k_{1}\dots k_{N}}^{i'_{1}\ell _{1}\dots \ell _{N}}=\sum _{r_{1},\dots ,r_{N-1}}R_{i_{1}k_{1}}^{r_{1}\ell _{1}}R_{r_{1}k_{2}}^{r_{2}\ell _{2}}\cdots R_{r_{N-1}k_{N}}^{i'_{1}\ell _{N}}.$ This can be reformulated in terms of an auxiliary n-dimensional vector space V, with a basis $\{v_{1},\ldots ,v_{n}\}$, and $R\in End(V\otimes V)$ as $R(v_{i}\otimes v_{j})=\sum _{k,\ell }R_{ij}^{k\ell }v_{k}\otimes v_{\ell }$ and $T\in End(V\otimes V^{\otimes N})$ as $T(v_{i_{1}}\otimes v_{k_{1}}\otimes \cdots \otimes v_{k_{N}})=\sum _{i'_{1},\ell _{1},\dots \ell _{N}}T_{i_{1}k_{1}\dots k_{N}}^{i'_{1}\ell _{1}\dots \ell _{N}}v_{i'_{1}}\otimes v_{\ell _{1}}\otimes \cdots \otimes v_{\ell _{N}},$ thereby implying that T can be written as $T=R_{0N}\cdots R_{02}R_{01}$ where the indices indicate the factors of the tensor product $V\otimes V^{\otimes N}$ on which R operates. Summing over the states of the bonds in the first row with the periodic boundary conditions $i_{1}=i'_{1}$ gives $(\operatorname {trace} _{V}(T))_{k_{1}\dots k_{N}}^{\ell _{1}\dots \ell _{N}},$ where $\tau =\operatorname {trace} _{V}(T)$ is the row-transfer matrix. By summing the contributions over two rows, the result is $(\operatorname {trace} _{V}(T))_{k_{1}\dots k_{N}}^{\ell _{1}\dots \ell _{N}}(\operatorname {trace} _{V}(T))_{j_{1}\dots j_{N}}^{k_{1}\dots k_{N}},$ which upon summation over the vertical bonds connecting the first two rows gives $((\operatorname {trace} _{V}(T))^{2})_{j_{1}\dots j_{N}}^{\ell _{1}\dots \ell _{N}}.$ For M rows, this gives $((\operatorname {trace} _{V}(T))^{M})_{\ell '_{1}\dots \ell '_{N}}^{\ell _{1}\dots \ell _{N}},$ and then applying the periodic boundary conditions to the vertical columns, the partition function can be expressed in terms of the transfer matrix $\tau $ as $\mathbb {Z} =\operatorname {trace} _{V^{\otimes N}}(\tau ^{M})\sim \lambda _{max}^{M}$ where $\lambda _{max}$ is the largest eigenvalue of $\tau $. The approximation follows from the fact that the eigenvalues of $\tau ^{M}$ are the eigenvalues of $\tau $ to the power of M, and as $M\rightarrow \infty $, the power of the largest eigenvalue becomes much larger than the others. As the trace is the sum of the eigenvalues, the problem of calculating $\mathbb {Z} $ reduces to the problem of finding the maximum eigenvalue of $\tau $. This in itself is another field of study. However, a standard approach to the problem of finding the largest eigenvalue of $\tau $ is to find a large family of operators which commute with $\tau $. This implies that the eigenspaces are common, and restricts the possible space of solutions. Such a family of commuting operators is usually found by means of the Yang–Baxter equation, which thus relates statistical mechanics to the study of quantum groups.

Integrability

Definition: A vertex model is integrable if, $\forall \mu ,\nu ,\exists \lambda $ such that $R_{12}(\lambda )R_{13}(\mu )R_{23}(\nu )=R_{23}(\nu )R_{13}(\mu )R_{12}(\lambda ).$ This is a parameterized version of the Yang–Baxter equation, corresponding to the possible dependence of the vertex energies, and hence the Boltzmann weights R, on external parameters such as temperature, external fields, etc. The integrability condition implies the following relation.
Proposition: For an integrable vertex model, with $\lambda ,\mu $ and $\nu $ defined as above, $R(\lambda )(1\otimes T(\mu ))(T(\nu )\otimes 1)=(T(\nu )\otimes 1)(1\otimes T(\mu ))R(\lambda )$ as endomorphisms of $V\otimes V\otimes V^{\otimes N}$, where $R(\lambda )$ acts on the first two factors of the tensor product. It follows by multiplying both sides of the above equation on the right by $R(\lambda )^{-1}$ and using the cyclic property of the trace operator that the following corollary holds.

Corollary: For an integrable vertex model for which $R(\lambda )$ is invertible $\forall \lambda $, the transfer matrix $\tau (\mu )$ commutes with $\tau (\nu ),\ \forall \mu ,\nu $.

This illustrates the role of the Yang–Baxter equation in the solution of solvable lattice models. Since the transfer matrices $\tau $ commute for all values of the parameter, the eigenvectors of $\tau $ are common, and hence independent of the parameterization. Looking for such commuting transfer matrices is a recurring theme in many other types of statistical mechanical models. From the definition of R above, it follows that for every solution of the Yang–Baxter equation in the tensor product of two n-dimensional vector spaces, there is a corresponding two-dimensional solvable vertex model where each of the bonds can be in the possible states $\{1,\ldots ,n\}$, and R is an endomorphism of the space spanned by $\{|a\rangle \otimes |b\rangle \},1\leq a,b\leq n$. This motivates the classification of all the finite-dimensional irreducible representations of a given quantum algebra in order to find solvable models corresponding to it.

Notable vertex models

• Six-vertex model • Eight-vertex model • Nineteen-vertex model (Izergin–Korepin model)[4]

References

1. R.J. Baxter, Exactly solved models in statistical mechanics, London, Academic Press, 1982 2. V. Chari and A.N. Pressley, A Guide to Quantum Groups, Cambridge University Press, 1994 3. V.E. Korepin et al., Quantum inverse scattering method and correlation functions, New York, Press Syndicate of the University of Cambridge, 1993 4. A. G. Izergin and V. E. Korepin, The inverse scattering method approach to the quantum Shabat–Mikhailov model. Communications in Mathematical Physics, 79, 303 (1981)
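To make the transfer-matrix construction and the commutation corollary above concrete, here is a small numerical sketch in Python with numpy. The index convention for the weight tensor R and the six-vertex weights a = sin(λ+η), b = sin(λ), c = sin(η) (a standard trigonometric solution of the Yang–Baxter equation) are assumptions chosen for the example, not something fixed by the text above.

```python
import numpy as np

def transfer_matrix(R, N):
    """Row transfer matrix tau = trace_V(R_0N ... R_01) as a 2^N x 2^N matrix.

    R[i, k, j, l] is the vertex weight with auxiliary (horizontal) bond
    states i -> j and vertical bond states k (bottom), l (top); this
    labelling is an assumed convention for the sketch."""
    T = R  # axes: (aux in, bottom bonds, aux out, top bonds)
    for _ in range(N - 1):
        # chain one more vertex onto the row along the auxiliary space
        T = np.einsum('aKbL,bkcl->aKkcLl', T, R)
        s = T.shape
        T = T.reshape(2, s[1] * s[2], 2, s[4] * s[5])
    return np.trace(T, axis1=0, axis2=2)  # periodic b.c. along the row

def six_vertex_R(lam, eta=0.7):
    """Six-vertex weights a = sin(lam+eta), b = sin(lam), c = sin(eta)."""
    a, b, c = np.sin(lam + eta), np.sin(lam), np.sin(eta)
    R = np.zeros((2, 2, 2, 2))
    R[0, 0, 0, 0] = R[1, 1, 1, 1] = a
    R[0, 1, 0, 1] = R[1, 0, 1, 0] = b
    R[0, 1, 1, 0] = R[1, 0, 0, 1] = c
    return R

N, M = 4, 16
tau = transfer_matrix(six_vertex_R(0.3), N)

# Z = trace(tau^M) is dominated by lambda_max^M as M grows
Z = np.trace(np.linalg.matrix_power(tau, M))
lam_max = max(abs(np.linalg.eigvals(tau)))
print(Z, lam_max**M)

# Commuting transfer matrices: tau(mu) tau(nu) = tau(nu) tau(mu)
tau2 = transfer_matrix(six_vertex_R(1.1), N)
print(np.allclose(tau @ tau2, tau2 @ tau))  # True, as the corollary predicts
```

Sharing the same η between the two spectral parameters is what makes the two R-matrices solutions of one Yang–Baxter family, which is exactly the hypothesis of the corollary.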
Vertex of a representation

In mathematical finite group theory, the vertex of a representation of a finite group is a subgroup associated to it, which has a special representation called a source. Vertices and sources were introduced by Green (1958–1959).

References

• Green, J. A. (1958–1959), "On the indecomposable representations of a finite group", Mathematische Zeitschrift, 70: 430–445, doi:10.1007/BF01558601, ISSN 0025-5874, MR 0131454, S2CID 123304240
Vertex separator

In graph theory, a vertex subset $S\subset V$ is a vertex separator (or vertex cut, separating set) for nonadjacent vertices a and b if the removal of S from the graph separates a and b into distinct connected components. Not to be confused with cut vertex.

Examples

Consider a grid graph with r rows and c columns; the total number n of vertices is r × c. For instance, in the illustration, r = 5, c = 8, and n = 40. If r is odd, there is a single central row, and otherwise there are two rows equally close to the center; similarly, if c is odd, there is a single central column, and otherwise there are two columns equally close to the center. Choosing S to be any of these central rows or columns, and removing S from the graph, partitions the graph into two smaller connected subgraphs A and B, each of which has at most n⁄2 vertices. If r ≤ c (as in the illustration), then choosing a central column will give a separator S with $r\leq {\sqrt {n}}$ vertices, and similarly if c ≤ r then choosing a central row will give a separator with at most ${\sqrt {n}}$ vertices. Thus, every grid graph has a separator S of size at most ${\sqrt {n}}$, the removal of which partitions it into two connected components, each of size at most n⁄2.[1]

To give another class of examples, every free tree T has a separator S consisting of a single vertex, the removal of which partitions T into two or more connected components, each of size at most n⁄2. More precisely, there is always exactly one or exactly two vertices that form such a separator, depending on whether the tree is centered or bicentered.[2]

In contrast to these examples, not all vertex separators are balanced, but that property is most useful for applications in computer science, such as the planar separator theorem.

Minimal separators

Let S be an (a,b)-separator, that is, a vertex subset that separates two nonadjacent vertices a and b. Then S is a minimal (a,b)-separator if no proper subset of S separates a and b. More generally, S is called a minimal separator if it is a minimal separator for some pair (a,b) of nonadjacent vertices. Note that this differs from a minimal separating set, in which no proper subset of S is a minimal (u,v)-separator for any pair of vertices (u,v). The following is a well-known result characterizing the minimal separators:[3]

Lemma. A vertex separator S in G is minimal if and only if the graph G – S, obtained by removing S from G, has two connected components C1 and C2 such that each vertex in S is both adjacent to some vertex in C1 and to some vertex in C2.

The minimal (a,b)-separators also form an algebraic structure: for two fixed vertices a and b of a given graph G, an (a,b)-separator S can be regarded as a predecessor of another (a,b)-separator T if every path from a to b meets S before it meets T. More rigorously, the predecessor relation is defined as follows: let S and T be two (a,b)-separators in G. Then S is a predecessor of T, in symbols $S\sqsubseteq _{a,b}^{G}T$, if for each x ∈ S \ T, every path connecting x to b meets T. It follows from the definition that the predecessor relation yields a preorder on the set of all (a,b)-separators.
Furthermore, Escalante (1972) proved that the predecessor relation gives rise to a complete lattice when restricted to the set of minimal (a,b)-separators in G. See also • Chordal graph, a graph in which every minimal separator is a clique. • k-vertex-connected graph Notes 1. George (1973). Instead of using a row or column of a grid graph, George partitions the graph into four pieces by using the union of a row and a column as a separator. 2. Jordan (1869) 3. Golumbic (1980). References • Escalante, F. (1972). "Schnittverbände in Graphen". Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg. 38: 199–220. doi:10.1007/BF02996932. • George, J. Alan (1973), "Nested dissection of a regular finite element mesh", SIAM Journal on Numerical Analysis, 10 (2): 345–363, doi:10.1137/0710032, JSTOR 2156361. • Golumbic, Martin Charles (1980), Algorithmic Graph Theory and Perfect Graphs, Academic Press, ISBN 0-12-289260-7. • Jordan, Camille (1869). "Sur les assemblages de lignes". Journal für die reine und angewandte Mathematik (in French). 70 (2): 185–190. • Rosenberg, Arnold; Heath, Lenwood (2002). Graph Separators, with Applications. Springer. doi:10.1007/b115747.
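These notions are easy to explore computationally. The following sketch uses the networkx library; the particular grid dimensions and vertex choices echo the grid-graph example above and are illustrative assumptions.

```python
import networkx as nx

# The 5 x 8 grid graph from the example above; nodes are (row, column) pairs.
G = nx.grid_2d_graph(5, 8)
a, b = (2, 0), (2, 7)   # nonadjacent vertices on opposite sides of the grid

# The central column is a balanced separator of size r = 5 <= sqrt(40).
column = {(i, 4) for i in range(5)}
H = G.copy()
H.remove_nodes_from(column)
print(nx.has_path(H, a, b))   # False: the column separates a from b

# A minimum (a,b)-separator can be much smaller than a balanced one.
S = nx.minimum_node_cut(G, a, b)
print(len(S))   # 3: by Menger's theorem, the number of disjoint a-b paths
```

The contrast between the two outputs illustrates the remark above: balanced separators and minimum separators are different objects, and applications such as the planar separator theorem care about the balanced kind.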
Vertex-transitive graph

In the mathematical field of graph theory, a vertex-transitive graph is a graph G in which, given any two vertices v1 and v2 of G, there is some automorphism $f:G\to G$ such that $f(v_{1})=v_{2}$. In other words, a graph is vertex-transitive if its automorphism group acts transitively on its vertices.[1] A graph is vertex-transitive if and only if its graph complement is, since the group actions are identical.

Every symmetric graph without isolated vertices is vertex-transitive, and every vertex-transitive graph is regular. However, not all vertex-transitive graphs are symmetric (for example, the edges of the truncated tetrahedron), and not all regular graphs are vertex-transitive (for example, the Frucht graph and Tietze's graph).

Finite examples

Finite vertex-transitive graphs include the symmetric graphs (such as the Petersen graph, the Heawood graph and the vertices and edges of the Platonic solids). The finite Cayley graphs (such as cube-connected cycles) are also vertex-transitive, as are the vertices and edges of the Archimedean solids (though only two of these are symmetric). Potočnik, Spiga and Verret have constructed a census of all connected cubic vertex-transitive graphs on at most 1280 vertices.[2] Although every Cayley graph is vertex-transitive, there exist other vertex-transitive graphs that are not Cayley graphs. The most famous example is the Petersen graph, but others can be constructed, including the line graphs of edge-transitive non-bipartite graphs with odd vertex degrees.[3]

Properties

The edge-connectivity of a vertex-transitive graph is equal to the degree d, while the vertex-connectivity will be at least 2(d + 1)/3.[1] If the degree is 4 or less, or the graph is also edge-transitive, or the graph is a minimal Cayley graph, then the vertex-connectivity will also be equal to d.[4]

Infinite examples

Infinite vertex-transitive graphs include: • infinite paths (infinite in both directions) • infinite regular trees, e.g. the Cayley graph of the free group • graphs of uniform tessellations (see a complete list of planar tessellations), including all tilings by regular polygons • infinite Cayley graphs • the Rado graph

Two countable vertex-transitive graphs are called quasi-isometric if the ratio of their distance functions is bounded from below and from above. A well-known conjecture stated that every infinite vertex-transitive graph is quasi-isometric to a Cayley graph. A counterexample was proposed by Diestel and Leader in 2001.[5] In 2005, Eskin, Fisher, and Whyte confirmed the counterexample.[6]

See also

• Edge-transitive graph • Lovász conjecture • Semi-symmetric graph • Zero-symmetric graph

References

1. Godsil, Chris; Royle, Gordon (2013) [2001], Algebraic Graph Theory, Graduate Texts in Mathematics, vol. 207, Springer, ISBN 978-1-4613-0163-9. 2. Potočnik P., Spiga P. & Verret G. (2013), "Cubic vertex-transitive graphs on up to 1280 vertices", Journal of Symbolic Computation, 50: 465–477, arXiv:1201.5317, doi:10.1016/j.jsc.2012.09.002, S2CID 26705221. 3.
Lauri, Josef; Scapellato, Raffaele (2003), Topics in graph automorphisms and reconstruction, London Mathematical Society Student Texts, vol. 54, Cambridge University Press, p. 44, ISBN 0-521-82151-7, MR 1971819. Lauri and Scapellato credit this construction to Mark Watkins. 4. Babai, L. (1996), Technical Report TR-94-10, University of Chicago, archived from the original on 2010-06-11 5. Diestel, Reinhard; Leader, Imre (2001), "A conjecture concerning a limit of non-Cayley graphs" (PDF), Journal of Algebraic Combinatorics, 14 (1): 17–25, doi:10.1023/A:1011257718029, S2CID 10927964. 6. Eskin, Alex; Fisher, David; Whyte, Kevin (2005). "Quasi-isometries and rigidity of solvable groups". arXiv:math.GR/0511647.

External links

• Weisstein, Eric W. "Vertex-transitive graph". MathWorld. • A census of small connected cubic vertex-transitive graphs. Primož Potočnik, Pablo Spiga, Gabriel Verret, 2012.
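For small graphs, vertex-transitivity can be checked directly from the definition by enumerating automorphisms (self-isomorphisms). A brute-force sketch using networkx; the function name is ours, and the approach is exponential in general, so it is only a demonstration:

```python
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def is_vertex_transitive(G):
    """Does Aut(G) act transitively on the vertices?

    The orbit of one vertex under all automorphisms equals the whole
    vertex set if and only if the graph is vertex-transitive."""
    v0 = next(iter(G.nodes))
    orbit = {phi[v0] for phi in GraphMatcher(G, G).isomorphisms_iter()}
    return orbit == set(G.nodes)

print(is_vertex_transitive(nx.petersen_graph()))  # True
print(is_vertex_transitive(nx.frucht_graph()))    # False: regular, not vertex-transitive
```

The Frucht graph output illustrates the remark above that regularity does not imply vertex-transitivity.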
Asymptote

In analytic geometry, an asymptote (/ˈæsɪmptoʊt/) of a curve is a line such that the distance between the curve and the line approaches zero as one or both of the x or y coordinates tends to infinity. In projective geometry and related contexts, an asymptote of a curve is a line which is tangent to the curve at a point at infinity.[1][2] The word asymptote is derived from the Greek ἀσύμπτωτος (asumptōtos) which means "not falling together", from ἀ priv. + σύν "together" + πτωτ-ός "fallen".[3] The term was introduced by Apollonius of Perga in his work on conic sections, but in contrast to its modern meaning, he used it to mean any line that does not intersect the given curve.[4]

There are three kinds of asymptotes: horizontal, vertical and oblique. For curves given by the graph of a function y = ƒ(x), horizontal asymptotes are horizontal lines that the graph of the function approaches as x tends to +∞ or −∞. Vertical asymptotes are vertical lines near which the function grows without bound. An oblique asymptote has a slope that is non-zero but finite, such that the graph of the function approaches it as x tends to +∞ or −∞. More generally, one curve is a curvilinear asymptote of another (as opposed to a linear asymptote) if the distance between the two curves tends to zero as they tend to infinity, although the term asymptote by itself is usually reserved for linear asymptotes. Asymptotes convey information about the behavior of curves in the large, and determining the asymptotes of a function is an important step in sketching its graph.[5] The study of asymptotes of functions, construed in a broad sense, forms a part of the subject of asymptotic analysis.

Introduction

The idea that a curve may come arbitrarily close to a line without actually becoming the same may seem to counter everyday experience. The representations of a line and a curve as marks on a piece of paper or as pixels on a computer screen have a positive width. So if they were to be extended far enough they would seem to merge, at least as far as the eye could discern. But these are physical representations of the corresponding mathematical entities; the line and the curve are idealized concepts whose width is 0 (see Line). Therefore, the understanding of the idea of an asymptote requires an effort of reason rather than experience.

Consider the graph of the function $f(x)={\frac {1}{x}}$ shown in this section. The coordinates of the points on the curve are of the form $\left(x,{\frac {1}{x}}\right)$ where x is a number other than 0. For example, the graph contains the points (1, 1), (2, 0.5), (5, 0.2), (10, 0.1), ... As the values of $x$ become larger and larger, say 100, 1,000, 10,000 ..., putting them far to the right of the illustration, the corresponding values of $y$, .01, .001, .0001, ..., become infinitesimal relative to the scale shown. But no matter how large $x$ becomes, its reciprocal ${\frac {1}{x}}$ is never 0, so the curve never actually touches the x-axis. Similarly, as the values of $x$ become smaller and smaller, say .01, .001, .0001, ..., making them infinitesimal relative to the scale shown, the corresponding values of $y$, 100, 1,000, 10,000 ..., become larger and larger. So the curve extends farther and farther upward as it comes closer and closer to the y-axis. Thus, both the x-axis and the y-axis are asymptotes of the curve.
These ideas are part of the basis of the concept of a limit in mathematics, and this connection is explained more fully below.[6]

Asymptotes of functions

The asymptotes most commonly encountered in the study of calculus are of curves of the form y = ƒ(x). These can be computed using limits and classified into horizontal, vertical and oblique asymptotes depending on their orientation. Horizontal asymptotes are horizontal lines that the graph of the function approaches as x tends to +∞ or −∞. As the name indicates they are parallel to the x-axis. Vertical asymptotes are vertical lines (perpendicular to the x-axis) near which the function grows without bound. Oblique asymptotes are diagonal lines such that the difference between the curve and the line approaches 0 as x tends to +∞ or −∞.

Vertical asymptotes

The line x = a is a vertical asymptote of the graph of the function y = ƒ(x) if at least one of the following statements is true: 1. $\lim _{x\to a^{-}}f(x)=\pm \infty ,$ 2. $\lim _{x\to a^{+}}f(x)=\pm \infty ,$ where $\lim _{x\to a^{-}}$ is the limit as x approaches the value a from the left (from lesser values), and $\lim _{x\to a^{+}}$ is the limit as x approaches a from the right. For example, if ƒ(x) = x/(x–1), the numerator approaches 1 and the denominator approaches 0 as x approaches 1. So $\lim _{x\to 1^{+}}{\frac {x}{x-1}}=+\infty $ $\lim _{x\to 1^{-}}{\frac {x}{x-1}}=-\infty $ and the curve has a vertical asymptote x = 1. The function ƒ(x) may or may not be defined at a, and its precise value at the point x = a does not affect the asymptote. For example, the function $f(x)={\begin{cases}{\frac {1}{x}}&{\text{if }}x>0,\\5&{\text{if }}x\leq 0\end{cases}}$ has a limit of +∞ as x → 0+, so ƒ(x) has the vertical asymptote x = 0, even though ƒ(0) = 5. The graph of this function does intersect the vertical asymptote once, at (0, 5). It is impossible for the graph of a function to intersect a vertical asymptote (or a vertical line in general) in more than one point. Moreover, if a function is continuous at each point where it is defined, its graph cannot intersect any vertical asymptote. A common example of a vertical asymptote is the case of a rational function at a point x such that the denominator is zero and the numerator is non-zero. If a function has a vertical asymptote, it is not necessarily true that the derivative of the function has a vertical asymptote at the same place. An example is $f(x)={\tfrac {1}{x}}+\sin({\tfrac {1}{x}})\quad $ at $\quad x=0$. This function has a vertical asymptote at $x=0,$ because $\lim _{x\to 0^{+}}f(x)=\lim _{x\to 0^{+}}\left({\tfrac {1}{x}}+\sin \left({\tfrac {1}{x}}\right)\right)=+\infty ,$ and $\lim _{x\to 0^{-}}f(x)=\lim _{x\to 0^{-}}\left({\tfrac {1}{x}}+\sin \left({\tfrac {1}{x}}\right)\right)=-\infty .$ The derivative of $f$ is the function $f'(x)={\frac {-(\cos({\tfrac {1}{x}})+1)}{x^{2}}}.$ For the sequence of points $x_{n}={\frac {(-1)^{n}}{(2n+1)\pi }},\quad $ for $\quad n=0,1,2,\ldots $ that approaches $x=0$ both from the left and from the right, the values $f'(x_{n})$ are all $0$. Therefore, neither one-sided limit of $f'$ at $0$ can be $+\infty $ or $-\infty $. Hence $f'(x)$ does not have a vertical asymptote at $x=0$.

Horizontal asymptotes

Horizontal asymptotes are horizontal lines that the graph of the function approaches as x → ±∞. The horizontal line y = c is a horizontal asymptote of the function y = ƒ(x) if $\lim _{x\rightarrow -\infty }f(x)=c$ or $\lim _{x\rightarrow +\infty }f(x)=c$.
In the first case, ƒ(x) has y = c as an asymptote when x tends to −∞, and in the second ƒ(x) has y = c as an asymptote as x tends to +∞. For example, the arctangent function satisfies $\lim _{x\rightarrow -\infty }\arctan(x)=-{\frac {\pi }{2}}$ and $\lim _{x\rightarrow +\infty }\arctan(x)={\frac {\pi }{2}}.$ So the line y = –π/2 is a horizontal asymptote for the arctangent when x tends to –∞, and y = π/2 is a horizontal asymptote for the arctangent when x tends to +∞. Functions may lack horizontal asymptotes on either or both sides, or may have one horizontal asymptote that is the same in both directions. For example, the function ƒ(x) = 1/(x²+1) has a horizontal asymptote at y = 0 when x tends both to −∞ and +∞ because, respectively, $\lim _{x\to -\infty }{\frac {1}{x^{2}+1}}=\lim _{x\to +\infty }{\frac {1}{x^{2}+1}}=0.$ Other common functions that have one or two horizontal asymptotes include x ↦ 1/x (whose graph is a hyperbola), the Gaussian function $x\mapsto \exp(-x^{2}),$ the error function, and the logistic function.

Oblique asymptotes

When a linear asymptote is not parallel to the x- or y-axis, it is called an oblique asymptote or slant asymptote. A function ƒ(x) is asymptotic to the straight line y = mx + n (m ≠ 0) if $\lim _{x\to +\infty }\left[f(x)-(mx+n)\right]=0\,{\mbox{ or }}\lim _{x\to -\infty }\left[f(x)-(mx+n)\right]=0.$ In the first case the line y = mx + n is an oblique asymptote of ƒ(x) when x tends to +∞, and in the second case the line y = mx + n is an oblique asymptote of ƒ(x) when x tends to −∞. An example is ƒ(x) = x + 1/x, which has the oblique asymptote y = x (that is m = 1, n = 0) as seen in the limits $\lim _{x\to \pm \infty }\left[f(x)-x\right]$ $=\lim _{x\to \pm \infty }\left[\left(x+{\frac {1}{x}}\right)-x\right]$ $=\lim _{x\to \pm \infty }{\frac {1}{x}}=0.$

Elementary methods for identifying asymptotes

The asymptotes of many elementary functions can be found without the explicit use of limits (although the derivations of such methods typically use limits).

General computation of oblique asymptotes for functions

The oblique asymptote, for the function f(x), will be given by the equation y = mx + n. The value for m is computed first and is given by $m\;{\stackrel {\text{def}}{=}}\,\lim _{x\rightarrow a}f(x)/x$ where a is either $-\infty $ or $+\infty $ depending on the case being studied. It is good practice to treat the two cases separately. If this limit doesn't exist then there is no oblique asymptote in that direction. Having found m, the value for n can then be computed by $n\;{\stackrel {\text{def}}{=}}\,\lim _{x\rightarrow a}(f(x)-mx)$ where a should be the same value used before. If this limit fails to exist then there is no oblique asymptote in that direction, even if the limit defining m exists. Otherwise y = mx + n is the oblique asymptote of ƒ(x) as x tends to a. For example, the function ƒ(x) = (2x² + 3x + 1)/x has $m=\lim _{x\rightarrow +\infty }f(x)/x=\lim _{x\rightarrow +\infty }{\frac {2x^{2}+3x+1}{x^{2}}}=2$ and then $n=\lim _{x\rightarrow +\infty }(f(x)-mx)=\lim _{x\rightarrow +\infty }\left({\frac {2x^{2}+3x+1}{x}}-2x\right)=3$ so that y = 2x + 3 is the asymptote of ƒ(x) when x tends to +∞. The function ƒ(x) = ln x has $m=\lim _{x\rightarrow +\infty }f(x)/x=\lim _{x\rightarrow +\infty }{\frac {\ln x}{x}}=0$ and then $n=\lim _{x\rightarrow +\infty }(f(x)-mx)=\lim _{x\rightarrow +\infty }\ln x,$ which does not exist. So the graph of ln x does not have an asymptote when x tends to +∞.
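The two-limit recipe just described translates directly into computer algebra. A minimal sketch using sympy; the helper name oblique_asymptote is our own, not a library function:

```python
import sympy as sp

x = sp.symbols('x')

def oblique_asymptote(f, direction=sp.oo):
    """Compute y = m*x + n via the two limits described above (a sketch).

    Returns None when either limit fails to be finite; is_finite may also
    be inconclusive for exotic expressions, in which case we give up."""
    m = sp.limit(f / x, x, direction)
    if not m.is_finite:
        return None                      # no oblique/horizontal asymptote
    n = sp.limit(f - m * x, x, direction)
    if not n.is_finite:
        return None                      # m exists but n diverges (e.g. ln x)
    return m * x + n

print(oblique_asymptote((2*x**2 + 3*x + 1) / x))   # 2*x + 3, as computed above
print(oblique_asymptote(x + 1/x))                  # x
print(oblique_asymptote(sp.log(x)))                # None: the n-limit diverges
```

The third call reproduces the ln x example: the slope limit exists (m = 0) but the intercept limit diverges, so there is no asymptote in that direction.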
Asymptotes for rational functions

A rational function has at most one horizontal asymptote or oblique (slant) asymptote, and possibly many vertical asymptotes. The degree of the numerator and degree of the denominator determine whether or not there are any horizontal or oblique asymptotes, as follows (writing deg(numerator) for the degree of the numerator and deg(denominator) for the degree of the denominator):

• deg(numerator) − deg(denominator) < 0: the asymptote is $y=0$. Example: $f(x)={\frac {1}{x^{2}+1}}$, with asymptote $y=0$.
• deg(numerator) − deg(denominator) = 0: the asymptote is y = the ratio of the leading coefficients. Example: $f(x)={\frac {2x^{2}+7}{3x^{2}+x+12}}$, with asymptote $y={\frac {2}{3}}$.
• deg(numerator) − deg(denominator) = 1: the asymptote is y = the quotient of the Euclidean division of the numerator by the denominator. Example: $f(x)={\frac {2x^{2}+3x+5}{x}}=2x+3+{\frac {5}{x}}$, with asymptote $y=2x+3$.
• deg(numerator) − deg(denominator) > 1: there is no linear asymptote, but a curvilinear asymptote exists. Example: $f(x)={\frac {2x^{4}}{3x^{2}+1}}$.

The vertical asymptotes occur only when the denominator is zero (if both the numerator and denominator are zero, the multiplicities of the zeros are compared). For example, the following function has vertical asymptotes at x = 0 and x = 1, but not at x = 2: $f(x)={\frac {x^{2}-5x+6}{x^{3}-3x^{2}+2x}}={\frac {(x-2)(x-3)}{x(x-1)(x-2)}}$

Oblique asymptotes of rational functions

When the numerator of a rational function has degree exactly one greater than the denominator, the function has an oblique (slant) asymptote. The asymptote is the polynomial term obtained after dividing the numerator by the denominator. This phenomenon occurs because when dividing the fraction, there will be a linear term and a remainder. For example, consider the function $f(x)={\frac {x^{2}+x+1}{x+1}}=x+{\frac {1}{x+1}}$ shown to the right. As the value of x increases, f approaches the asymptote y = x. This is because the other term, 1/(x+1), approaches 0. If the degree of the numerator is more than 1 larger than the degree of the denominator, and the denominator does not divide the numerator, there will be a nonzero remainder that goes to zero as x increases, but the quotient will not be linear, and the function does not have an oblique asymptote.

Transformations of known functions

If a known function has an asymptote (such as y = 0 for f(x) = e^x), then its translations also have an asymptote.

• If x = a is a vertical asymptote of f(x), then x = a + h is a vertical asymptote of f(x − h)
• If y = c is a horizontal asymptote of f(x), then y = c + k is a horizontal asymptote of f(x) + k

If a known function has an asymptote, then a scaling of the function also has an asymptote.

• If y = ax + b is an asymptote of f(x), then y = cax + cb is an asymptote of cf(x)

For example, f(x) = e^(x−1) + 2 has horizontal asymptote y = 0 + 2 = 2, and no vertical or oblique asymptotes.

General definition

Let A : (a,b) → R² be a parametric plane curve, in coordinates A(t) = (x(t), y(t)). Suppose that the curve tends to infinity, that is: $\lim _{t\rightarrow b}(x^{2}(t)+y^{2}(t))=\infty .$ A line ℓ is an asymptote of A if the distance from the point A(t) to ℓ tends to zero as t → b.[7] From the definition, only open curves that have some infinite branch can have an asymptote. No closed curve can have an asymptote. For example, the upper right branch of the curve y = 1/x can be defined parametrically as x = t, y = 1/t (where t > 0). First, x → ∞ as t → ∞ and the distance from the curve to the x-axis is 1/t which approaches 0 as t → ∞. Therefore, the x-axis is an asymptote of the curve.
Also, y → ∞ as t → 0 from the right, and the distance between the curve and the y-axis is t, which approaches 0 as t → 0. So the y-axis is also an asymptote. A similar argument shows that the lower left branch of the curve also has the same two lines as asymptotes. Although the definition here uses a parameterization of the curve, the notion of asymptote does not depend on the parameterization. In fact, if the equation of the line is $ax+by+c=0$ then the distance from the point A(t) = (x(t),y(t)) to the line is given by ${\frac {|ax(t)+by(t)+c|}{\sqrt {a^{2}+b^{2}}}}.$ If γ(t) is a change of parameterization, then the distance becomes ${\frac {|ax(\gamma (t))+by(\gamma (t))+c|}{\sqrt {a^{2}+b^{2}}}},$ which tends to zero simultaneously with the previous expression. An important case is when the curve is the graph of a real function (a function of one real variable and returning real values). The graph of the function y = ƒ(x) is the set of points of the plane with coordinates (x,ƒ(x)). For this, a parameterization is $t\mapsto (t,f(t)).$ This parameterization is to be considered over the open intervals (a,b), where a can be −∞ and b can be +∞. An asymptote can be either vertical or non-vertical (oblique or horizontal). In the first case its equation is x = c, for some real number c. The non-vertical case has equation y = mx + n, where m and n are real numbers. All three types of asymptotes can be present at the same time in specific examples. Unlike asymptotes for curves that are graphs of functions, a general curve may have more than two non-vertical asymptotes, and may cross its vertical asymptotes more than once.

Curvilinear asymptotes

Let A : (a,b) → R² be a parametric plane curve, in coordinates A(t) = (x(t),y(t)), and B be another (unparameterized) curve. Suppose, as before, that the curve A tends to infinity. The curve B is a curvilinear asymptote of A if the shortest distance from the point A(t) to a point on B tends to zero as t → b. Sometimes B is simply referred to as an asymptote of A, when there is no risk of confusion with linear asymptotes.[8] For example, the function $y={\frac {x^{3}+2x^{2}+3x+4}{x}}$ has a curvilinear asymptote y = x² + 2x + 3, which is known as a parabolic asymptote because it is a parabola rather than a straight line.[9]

Asymptotes and curve sketching

Asymptotes are used in procedures of curve sketching. An asymptote serves as a guide line to show the behavior of the curve towards infinity.[10] In order to get better approximations of the curve, curvilinear asymptotes have also been used,[11] although the term asymptotic curve seems to be preferred.[12]

Algebraic curves

The asymptotes of an algebraic curve in the affine plane are the lines that are tangent to the projectivized curve through a point at infinity.[13] For example, one may identify the asymptotes to the unit hyperbola in this manner. Asymptotes are often considered only for real curves,[14] although they also make sense when defined in this way for curves over an arbitrary field.[15] A plane curve of degree n intersects its asymptote at most at n−2 other points, by Bézout's theorem, as the intersection at infinity is of multiplicity at least two. For a conic, there are a pair of lines that do not intersect the conic at any complex point: these are the two asymptotes of the conic. A plane algebraic curve is defined by an equation of the form P(x,y) = 0 where P is a polynomial of degree n $P(x,y)=P_{n}(x,y)+P_{n-1}(x,y)+\cdots +P_{1}(x,y)+P_{0}$ where Pk is homogeneous of degree k.
Vanishing of the linear factors of the highest degree term Pn defines the asymptotes of the curve: setting Q = Pn, if Pn(x, y) = (ax − by) Qn−1(x, y), then the line $Q'_{x}(b,a)x+Q'_{y}(b,a)y+P_{n-1}(b,a)=0$ is an asymptote if $Q'_{x}(b,a)$ and $Q'_{y}(b,a)$ are not both zero. If $Q'_{x}(b,a)=Q'_{y}(b,a)=0$ and $P_{n-1}(b,a)\neq 0$, there is no asymptote, but the curve has a branch that looks like a branch of a parabola. Such a branch is called a parabolic branch, even when it does not have any parabola that is a curvilinear asymptote. If $Q'_{x}(b,a)=Q'_{y}(b,a)=P_{n-1}(b,a)=0,$ the curve has a singular point at infinity which may have several asymptotes or parabolic branches. Over the complex numbers, Pn splits into linear factors, each of which defines an asymptote (or several for multiple factors). Over the reals, Pn splits into factors that are linear or quadratic. Only the linear factors correspond to infinite (real) branches of the curve, but if a linear factor has multiplicity greater than one, the curve may have several asymptotes or parabolic branches. It may also occur that such a multiple linear factor corresponds to two complex conjugate branches, and does not correspond to any infinite branch of the real curve. For example, the curve x⁴ + y² − 1 = 0 has no real points outside the square $|x|\leq 1,|y|\leq 1$, but its highest order term gives the linear factor x with multiplicity 4, leading to the unique asymptote x = 0.

Asymptotic cone

The hyperbola ${\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=1$ has the two asymptotes $y=\pm {\frac {b}{a}}x.$ The equation for the union of these two lines is ${\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=0.$ Similarly, the hyperboloid ${\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}-{\frac {z^{2}}{c^{2}}}=1$ is said to have the asymptotic cone[16][17] ${\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}-{\frac {z^{2}}{c^{2}}}=0.$ The distance between the hyperboloid and cone approaches 0 as the distance from the origin approaches infinity. More generally, consider a surface that has an implicit equation $P_{d}(x,y,z)+P_{d-2}(x,y,z)+\cdots +P_{0}=0,$ where the $P_{i}$ are homogeneous polynomials of degree $i$ and $P_{d-1}=0$. Then the equation $P_{d}(x,y,z)=0$ defines a cone which is centered at the origin. It is called an asymptotic cone because the distance from a point of the surface to the cone tends to zero as the point tends to infinity.

See also

• Big O notation

References

General references • Kuptsov, L.P. (2001) [1994], "Asymptote", Encyclopedia of Mathematics, EMS Press

Specific references 1. Williamson, Benjamin (1899), "Asymptotes", An elementary treatise on the differential calculus 2. Nunemacher, Jeffrey (1999), "Asymptotes, Cubic Curves, and the Projective Plane", Mathematics Magazine, 72 (3): 183–192, CiteSeerX 10.1.1.502.72, doi:10.2307/2690881, JSTOR 2690881 3. Oxford English Dictionary, second edition, 1989. 4. D.E. Smith, History of Mathematics, vol 2, Dover (1958) p. 318 5. Apostol, Tom M. (1967), Calculus, Vol. 1: One-Variable Calculus with an Introduction to Linear Algebra (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-00005-1, §4.18. 6. Reference for section: "Asymptote" The Penny Cyclopædia vol. 2, The Society for the Diffusion of Useful Knowledge (1841) Charles Knight and Co., London p. 541 7. Pogorelov, A. V. (1959), Differential geometry, Translated from the first Russian ed. by L. F. Boron, Groningen: P. Noordhoff N. V., MR 0114163, §8. 8. Fowler, R. H.
(1920), The elementary differential geometry of plane curves, Cambridge, University Press, hdl:2027/uc1.b4073882, ISBN 0-486-44277-2, p. 89ff. 9. William Nicholson, The British enciclopaedia, or dictionary of arts and sciences; comprising an accurate and popular view of the present improved state of human knowledge, Vol. 5, 1809 10. Frost, P. An elementary treatise on curve tracing (1918) online 11. Fowler, R. H. The elementary differential geometry of plane curves, Cambridge, University Press, 1920, pp. 89ff. (online at archive.org) 12. Frost, P. An elementary treatise on curve tracing, 1918, page 5 13. C.G. Gibson (1998) Elementary Geometry of Algebraic Curves, § 12.6 Asymptotes, Cambridge University Press, ISBN 0-521-64140-3 14. Coolidge, Julian Lowell (1959), A treatise on algebraic plane curves, New York: Dover Publications, ISBN 0-486-49576-0, MR 0120551, pp. 40–44. 15. Kunz, Ernst (2005), Introduction to plane algebraic curves, Boston, MA: Birkhäuser Boston, ISBN 978-0-8176-4381-2, MR 2156630, p. 121. 16. L.P. Siceloff, G. Wentworth, D.E. Smith, Analytic geometry (1922) p. 271 17. P. Frost, Solid geometry (1875). This has a more general treatment of asymptotic surfaces.

External links

Wikimedia Commons has media related to Asymptotics. • Asymptote at PlanetMath. • Hyperboloid and Asymptotic Cone, string surface model, 1872, Archived 2012-02-15 at the Wayback Machine, from the Science Museum
Spatial gradient

A spatial gradient is a gradient whose components are spatial derivatives, i.e., the rate of change of a given scalar physical quantity with respect to the position coordinates. Homogeneous regions have a spatial gradient vector norm equal to zero. When evaluated over vertical position (altitude or depth), it is called the vertical gradient; the remaining component is the horizontal gradient, the vector projection of the full gradient onto the horizontal plane.

Examples:

Biology • Concentration gradient, the ratio of solute concentration between two adjoining regions • Potential gradient, the difference in electric potential between two adjoining regions

Fluid dynamics and earth science • Density gradient • Pressure gradient • Temperature gradient • Geothermal gradient • Sound speed gradient • Wind gradient • Lapse rate

See also • Grade (slope) • Time derivative • Material derivative • Structure tensor • Surface gradient
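A small numerical sketch of these definitions using numpy; the sample field, grid, and the choice of which axis is "vertical" are arbitrary assumptions made for the example:

```python
import numpy as np

# Sample a scalar field T(z, x) on a uniform 50 x 50 grid over [0,1] x [0,1];
# treat axis 0 as vertical position z and axis 1 as horizontal position x.
z, x = np.mgrid[0:1:50j, 0:1:50j]
T = np.exp(-z) * np.sin(2 * np.pi * x)    # an illustrative temperature field

dz = dx = 1 / 49                           # grid spacing
dT_dz, dT_dx = np.gradient(T, dz, dx)      # components of the spatial gradient

vertical_gradient = dT_dz                  # rate of change with depth/altitude
horizontal_gradient = dT_dx                # projection onto the horizontal axis
norm = np.hypot(dT_dz, dT_dx)              # zero everywhere iff the region is homogeneous
print(norm.max())
```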
Vertical and horizontal bundles

In mathematics, the vertical bundle and the horizontal bundle are vector bundles associated to a smooth fiber bundle. More precisely, given a smooth fiber bundle $\pi \colon E\to B$, the vertical bundle $VE$ and horizontal bundle $HE$ are subbundles of the tangent bundle $TE$ of $E$ whose Whitney sum satisfies $VE\oplus HE\cong TE$. This means that, over each point $e\in E$, the fibers $V_{e}E$ and $H_{e}E$ form complementary subspaces of the tangent space $T_{e}E$. The vertical bundle consists of all vectors that are tangent to the fibers, while the horizontal bundle requires some choice of complementary subbundle.

To make this precise, define the vertical space $V_{e}E$ at $e\in E$ to be $\ker(d\pi _{e})$. That is, the differential $d\pi _{e}\colon T_{e}E\to T_{b}B$ (where $b=\pi (e)$) is a linear surjection whose kernel has the same dimension as the fibers of $\pi $. If we write $F=\pi ^{-1}(b)$, then $V_{e}E$ consists of exactly the vectors in $T_{e}E$ which are also tangent to $F$. The name is motivated by low-dimensional examples like the trivial line bundle over a circle, which is sometimes depicted as a vertical cylinder projecting to a horizontal circle. A subspace $H_{e}E$ of $T_{e}E$ is called a horizontal space if $T_{e}E$ is the direct sum of $V_{e}E$ and $H_{e}E$.

The disjoint union of the vertical spaces $V_{e}E$ for each e in E is the subbundle VE of TE; this is the vertical bundle of E. Likewise, provided the horizontal spaces $H_{e}E$ vary smoothly with e, their disjoint union is a horizontal bundle. The use of the words "the" and "a" here is intentional: each vertical subspace is unique, defined explicitly by $\ker(d\pi _{e})$. Excluding trivial cases, there are infinitely many horizontal subspaces at each point. Also note that arbitrary choices of horizontal space at each point will not, in general, form a smooth vector bundle; they must also vary in an appropriately smooth way.

The horizontal bundle is one way to formulate the notion of an Ehresmann connection on a fiber bundle. Thus, for example, if E is a principal G-bundle, then the horizontal bundle is usually required to be G-invariant: such a choice is equivalent to a connection on the principal bundle.[1] This notably occurs when E is the frame bundle associated to some vector bundle, which is a principal $\operatorname {GL} _{n}$-bundle.

Formal definition

Let π : E → B be a smooth fiber bundle over a smooth manifold B. The vertical bundle is the kernel VE := ker(dπ) of the tangent map dπ : TE → TB.[2] Since dπe is surjective at each point e, it yields a regular subbundle of TE. Furthermore, the vertical bundle VE is also integrable. An Ehresmann connection on E is a choice of a complementary subbundle HE to VE in TE, called the horizontal bundle of the connection. At each point e in E, the two subspaces form a direct sum, such that TeE = VeE ⊕ HeE.

Example

A simple example of a smooth fiber bundle is a Cartesian product of two manifolds. Consider the bundle B1 := (M × N, pr1) with bundle projection pr1 : M × N → M : (x, y) → x. Applying the definition in the paragraph above to find the vertical bundle, we consider first a point (m,n) in M × N. Then the image of this point under pr1 is m. The preimage of m under this same pr1 is {m} × N, so that T(m,n)({m} × N) = {m} × TnN. The vertical bundle is then VB1 = M × TN, which is a subbundle of T(M × N).
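Before turning to the second projection, here is a tiny symbolic check of the definition V_eE = ker(dπ_e) for this product bundle, using sympy and taking M = R² and N = R for concreteness (an assumption made only for the example):

```python
import sympy as sp

x, y, n = sp.symbols('x y n')        # coordinates on E = M x N with M = R^2, N = R
pr1 = sp.Matrix([x, y])              # bundle projection pr1(x, y, n) = (x, y)

J = pr1.jacobian([x, y, n])          # the differential d(pr1), constant here
V = J.nullspace()                    # vertical space = ker(d pi)
print(V)   # [Matrix([[0], [0], [1]])]: exactly the directions tangent to {m} x N
```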
If we take the other projection pr2 : M × N → N : (x, y) → y to define the fiber bundle B2 := (M × N, pr2), then the vertical bundle will be VB2 = TM × N. In both cases, the product structure gives a natural choice of horizontal bundle, and hence an Ehresmann connection: the horizontal bundle of B1 is the vertical bundle of B2 and vice versa.

Properties

Various important tensors and differential forms from differential geometry take on specific properties on the vertical and horizontal bundles, or even can be defined in terms of them. Some of these are:

• A vertical vector field is a vector field that is in the vertical bundle. That is, for each point e of E, one chooses a vector $v_{e}\in V_{e}E$ where $V_{e}E=T_{e}(E_{\pi (e)})\subset T_{e}E$ is the vertical vector space at e.[2]
• A differentiable r-form $\alpha $ on E is said to be a horizontal form if $\alpha (v_{1},...,v_{r})=0$ whenever at least one of the vectors $v_{1},...,v_{r}$ is vertical.
• The connection form vanishes on the horizontal bundle, and is non-zero only on the vertical bundle. In this way, the connection form can be used to define the horizontal bundle: the horizontal bundle is the kernel of the connection form.
• The solder form or tautological one-form vanishes on the vertical bundle and is non-zero only on the horizontal bundle. By definition, the solder form takes its values entirely in the horizontal bundle.
• For the case of a frame bundle, the torsion form vanishes on the vertical bundle, and can be used to define exactly that part that needs to be added to an arbitrary connection to turn it into a Levi-Civita connection, i.e. to make the connection torsion-free. Indeed, if one writes θ for the solder form, then the torsion tensor Θ is given by Θ = D θ (with D the exterior covariant derivative). For any given connection ω, there is a unique one-form σ on TE, called the contorsion tensor, that vanishes on the vertical bundle and is such that ω+σ is another connection 1-form that is torsion-free. The resulting one-form ω+σ is nothing other than the Levi-Civita connection. One can take this as a definition: since the torsion is given by $\Theta =D\theta =d\theta +\omega \wedge \theta $, the vanishing of the torsion is equivalent to having $d\theta =-(\omega +\sigma )\wedge \theta $, and it is not hard to show that σ must vanish on the vertical bundle, and that σ must be G-invariant on each fibre (more precisely, that σ transforms in the adjoint representation of G). Note that this defines the Levi-Civita connection without making any explicit reference to any metric tensor (although the metric tensor can be understood to be a special case of a solder form, as it establishes a mapping between the tangent and cotangent bundles of the base space, i.e. between the horizontal and vertical subspaces of the frame bundle).
• In the case where E is a principal bundle, the fundamental vector field must necessarily live in the vertical bundle and vanish in any horizontal bundle.

Notes

1. David Bleecker, Gauge Theory and Variational Principles (1981) Addison-Wesely Publishing Company ISBN 0-201-10096-7 (See theorem 1.2.4) 2. Kolář, Ivan; Michor, Peter; Slovák, Jan (1993), Natural Operations in Differential Geometry (PDF), Springer-Verlag (page 77)

References

• Choquet-Bruhat, Yvonne; DeWitt-Morette, Cécile (1977), Analysis, Manifolds and Physics, Amsterdam: Elsevier, ISBN 978-0-7204-0494-4 • Kobayashi, Shoshichi; Nomizu, Katsumi (1996). Foundations of Differential Geometry, Vol. 1 (New ed.). Wiley Interscience.
ISBN 0-471-15733-3. • Kolář, Ivan; Michor, Peter; Slovák, Jan (1993), Natural Operations in Differential Geometry (PDF), Springer-Verlag • Krupka, Demeter; Janyška, Josef (1990), Lectures on differential invariants, Univerzita J. E. Purkyně V Brně, ISBN 80-210-0165-8 • Saunders, D.J. (1989), The geometry of jet bundles, Cambridge University Press, ISBN 0-521-36948-7
Inverted snub dodecadodecahedron

In geometry, the inverted snub dodecadodecahedron (or vertisnub dodecadodecahedron) is a nonconvex uniform polyhedron, indexed as U60.[1] It is given a Schläfli symbol sr{5/3,5}.

Inverted snub dodecadodecahedron: Type: Uniform star polyhedron; Elements: F = 84, E = 150, V = 60 (χ = −6); Faces by sides: 60{3}+12{5}+12{5/2}; Wythoff symbol: | 5/3 2 5; Symmetry group: I, [5,3]+, 532; Index references: U60, C76, W114; Dual polyhedron: Medial inverted pentagonal hexecontahedron; Vertex figure: 3.3.5.3.5/3; Bowers acronym: Isdid

Cartesian coordinates

Cartesian coordinates for the vertices of an inverted snub dodecadodecahedron are all the even permutations of (±2α, ±2, ±2β), (±(α+β/τ+τ), ±(−ατ+β+1/τ), ±(α/τ+βτ−1)), (±(−α/τ+βτ+1), ±(−α+β/τ−τ), ±(ατ+β−1/τ)), (±(−α/τ+βτ−1), ±(α−β/τ−τ), ±(ατ+β+1/τ)) and (±(α+β/τ−τ), ±(ατ−β+1/τ), ±(α/τ+βτ+1)), with an even number of plus signs, where β = (α²/τ+τ)/(ατ−1/τ), where τ = (1+√5)/2 is the golden mean and α is the negative real root of τα⁴−α³+2α²−α−1/τ, or approximately −0.3352090. Taking the odd permutations of the above coordinates with an odd number of plus signs gives another form, the enantiomorph of the other one.

Related polyhedra

Medial inverted pentagonal hexecontahedron: Type: Star polyhedron; Elements: F = 60, E = 150, V = 84 (χ = −6); Symmetry group: I, [5,3]+, 532; Index references: DU60; Dual polyhedron: Inverted snub dodecadodecahedron

The medial inverted pentagonal hexecontahedron (or midly petaloid ditriacontahedron) is a nonconvex isohedral polyhedron. It is the dual of the uniform inverted snub dodecadodecahedron. Its faces are irregular nonconvex pentagons, with one very acute angle.

Proportions

Denote the golden ratio by $\phi $, and let $\xi \approx -0.236\,993\,843\,45$ be the largest (least negative) real zero of the polynomial $P=8x^{4}-12x^{3}+5x+1$. Then each face has three equal angles of $\arccos(\xi )\approx 103.709\,182\,219\,53^{\circ }$, one of $\arccos(\phi ^{2}\xi +\phi )\approx 3.990\,130\,423\,41^{\circ }$ and one of $360^{\circ }-\arccos(\phi ^{-2}\xi -\phi ^{-1})\approx 224.882\,322\,917\,99^{\circ }$. Each face has one medium-length edge, two short and two long ones. If the medium length is $2$, then the short edges have length $1-{\sqrt {(1-\xi )/(\phi ^{3}-\xi )}}\approx 0.474\,126\,460\,54$, and the long edges have length $1+{\sqrt {(1-\xi )/(-\phi ^{-3}-\xi )}}\approx 37.551\,879\,448\,54$. The dihedral angle equals $\arccos(\xi /(\xi +1))\approx 108.095\,719\,352\,34^{\circ }$. The other real zero of the polynomial $P$ plays a similar role for the medial pentagonal hexecontahedron.

See also

• List of uniform polyhedra • Snub dodecadodecahedron

References

• Wenninger, Magnus (1983), Dual Models, Cambridge University Press, ISBN 978-0-521-54325-5, MR 0730208, p. 124 1. Roman, Maeder. "60: inverted snub dodecadodecahedron". MathConsult.

External links

• Weisstein, Eric W. "Medial inverted pentagonal hexecontahedron". MathWorld. • Weisstein, Eric W. "Inverted snub dodecadodecahedron". MathWorld.
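The constants α and ξ quoted above can be checked numerically. A sketch with numpy, assuming (as the text states) that each quartic has the indicated real zeros:

```python
import numpy as np

tau = (1 + np.sqrt(5)) / 2   # the golden mean

# alpha: the negative real root of tau*x^4 - x^3 + 2x^2 - x - 1/tau
roots = np.roots([tau, -1.0, 2.0, -1.0, -1.0 / tau])
alpha = next(r.real for r in roots if abs(r.imag) < 1e-9 and r.real < 0)
print(alpha)                 # ≈ -0.3352090

# xi: the largest (least negative) real zero of P = 8x^4 - 12x^3 + 5x + 1
real_zeros = sorted(r.real for r in np.roots([8, -12, 0, 5, 1])
                    if abs(r.imag) < 1e-9)
xi = real_zeros[-1]
print(xi)                    # ≈ -0.23699384345

# dihedral angle of the medial inverted pentagonal hexecontahedron
print(np.degrees(np.arccos(xi / (xi + 1))))   # ≈ 108.0957°
```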
Brownian excursion

In probability theory a Brownian excursion process is a stochastic process that is closely related to a Wiener process (or Brownian motion). Realizations of Brownian excursion processes are essentially just realizations of a Wiener process selected to satisfy certain conditions. In particular, a Brownian excursion process is a Wiener process conditioned to be positive and to take the value 0 at time 1. Alternatively, it is a Brownian bridge process conditioned to be positive. Brownian excursion processes are important because, among other reasons, they naturally arise as the limit process of a number of conditional functional central limit theorems.[1]

Definition

A Brownian excursion process, $e$, is a Wiener process (or Brownian motion) conditioned to be positive and to take the value 0 at time 1. Alternatively, it is a Brownian bridge process conditioned to be positive. Another representation of a Brownian excursion $e$ in terms of a Brownian motion process W (due to Paul Lévy and noted by Kiyosi Itô and Henry P. McKean, Jr.[2]) is in terms of the last time $\tau _{-}$ that W hits zero before time 1 and the first time $\tau _{+}$ that Brownian motion $W$ hits zero after time 1:[2] $\{e(t):\ {0\leq t\leq 1}\}\ {\stackrel {d}{=}}\ \left\{{\frac {|W((1-t)\tau _{-}+t\tau _{+})|}{\sqrt {\tau _{+}-\tau _{-}}}}:\ 0\leq t\leq 1\right\}.$ Let $\tau _{m}$ be the time that a Brownian bridge process $W_{0}$ achieves its minimum on [0, 1]. Vervaat (1979) shows that $\{e(t):\ {0\leq t\leq 1}\}\ {\stackrel {d}{=}}\ \left\{W_{0}(\tau _{m}+t{\bmod {1}})-W_{0}(\tau _{m}):\ 0\leq t\leq 1\right\}.$

Properties

Vervaat's representation of a Brownian excursion has several consequences for various functions of $e$. In particular: $M_{+}\equiv \sup _{0\leq t\leq 1}e(t)\ {\stackrel {d}{=}}\ \sup _{0\leq t\leq 1}W_{0}(t)-\inf _{0\leq t\leq 1}W_{0}(t)$ (this can also be derived by explicit calculations[3][4]) and $\int _{0}^{1}e(t)\,dt\ {\stackrel {d}{=}}\ \int _{0}^{1}W_{0}(t)\,dt-\inf _{0\leq t\leq 1}W_{0}(t).$ The following result holds:[5] $EM_{+}={\sqrt {\pi /2}}\approx 1.25331\ldots ,$ and the following values for the second moment and variance can be calculated from the exact form of the distribution and density:[5] $EM_{+}^{2}\approx 1.64493\ldots \ ,\ \ \operatorname {Var} (M_{+})\approx 0.0741337\ldots .$ Groeneboom (1989), Lemma 4.2 gives an expression for the Laplace transform of (the density of) $\int _{0}^{1}e(t)\,dt$. A formula for a certain double transform of the distribution of this area integral is given by Louchard (1984). Groeneboom (1983) and Pitman (1983) give decompositions of Brownian motion $W$ in terms of i.i.d. Brownian excursions and the least concave majorant (or greatest convex minorant) of $W$. For an introduction to Itô's general theory of Brownian excursions and the Itô Poisson process of excursions, see Revuz and Yor (1994), chapter XII.
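Vervaat's representation gives a direct way to simulate a Brownian excursion: simulate a Brownian bridge on a grid and cyclically shift it so that its minimum moves to the origin. A Monte Carlo sketch in Python/numpy (the step counts and seed are arbitrary choices) that checks E M₊ = √(π/2) from above and E A₊ = ½√(π/2) from the section below:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 2000, 4000
t = np.arange(n + 1) / n
maxima = np.empty(reps)
areas = np.empty(reps)

for i in range(reps):
    # Brownian bridge on [0, 1]: W_0(t) = W(t) - t * W(1)
    W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, 1.0 / np.sqrt(n), n))))
    bridge = W - t * W[-1]
    # Vervaat: cyclic shift moving the bridge's minimum to the origin
    m = bridge.argmin()
    exc = np.concatenate((bridge[m:], bridge[1:m + 1])) - bridge[m]
    maxima[i] = exc.max()
    areas[i] = exc.mean()        # Riemann approximation of the area integral

print(maxima.mean(), np.sqrt(np.pi / 2))        # both ≈ 1.2533
print(areas.mean(), 0.5 * np.sqrt(np.pi / 2))   # both ≈ 0.6267
```

Both estimates carry discretization and Monte Carlo error of a few percent at these settings; the point is only to illustrate the representation and the two expectations.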
Connections and applications

The Brownian excursion area $A_{+}\equiv \int _{0}^{1}e(t)\,dt$ arises in connection with the enumeration of connected graphs, many other problems in combinatorial theory (see, e.g.,[6][7][8][9][10]), and the limit distribution of the Betti numbers of certain varieties in cohomology theory.[11] Takacs (1991a) shows that $A_{+}$ has density $f_{A_{+}}(x)={\frac {2{\sqrt {6}}}{x^{2}}}\sum _{j=1}^{\infty }v_{j}^{2/3}e^{-v_{j}}U\left(-{\frac {5}{6}},{\frac {4}{3}};v_{j}\right)\ \ {\text{ with }}\ \ v_{j}={\frac {2|a_{j}|^{3}}{27x^{2}}}$ where $a_{j}$ are the zeros of the Airy function and $U$ is the confluent hypergeometric function. Janson and Louchard (2007) show that $f_{A_{+}}(x)\sim {\frac {72{\sqrt {6}}}{\sqrt {\pi }}}x^{2}e^{-6x^{2}}\ \ {\text{ as }}\ \ x\rightarrow \infty ,$ and $P(A_{+}>x)\sim {\frac {6{\sqrt {6}}}{\sqrt {\pi }}}xe^{-6x^{2}}\ \ {\text{ as }}\ \ x\rightarrow \infty .$ They also give higher-order expansions in both cases. Janson (2007) gives moments of $A_{+}$ and many other area functionals. In particular, $E(A_{+})={\frac {1}{2}}{\sqrt {\frac {\pi }{2}}},\ \ E(A_{+}^{2})={\frac {5}{12}}\approx 0.416666\ldots ,\ \ \operatorname {Var} (A_{+})={\frac {5}{12}}-{\frac {\pi }{8}}\approx .0239675\ldots \ .$ Brownian excursions also arise in connection with queuing problems,[12] railway traffic,[13][14] and the heights of random rooted binary trees.[15]

Related processes

• Brownian bridge • Brownian meander • reflected Brownian motion • skew Brownian motion

Notes

1. Durrett, Iglehart: Functionals of Brownian Meander and Brownian Excursion, (1975) 2. Itô and McKean (1974, page 75) 3. Chung (1976) 4. Kennedy (1976) 5. Durrett and Iglehart (1977) 6. Wright, E. M. (1977). "The number of connected sparsely edged graphs". Journal of Graph Theory. 1 (4): 317–330. doi:10.1002/jgt.3190010407. 7. Wright, E. M. (1980). "The number of connected sparsely edged graphs. III. Asymptotic results". Journal of Graph Theory. 4 (4): 393–407. doi:10.1002/jgt.3190040409. 8. Spencer J (1997). "Enumerating graphs and Brownian motion". Communications on Pure and Applied Mathematics. 50 (3): 291–294. doi:10.1002/(sici)1097-0312(199703)50:3<291::aid-cpa4>3.0.co;2-6. 9. Janson, Svante (2007). "Brownian excursion area, Wright's constants in graph enumeration, and other Brownian areas". Probability Surveys. 4: 80–145. arXiv:0704.2289. Bibcode:2007arXiv0704.2289J. doi:10.1214/07-PS104. S2CID 14563292. 10. Flajolet, P.; Louchard, G. (2001). "Analytic variations on the Airy distribution". Algorithmica. 31 (3): 361–377. CiteSeerX 10.1.1.27.3450. doi:10.1007/s00453-001-0056-0. S2CID 6522038. 11. Reineke M (2005). "Cohomology of noncommutative Hilbert schemes". Algebras and Representation Theory. 8 (4): 541–561. arXiv:math/0306185. doi:10.1007/s10468-005-8762-y. S2CID 116587916. 12. Iglehart D. L. (1974). "Functional central limit theorems for random walks conditioned to stay positive". The Annals of Probability. 2 (4): 608–619. doi:10.1214/aop/1176996607. 13. Takacs L (1991a). "A Bernoulli excursion and its various applications". Advances in Applied Probability. 23 (3): 557–585. doi:10.1017/s0001867800023739. 14. Takacs L (1991b). "On a probability problem connected with railway traffic". Journal of Applied Mathematics and Stochastic Analysis. 4: 263–292. doi:10.1155/S1048953391000011. 15. Takacs L (1994). "On the Total Heights of Random Rooted Binary Trees". Journal of Combinatorial Theory, Series B. 61 (2): 155–166. doi:10.1006/jctb.1994.1041.

References

• Chung, K. L. (1975).
"Maxima in Brownian excursions". Bulletin of the American Mathematical Society. 81 (4): 742–745. doi:10.1090/s0002-9904-1975-13852-3. MR 0373035. • Chung, K. L. (1976). "Excursions in Brownian motion". Arkiv för Matematik. 14 (1): 155–177. Bibcode:1976ArM....14..155C. doi:10.1007/bf02385832. MR 0467948. • Durrett, Richard T.; Iglehart, Donald L. (1977). "Functionals of Brownian meander and Brownian excursion". Annals of Probability. 5 (1): 130–135. doi:10.1214/aop/1176995896. JSTOR 2242808. MR 0436354. • Groeneboom, Piet (1983). "The concave majorant of Brownian motion". Annals of Probability. 11 (4): 1016–1027. doi:10.1214/aop/1176993450. JSTOR 2243513. MR 0714964. • Groeneboom, Piet (1989). "Brownian motion with a parabolic drift and Airy functions". Probability Theory and Related Fields. 81: 79–109. doi:10.1007/BF00343738. MR 0981568. S2CID 119980629. • Itô, Kiyosi; McKean, Jr., Henry P. (2013) [1974]. Diffusion Processes and their Sample Paths. Classics in Mathematics (Second printing, corrected ed.). Springer-Verlag, Berlin. ISBN 978-3540606291. MR 0345224. • Janson, Svante (2007). "Brownian excursion area, Wright's constants in graph enumeration, and other Brownian areas". Probability Surveys. 4: 80–145. arXiv:0704.2289. Bibcode:2007arXiv0704.2289J. doi:10.1214/07-ps104. MR 2318402. S2CID 14563292. • Janson, Svante; Louchard, Guy (2007). "Tail estimates for the Brownian excursion area and other Brownian areas". Electronic Journal of Probability. 12: 1600–1632. arXiv:0707.0991. Bibcode:2007arXiv0707.0991J. doi:10.1214/ejp.v12-471. MR 2365879. S2CID 6281609. • Kennedy, Douglas P. (1976). "The distribution of the maximum Brownian excursion". Journal of Applied Probability. 13 (2): 371–376. doi:10.2307/3212843. JSTOR 3212843. MR 0402955. S2CID 222386970. • Lévy, Paul (1948). Processus Stochastiques et Mouvement Brownien. Gauthier-Villars, Paris. MR 0029120. • Louchard, G. (1984). "Kac's formula, Levy's local time and Brownian excursion". Journal of Applied Probability. 21 (3): 479–499. doi:10.2307/3213611. JSTOR 3213611. MR 0752014. S2CID 123640749. • Pitman, J. W. (1983). "Remarks on the convex minorant of Brownian motion". Seminar on Stochastic Processes, 1982. Progr. Probab. Statist. Vol. 5. Birkhauser, Boston. pp. 219–227. MR 0733673. • Revuz, Daniel; Yor, Marc (2004). Continuous Martingales and Brownian Motion. Grundlehren der mathematischen Wissenschaften. Vol. 293. Springer-Verlag, Berlin. doi:10.1007/978-3-662-06400-9. ISBN 978-3-642-08400-3. MR 1725357. • Vervaat, W. (1979). "A relation between Brownian bridge and Brownian excursion". Annals of Probability. 7 (1): 143–149. doi:10.1214/aop/1176995155. JSTOR 2242845. MR 0515820. 
Ample line bundle In mathematics, a distinctive feature of algebraic geometry is that some line bundles on a projective variety can be considered "positive", while others are "negative" (or a mixture of the two). The most important notion of positivity is that of an ample line bundle, although there are several related classes of line bundles. Roughly speaking, positivity properties of a line bundle are related to having many global sections. Understanding the ample line bundles on a given variety X amounts to understanding the different ways of mapping X into projective space. In view of the correspondence between line bundles and divisors (built from codimension-1 subvarieties), there is an equivalent notion of an ample divisor. In more detail, a line bundle is called basepoint-free if it has enough sections to give a morphism to projective space. A line bundle is semi-ample if some positive power of it is basepoint-free; semi-ampleness is a kind of "nonnegativity". More strongly, a line bundle on a complete variety X is very ample if it has enough sections to give a closed immersion (or "embedding") of X into projective space. A line bundle is ample if some positive power is very ample. An ample line bundle on a projective variety X has positive degree on every curve in X. The converse is not quite true, but there are corrected versions of the converse, the Nakai–Moishezon and Kleiman criteria for ampleness. Introduction Pullback of a line bundle and hyperplane divisors Given a morphism $f\colon X\to Y$ of schemes, a vector bundle E on Y (or more generally a coherent sheaf on Y) has a pullback to X, $f^{*}E$ (see Sheaf of modules#Operations). The pullback of a vector bundle is a vector bundle of the same rank. In particular, the pullback of a line bundle is a line bundle. (Briefly, the fiber of $f^{*}E$ at a point x in X is the fiber of E at f(x).) The notions described in this article are related to this construction in the case of a morphism to projective space $f\colon X\to \mathbb {P} ^{n},$ with E = O(1) the line bundle on projective space whose global sections are the homogeneous polynomials of degree 1 (that is, linear functions) in variables $x_{0},\ldots ,x_{n}$. The line bundle O(1) can also be described as the line bundle associated to a hyperplane in $\mathbb {P} ^{n}$ (because the zero set of a section of O(1) is a hyperplane). If f is a closed immersion, for example, it follows that the pullback $f^{*}O(1)$ is the line bundle on X associated to a hyperplane section (the intersection of X with a hyperplane in $\mathbb {P} ^{n}$). Basepoint-free line bundles Let X be a scheme over a field k (for example, an algebraic variety) with a line bundle L. (A line bundle may also be called an invertible sheaf.) Let $a_{0},...,a_{n}$ be elements of the k-vector space $H^{0}(X,L)$ of global sections of L. The zero set of each section is a closed subset of X; let U be the open subset of points at which at least one of $a_{0},\ldots ,a_{n}$ is not zero. Then these sections define a morphism $f\colon U\to \mathbb {P} _{k}^{n},\ x\mapsto [a_{0}(x),\ldots ,a_{n}(x)].$ In more detail: for each point x of U, the fiber of L over x is a 1-dimensional vector space over the residue field k(x). Choosing a basis for this fiber makes $a_{0}(x),\ldots ,a_{n}(x)$ into a sequence of n+1 numbers, not all zero, and hence a point in projective space. Changing the choice of basis scales all the numbers by the same nonzero constant, and so the point in projective space is independent of the choice. 
Moreover, this morphism has the property that the restriction of L to U is isomorphic to the pullback $f^{*}O(1)$.[1] The base locus of a line bundle L on a scheme X is the intersection of the zero sets of all global sections of L. A line bundle L is called basepoint-free if its base locus is empty. That is, for every point x of X there is a global section of L which is nonzero at x. If X is proper over a field k, then the vector space $H^{0}(X,L)$ of global sections has finite dimension; the dimension is called $h^{0}(X,L)$.[2] So a basepoint-free line bundle L determines a morphism $f\colon X\to \mathbb {P} ^{n}$ over k, where $n=h^{0}(X,L)-1$, given by choosing a basis for $H^{0}(X,L)$. Without making a choice, this can be described as the morphism $f\colon X\to \mathbb {P} (H^{0}(X,L))$ from X to the space of hyperplanes in $H^{0}(X,L)$, canonically associated to the basepoint-free line bundle L. This morphism has the property that L is the pullback $f^{*}O(1)$. Conversely, for any morphism f from a scheme X to projective space $\mathbb {P} ^{n}$ over k, the pullback line bundle $f^{*}O(1)$ is basepoint-free. Indeed, O(1) is basepoint-free on $\mathbb {P} ^{n}$, because for every point y in $\mathbb {P} ^{n}$ there is a hyperplane not containing y. Therefore, for every point x in X, there is a section s of O(1) over $\mathbb {P} ^{n}$ that is not zero at f(x), and the pullback of s is a global section of $f^{*}O(1)$ that is not zero at x. In short, basepoint-free line bundles are exactly those that can be expressed as the pullback of O(1) by some morphism to projective space. Nef, globally generated, semi-ample The degree of a line bundle L on a proper curve C over k is defined as the degree of the divisor (s) of any nonzero rational section s of L. The coefficients of this divisor are positive at points where s vanishes and negative where s has a pole. Therefore, any line bundle L on a curve C such that $H^{0}(C,L)\neq 0$ has nonnegative degree (because sections of L over C, as opposed to rational sections, have no poles).[3] In particular, every basepoint-free line bundle on a curve has nonnegative degree. As a result, a basepoint-free line bundle L on any proper scheme X over a field is nef, meaning that L has nonnegative degree on every (irreducible) curve in X.[4] More generally, a sheaf F of $O_{X}$-modules on a scheme X is said to be globally generated if there is a set I of global sections $s_{i}\in H^{0}(X,F)$ such that the corresponding morphism $\bigoplus _{i\in I}O_{X}\to F$ of sheaves is surjective.[5] A line bundle is globally generated if and only if it is basepoint-free. For example, every quasi-coherent sheaf on an affine scheme is globally generated.[6] Analogously, in complex geometry, Cartan's theorem A says that every coherent sheaf on a Stein manifold is globally generated. A line bundle L on a proper scheme over a field is semi-ample if there is a positive integer r such that the tensor power $L^{\otimes r}$ is basepoint-free. A semi-ample line bundle is nef (by the corresponding fact for basepoint-free line bundles).[7] Very ample line bundles A line bundle L on a proper scheme X over a field k is said to be very ample if it is basepoint-free and the associated morphism $f\colon X\to \mathbb {P} _{k}^{n}$ is a closed immersion. Here $n=h^{0}(X,L)-1$. 
Equivalently, L is very ample if X can be embedded into projective space of some dimension over k in such a way that L is the restriction of the line bundle O(1) to X.[8] The latter definition is used to define very ampleness for a line bundle on a proper scheme over any commutative ring.[9] The name "very ample" was introduced by Alexander Grothendieck in 1961.[10] Various names had been used earlier in the context of linear systems of divisors. For a very ample line bundle L on a proper scheme X over a field with associated morphism f, the degree of L on a curve C in X is the degree of f(C) as a curve in $\mathbb {P} ^{n}$. So L has positive degree on every curve in X (because every subvariety of projective space has positive degree).[11] Definitions Ample invertible sheaves on quasi-compact schemes Ample line bundles are used most often on proper schemes, but they can be defined in much wider generality. Let X be a scheme, and let ${\mathcal {L}}$ be an invertible sheaf on X. For each $x\in X$, let ${\mathfrak {m}}_{x}$ denote the ideal sheaf of the reduced subscheme supported only at x. For $s\in \Gamma (X,{\mathcal {L}})$, define $X_{s}=\{x\in X\colon s_{x}\not \in {\mathfrak {m}}_{x}{\mathcal {L}}_{x}\}.$ Equivalently, if $\kappa (x)$ denotes the residue field at x (considered as a skyscraper sheaf supported at x), then $X_{s}=\{x\in X\colon {\bar {s}}_{x}\neq 0\in \kappa (x)\otimes {\mathcal {L}}_{x}\},$ where ${\bar {s}}_{x}$ is the image of s in the tensor product. For every $s\in \Gamma (X,{\mathcal {L}})$, the restriction ${\mathcal {L}}|_{X_{s}}$ is a free ${\mathcal {O}}_{X_{s}}$-module trivialized by the restriction of s, meaning the multiplication-by-s morphism ${\mathcal {O}}_{X_{s}}\to {\mathcal {L}}|_{X_{s}}$ is an isomorphism. The set $X_{s}$ is always open, and the inclusion morphism $X_{s}\to X$ is an affine morphism. Despite this, $X_{s}$ need not be an affine scheme. For example, if $s=1\in \Gamma (X,{\mathcal {O}}_{X})$, then $X_{s}=X$ is open in itself and affine over itself, but it is not an affine scheme in general. Assume X is quasi-compact. Then ${\mathcal {L}}$ is ample if, for every $x\in X$, there exists an $n\geq 1$ and an $s\in \Gamma (X,{\mathcal {L}}^{\otimes n})$ such that $x\in X_{s}$ and $X_{s}$ is an affine scheme.[12] For example, the trivial line bundle ${\mathcal {O}}_{X}$ is ample if and only if X is quasi-affine.[13] In general, it is not true that every $X_{s}$ is affine. For example, if $X=\mathbf {P} ^{2}\setminus \{O\}$ for some point O, and if ${\mathcal {L}}$ is the restriction of ${\mathcal {O}}_{\mathbf {P} ^{2}}(1)$ to X, then ${\mathcal {L}}$ and ${\mathcal {O}}_{\mathbf {P} ^{2}}(1)$ have the same global sections, and the non-vanishing locus of a section of ${\mathcal {L}}$ is affine if and only if the zero locus of the corresponding section of ${\mathcal {O}}_{\mathbf {P} ^{2}}(1)$ contains O. It is necessary to allow powers of ${\mathcal {L}}$ in the definition. In fact, for every N, it is possible that $X_{s}$ is non-affine for every $s\in \Gamma (X,{\mathcal {L}}^{\otimes n})$ with $n\leq N$. Indeed, suppose Z is a finite set of points in $\mathbf {P} ^{2}$, $X=\mathbf {P} ^{2}\setminus Z$, and ${\mathcal {L}}={\mathcal {O}}_{\mathbf {P} ^{2}}(1)|_{X}$. The vanishing loci of the sections of ${\mathcal {L}}^{\otimes N}$ are plane curves of degree N. By taking Z to be a sufficiently large set of points in general position, we may ensure that no plane curve of degree N (and hence none of lower degree) contains all the points of Z.
In particular, their non-vanishing loci are all non-affine. Define $\textstyle S=\bigoplus _{n\geq 0}\Gamma (X,{\mathcal {L}}^{\otimes n})$. Let $p\colon X\to \operatorname {Spec} \mathbf {Z} $ denote the structural morphism. There is a natural isomorphism between ${\mathcal {O}}_{X}$-algebra homomorphisms $\textstyle p^{*}({\tilde {S}})\to \bigoplus _{n\geq 0}{\mathcal {L}}^{\otimes n}$ and endomorphisms of the graded ring S. The identity endomorphism of S corresponds to a homomorphism $\varepsilon $. Applying the $\operatorname {Proj} $ functor produces a morphism from an open subscheme of X, denoted $G(\varepsilon )$, to $\operatorname {Proj} S$. The basic characterization of ample invertible sheaves states that if X is a quasi-compact quasi-separated scheme and ${\mathcal {L}}$ is an invertible sheaf on X, then the following assertions are equivalent:[14] 1. ${\mathcal {L}}$ is ample. 2. The open sets $X_{s}$, where $s\in \Gamma (X,{\mathcal {L}}^{\otimes n})$ and $n\geq 0$, form a basis for the topology of X. 3. The open sets $X_{s}$ that are affine, where $s\in \Gamma (X,{\mathcal {L}}^{\otimes n})$ and $n\geq 0$, form a basis for the topology of X. 4. $G(\varepsilon )=X$ and the morphism $G(\varepsilon )\to \operatorname {Proj} S$ is a dominant open immersion. 5. $G(\varepsilon )=X$ and the morphism $G(\varepsilon )\to \operatorname {Proj} S$ is a homeomorphism of the underlying topological space of X with its image. 6. For every quasi-coherent sheaf ${\mathcal {F}}$ on X, the canonical map $\bigoplus _{n\geq 0}\Gamma (X,{\mathcal {F}}\otimes _{{\mathcal {O}}_{X}}{\mathcal {L}}^{\otimes n})\otimes _{\mathbf {Z} }{\mathcal {L}}^{\otimes {-n}}\to {\mathcal {F}}$ is surjective. 7. For every quasi-coherent sheaf of ideals ${\mathcal {J}}$ on X, the canonical map $\bigoplus _{n\geq 0}\Gamma (X,{\mathcal {J}}\otimes _{{\mathcal {O}}_{X}}{\mathcal {L}}^{\otimes n})\otimes _{\mathbf {Z} }{\mathcal {L}}^{\otimes {-n}}\to {\mathcal {J}}$ is surjective. 8. For every quasi-coherent sheaf of ideals ${\mathcal {J}}$ of finite type on X, the canonical map $\bigoplus _{n\geq 0}\Gamma (X,{\mathcal {J}}\otimes _{{\mathcal {O}}_{X}}{\mathcal {L}}^{\otimes n})\otimes _{\mathbf {Z} }{\mathcal {L}}^{\otimes {-n}}\to {\mathcal {J}}$ is surjective. 9. For every quasi-coherent sheaf ${\mathcal {F}}$ of finite type on X, there exists an integer $n_{0}$ such that for $n\geq n_{0}$, ${\mathcal {F}}\otimes {\mathcal {L}}^{\otimes n}$ is generated by its global sections. 10. For every quasi-coherent sheaf ${\mathcal {F}}$ of finite type on X, there exist integers $n>0$ and $k>0$ such that ${\mathcal {F}}$ is isomorphic to a quotient of ${\mathcal {L}}^{\otimes (-n)}\otimes {\mathcal {O}}_{X}^{k}$. 11. For every quasi-coherent sheaf of ideals ${\mathcal {J}}$ of finite type on X, there exist integers $n>0$ and $k>0$ such that ${\mathcal {J}}$ is isomorphic to a quotient of ${\mathcal {L}}^{\otimes (-n)}\otimes {\mathcal {O}}_{X}^{k}$. On proper schemes When X is separated and of finite type over an affine scheme Spec R, an invertible sheaf ${\mathcal {L}}$ is ample if and only if there exists a positive integer r such that the tensor power ${\mathcal {L}}^{\otimes r}$ is very ample.[15][16] In particular, a proper scheme over R has an ample line bundle if and only if it is projective over R. Often, this characterization is taken as the definition of ampleness. The rest of this article will concentrate on ampleness on proper schemes over a field, as this is the most important case.
An ample line bundle on a proper scheme X over a field has positive degree on every curve in X, by the corresponding statement for very ample line bundles. A Cartier divisor D on a proper scheme X over a field k is said to be ample if the corresponding line bundle O(D) is ample. (For example, if X is smooth over k, then a Cartier divisor can be identified with a finite linear combination of closed codimension-1 subvarieties of X with integer coefficients.) Weakening the notion of "very ample" to "ample" gives a flexible concept with a wide variety of different characterizations. A first point is that tensoring high powers of an ample line bundle with any coherent sheaf whatsoever gives a sheaf with many global sections. More precisely, a line bundle L on a proper scheme X over a field (or more generally over a Noetherian ring) is ample if and only if for every coherent sheaf F on X, there is an integer s such that the sheaf $F\otimes L^{\otimes r}$ is globally generated for all $r\geq s$. Here s may depend on F.[17][18] Another characterization of ampleness, known as the Cartan–Serre–Grothendieck theorem, is in terms of coherent sheaf cohomology. Namely, a line bundle L on a proper scheme X over a field (or more generally over a Noetherian ring) is ample if and only if for every coherent sheaf F on X, there is an integer s such that $H^{i}(X,F\otimes L^{\otimes r})=0$ for all $i>0$ and all $r\geq s$.[19][18] In particular, high powers of an ample line bundle kill cohomology in positive degrees. This implication is called the Serre vanishing theorem, proved by Jean-Pierre Serre in his 1955 paper Faisceaux algébriques cohérents. Examples/Non-examples • The trivial line bundle $O_{X}$ on a projective variety X of positive dimension is basepoint-free but not ample. More generally, for any morphism f from a projective variety X to some projective space $\mathbb {P} ^{n}$ over a field, the pullback line bundle $L=f^{*}O(1)$ is always basepoint-free, whereas L is ample if and only if the morphism f is finite (that is, all fibers of f have dimension 0 or are empty).[20] • For an integer d, the space of sections of the line bundle O(d) over $\mathbb {P} _{\mathbb {C} }^{1}$ is the complex vector space of homogeneous polynomials of degree d in variables x,y. In particular, this space is zero for d < 0. For $d\geq 0$, the morphism to projective space given by O(d) is $\mathbb {P} ^{1}\to \mathbb {P} ^{d}$ by $[x,y]\mapsto [x^{d},x^{d-1}y,\ldots ,y^{d}].$ This is a closed immersion for $d\geq 1$, with image a rational normal curve of degree d in $\mathbb {P} ^{d}$. Therefore, O(d) is basepoint-free if and only if $d\geq 0$, and very ample if and only if $d\geq 1$. It follows that O(d) is ample if and only if $d\geq 1$. • For an example where "ample" and "very ample" are different, let X be a smooth projective curve of genus 1 (an elliptic curve) over C, and let p be a complex point of X. Let O(p) be the associated line bundle of degree 1 on X. Then the complex vector space of global sections of O(p) has dimension 1, spanned by a section that vanishes at p.[21] So the base locus of O(p) is equal to p. On the other hand, O(2p) is basepoint-free, and O(dp) is very ample for $d\geq 3$ (giving an embedding of X as an elliptic curve of degree d in $\mathbb {P} ^{d-1}$). Therefore, O(p) is ample but not very ample. Also, O(2p) is ample and basepoint-free but not very ample; the associated morphism to projective space is a ramified double cover $X\to \mathbb {P} ^{1}$. 
• On curves of higher genus, there are ample line bundles L for which every global section is zero. (But high multiples of L have many sections, by definition.) For example, let X be a smooth plane quartic curve (of degree 4 in $\mathbb {P} ^{2}$) over C, and let p and q be distinct complex points of X. Then the line bundle $L=O(2p-q)$ is ample but has $H^{0}(X,L)=0$.[22] Criteria for ampleness of line bundles Intersection theory Further information: intersection theory § Intersection theory in algebraic geometry To determine whether a given line bundle on a projective variety X is ample, the following numerical criteria (in terms of intersection numbers) are often the most useful. It is equivalent to ask when a Cartier divisor D on X is ample, meaning that the associated line bundle O(D) is ample. The intersection number $D\cdot C$ can be defined as the degree of the line bundle O(D) restricted to C. In the other direction, for a line bundle L on a projective variety, the first Chern class $c_{1}(L)$ means the associated Cartier divisor (defined up to linear equivalence), the divisor of any nonzero rational section of L. On a smooth projective curve X over an algebraically closed field k, a line bundle L is very ample if and only if $h^{0}(X,L\otimes O(-x-y))=h^{0}(X,L)-2$ for all k-rational points x,y in X.[23] Let g be the genus of X. By the Riemann–Roch theorem, every line bundle of degree at least 2g + 1 satisfies this condition and hence is very ample. As a result, a line bundle on a curve is ample if and only if it has positive degree.[24] For example, the canonical bundle $K_{X}$ of a curve X has degree 2g − 2, and so it is ample if and only if $g\geq 2$. The curves with ample canonical bundle form an important class; for example, over the complex numbers, these are the curves with a metric of negative curvature. The canonical bundle is very ample if and only if $g\geq 2$ and the curve is not hyperelliptic.[25] The Nakai–Moishezon criterion (named for Yoshikazu Nakai (1963) and Boris Moishezon (1964)) states that a line bundle L on a proper scheme X over a field is ample if and only if $\int _{Y}c_{1}(L)^{{\text{dim}}(Y)}>0$ for every (irreducible) closed subvariety Y of X (Y is not allowed to be a point).[26] In terms of divisors, a Cartier divisor D is ample if and only if $D^{{\text{dim}}(Y)}\cdot Y>0$ for every (nonzero-dimensional) subvariety Y of X. For X a curve, this says that a divisor is ample if and only if it has positive degree. For X a surface, the criterion says that a divisor D is ample if and only if its self-intersection number $D^{2}$ is positive and every curve C on X has $D\cdot C>0$. Kleiman's criterion To state Kleiman's criterion (1966), let X be a projective scheme over a field. Let $N_{1}(X)$ be the real vector space of 1-cycles (real linear combinations of curves in X) modulo numerical equivalence, meaning that two 1-cycles A and B are equal in $N_{1}(X)$ if and only if every line bundle has the same degree on A and on B. By the Néron–Severi theorem, the real vector space $N_{1}(X)$ has finite dimension. Kleiman's criterion states that a line bundle L on X is ample if and only if L has positive degree on every nonzero element C of the closure of the cone of curves NE(X) in $N_{1}(X)$. (This is slightly stronger than saying that L has positive degree on every curve.) 
Equivalently, a line bundle is ample if and only if its class in the dual vector space $N^{1}(X)$ is in the interior of the nef cone.[27] Kleiman's criterion fails in general for proper (rather than projective) schemes X over a field, although it holds if X is smooth or more generally Q-factorial.[28] A line bundle on a projective variety is called strictly nef if it has positive degree on every curve. Masayoshi Nagata (1959) and David Mumford constructed line bundles on smooth projective surfaces that are strictly nef but not ample. This shows that the condition $c_{1}(L)^{2}>0$ cannot be omitted in the Nakai–Moishezon criterion, and it is necessary to use the closure of NE(X) rather than NE(X) in Kleiman's criterion.[29] Every nef line bundle on a surface has $c_{1}(L)^{2}\geq 0$, and Nagata and Mumford's examples have $c_{1}(L)^{2}=0$. C. S. Seshadri showed that a line bundle L on a proper scheme over an algebraically closed field is ample if and only if there is a positive real number ε such that deg(L|C) ≥ εm(C) for all (irreducible) curves C in X, where m(C) is the maximum of the multiplicities at the points of C.[30] Several characterizations of ampleness hold more generally for line bundles on a proper algebraic space over a field k. In particular, the Nakai–Moishezon criterion is valid in that generality.[31] The Cartan–Serre–Grothendieck criterion holds even more generally, for a proper algebraic space over a Noetherian ring R.[32] (If a proper algebraic space over R has an ample line bundle, then it is in fact a projective scheme over R.) Kleiman's criterion fails for proper algebraic spaces X over a field, even if X is smooth.[33] Openness of ampleness On a projective scheme X over a field, Kleiman's criterion implies that ampleness is an open condition on the class of an R-divisor (an R-linear combination of Cartier divisors) in $N^{1}(X)$, with its topology based on the topology of the real numbers. (An R-divisor is defined to be ample if it can be written as a positive linear combination of ample Cartier divisors.[34]) An elementary special case is: for an ample divisor H and any divisor E, there is a positive real number b such that $H+aE$ is ample for all real numbers a of absolute value less than b. In terms of divisors with integer coefficients (or line bundles), this means that nH + E is ample for all sufficiently large positive integers n. Ampleness is also an open condition in a quite different sense, when the variety or line bundle is varied in an algebraic family. Namely, let $f\colon X\to Y$ be a proper morphism of schemes, and let L be a line bundle on X. Then the set of points y in Y such that L is ample on the fiber $X_{y}$ is open (in the Zariski topology). More strongly, if L is ample on one fiber $X_{y}$, then there is an affine open neighborhood U of y such that L is ample on $f^{-1}(U)$ over U.[35] Kleiman's other characterizations of ampleness Kleiman also proved the following characterizations of ampleness, which can be viewed as intermediate steps between the definition of ampleness and numerical criteria. Namely, for a line bundle L on a proper scheme X over a field, the following are equivalent:[36] • L is ample. • For every (irreducible) subvariety $Y\subset X$ of positive dimension, there is a positive integer r and a section $s\in H^{0}(Y,{\mathcal {L}}^{\otimes r})$ which is not identically zero but vanishes at some point of Y.
• For every (irreducible) subvariety $Y\subset X$ of positive dimension, the holomorphic Euler characteristics of powers of L on Y go to infinity: $\chi (Y,{\mathcal {L}}^{\otimes r})\to \infty $ as $r\to \infty $. Generalizations Ample vector bundles Robin Hartshorne defined a vector bundle F on a projective scheme X over a field to be ample if the line bundle ${\mathcal {O}}(1)$ on the space $\mathbb {P} (F)$ of hyperplanes in F is ample.[37] Several properties of ample line bundles extend to ample vector bundles. For example, a vector bundle F is ample if and only if high symmetric powers of F kill the cohomology $H^{i}$ of coherent sheaves for all $i>0$.[38] Also, the Chern class $c_{r}(F)$ of an ample vector bundle has positive degree on every r-dimensional subvariety of X, for $1\leq r\leq {\text{rank}}(F)$.[39] Big line bundles Main article: Iitaka dimension A useful weakening of ampleness, notably in birational geometry, is the notion of a big line bundle. A line bundle L on a projective variety X of dimension n over a field is said to be big if there is a positive real number a and a positive integer $j_{0}$ such that $h^{0}(X,L^{\otimes j})\geq aj^{n}$ for all $j\geq j_{0}$. This is the maximum possible growth rate for the spaces of sections of powers of L, in the sense that for every line bundle L on X there is a positive number b with $h^{0}(X,L^{\otimes j})\leq bj^{n}$ for all j > 0.[40] There are several other characterizations of big line bundles. First, a line bundle is big if and only if there is a positive integer r such that the rational map from X to $\mathbb {P} (H^{0}(X,L^{\otimes r}))$ given by the sections of $L^{\otimes r}$ is birational onto its image.[41] Also, a line bundle L is big if and only if it has a positive tensor power which is the tensor product of an ample line bundle A and an effective line bundle B (meaning that $H^{0}(X,B)\neq 0$).[42] Finally, a line bundle is big if and only if its class in $N^{1}(X)$ is in the interior of the cone of effective divisors.[43] Bigness can be viewed as a birationally invariant analog of ampleness. For example, if $f\colon X\to Y$ is a dominant rational map between smooth projective varieties of the same dimension, then the pullback of a big line bundle on Y is big on X. (At first sight, the pullback is only a line bundle on the open subset of X where f is a morphism, but this extends uniquely to a line bundle on all of X.) For ample line bundles, one can only say that the pullback of an ample line bundle by a finite morphism is ample.[20] Example: Let X be the blow-up of the projective plane $\mathbb {P} ^{2}$ at a point over the complex numbers. Let H be the pullback to X of a line on $\mathbb {P} ^{2}$, and let E be the exceptional curve of the blow-up $\pi \colon X\to \mathbb {P} ^{2}$. Then the divisor H + E is big but not ample (or even nef) on X, because $(H+E)\cdot E=E^{2}=-1<0.$ This negativity also implies that the base locus of H + E (or of any positive multiple) contains the curve E. In fact, this base locus is equal to E. Relative ampleness Given a quasi-compact morphism of schemes $f:X\to S$, an invertible sheaf L on X is said to be ample relative to f or f-ample if the following equivalent conditions are met:[44][45] 1. For each open affine subset $U\subset S$, the restriction of L to $f^{-1}(U)$ is ample (in the usual sense). 2. 
f is quasi-separated and there is an open immersion $X\hookrightarrow \operatorname {Proj} _{S}({\mathcal {R}}),\,{\mathcal {R}}:=f_{*}\left(\bigoplus _{0}^{\infty }L^{\otimes n}\right)$ induced by the adjunction map: $f^{*}{\mathcal {R}}\to \bigoplus _{0}^{\infty }L^{\otimes n}$. 3. Condition 2 with "open immersion" weakened to "immersion". Condition 2 says (roughly) that X can be openly compactified to a projective scheme with ${\mathcal {O}}(1)=L$ (not just to a proper scheme). See also General algebraic geometry • Algebraic geometry of projective spaces • Fano variety: a variety whose canonical bundle is anti-ample • Matsusaka's big theorem • Divisorial scheme: a scheme admitting an ample family of line bundles Ampleness in complex geometry • Holomorphic vector bundle • Kodaira embedding theorem: on a compact complex manifold, ampleness and positivity coincide. • Kodaira vanishing theorem • Lefschetz hyperplane theorem: an ample divisor in a complex projective variety X is topologically similar to X. Notes 1. Hartshorne (1977), Theorem II.7.1. 2. Hartshorne (1977), Theorem III.5.2; (tag 02O6). 3. Hartshorne (1977), Lemma IV.1.2. 4. Lazarsfeld (2004), Example 1.4.5. 5. tag 01AM. 6. Hartshorne (1977), Example II.5.16.2. 7. Lazarsfeld (2004), Definition 2.1.26. 8. Hartshorne (1977), section II.5. 9. tag 02NP. 10. Grothendieck, EGA II, Definition 4.2.2. 11. Hartshorne (1977), Proposition I.7.6 and Example IV.3.3.2. 12. tag 01PS. 13. tag 01QE. 14. EGA II, Théorème 4.5.2 and Proposition 4.5.5. 15. EGA II, Proposition 4.5.10. 16. tag 01VU. 17. Hartshorne (1977), Theorem II.7.6. 18. Lazarsfeld (2004), Theorem 1.2.6. 19. Hartshorne (1977), Proposition III.5.3. 20. Lazarsfeld (2004), Theorem 1.2.13. 21. Hartshorne (1977), Example II.7.6.3. 22. Hartshorne (1977), Exercise IV.3.2(b). 23. Hartshorne (1977), Proposition IV.3.1. 24. Hartshorne (1977), Corollary IV.3.3. 25. Hartshorne (1977), Proposition IV.5.2. 26. Lazarsfeld (2004), Theorem 1.2.23, Remark 1.2.29; Kleiman (1966), Theorem III.1. 27. Lazarsfeld (2004), Theorems 1.4.23 and 1.4.29; Kleiman (1966), Theorem IV.1. 28. Fujino (2005), Corollary 3.3; Lazarsfeld (2004), Remark 1.4.24. 29. Lazarsfeld (2004), Example 1.5.2. 30. Lazarsfeld (2004), Theorem 1.4.13; Hartshorne (1970), Theorem I.7.1. 31. Kollár (1990), Theorem 3.11. 32. tag 0D38. 33. Kollár (1996), Chapter VI, Appendix, Exercise 2.19.3. 34. Lazarsfeld (2004), Definition 1.3.11. 35. Lazarsfeld (2004), Theorem 1.2.17 and its proof. 36. Lazarsfeld (2004), Example 1.2.32; Kleiman (1966), Theorem III.1. 37. Lazarsfeld (2004), Definition 6.1.1. 38. Lazarsfeld (2004), Theorem 6.1.10. 39. Lazarsfeld (2004), Theorem 8.2.2. 40. Lazarsfeld (2004), Corollary 2.1.38. 41. Lazarsfeld (2004), section 2.2.A. 42. Lazarsfeld (2004), Corollary 2.2.7. 43. Lazarsfeld (2004), Theorem 2.2.26. 44. tag 01VG. 45. Grothendieck & Dieudonné 1961, Proposition 4.6.3. Sources • Fujino, Osamu (2005), "On the Kleiman-Mori cone", Proceedings of the Japan Academy, Series A, Mathematical Sciences, 81 (5): 80–84, arXiv:math/0501055, Bibcode:2005math......1055F, doi:10.3792/pjaa.81.80, MR 2143547 • Grothendieck, Alexandre; Dieudonné, Jean (1961). "Éléments de géométrie algébrique: II. Étude globale élémentaire de quelques classes de morphismes". Publications Mathématiques de l'IHÉS. 8. doi:10.1007/bf02699291. MR 0217084. • Hartshorne, Robin (1970), Ample Subvarieties of Algebraic Varieties, Lecture Notes in Mathematics, vol.
156, Berlin, Heidelberg: Springer-Verlag, doi:10.1007/BFb0067839, ISBN 978-3-540-05184-8, MR 0282977 • Hartshorne, Robin (1977), Algebraic Geometry, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157 • Kleiman, Steven L. (1966), "Toward a numerical theory of ampleness", Annals of Mathematics, Second Series, 84 (3): 293–344, doi:10.2307/1970447, ISSN 0003-486X, JSTOR 1970447, MR 0206009 • Kollár, János (1990), "Projectivity of complete moduli", Journal of Differential Geometry, 32, doi:10.4310/jdg/1214445046, MR 1064874 • Kollár, János (1996), Rational curves on algebraic varieties, Berlin, Heidelberg: Springer-Verlag, doi:10.1007/978-3-662-03276-3, ISBN 978-3-642-08219-1, MR 1440180 • Lazarsfeld, Robert (2004), Positivity in algebraic geometry (2 vols.), Berlin: Springer-Verlag, doi:10.1007/978-3-642-18808-4, ISBN 3-540-22533-1, MR 2095471 • Nagata, Masayoshi (1959), "On the 14th problem of Hilbert", American Journal of Mathematics, 81 (3): 766–772, doi:10.2307/2372927, JSTOR 2372927, MR 0154867 • "Section 29.37 (01VG): Relatively ample sheaves—The Stacks project". • Stacks Project, Tag 01AM. • Stacks Project, Tag 01PS. • Stacks Project, Tag 01QE. • Stacks Project, Tag 01VU. • Stacks Project, Tag 02NP. • Stacks Project, Tag 02O6 • Stacks Project, Tag 0D38. External links • The Stacks Project
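To make the numerical criteria concrete, here is a minimal sketch, not part of the sources above, applying the surface case of the Nakai–Moishezon criterion to the blow-up example from the section on big line bundles. The basis (H, E), the intersection matrix, and the short list of test curves are assumptions of the sketch; a genuine ampleness proof must consider all curves on the surface, not a sample.

```python
import numpy as np

# Blow-up X of P^2 at a point, Picard basis (H, E):
# H^2 = 1, E^2 = -1, H.E = 0, as in the example above.
Q = np.array([[1, 0],
              [0, -1]])

# sample curve classes on X: the exceptional curve E, the strict
# transform H - E of a line through the point, and a general line H
curves = {"E": np.array([0, 1]),
          "H-E": np.array([1, -1]),
          "H": np.array([1, 0])}

def nakai_moishezon(D):
    # surface case of the criterion: D is ample iff D^2 > 0 and
    # D.C > 0 for every curve C (here only the sample curves are tested)
    return D @ Q @ D > 0 and all(D @ Q @ C > 0 for C in curves.values())

print(nakai_moishezon(np.array([1, 1])))    # H + E: (H+E).E = -1, not ample
print(nakai_moishezon(np.array([2, -1])))   # 2H - E: passes both tests
print(nakai_moishezon(np.array([1, -1])))   # H - E: (H-E)^2 = 0, not ample
```

On this particular surface the ample classes are exactly aH − bE with a > b > 0, so the sample curves happen to suffice to detect ampleness here.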
Very large-scale neighborhood search In mathematical optimization, neighborhood search is a technique that tries to find good or near-optimal solutions to a combinatorial optimization problem by repeatedly transforming a current solution into a different solution in the neighborhood of the current solution. The neighborhood of a solution is a set of similar solutions obtained by relatively simple modifications to the original solution. In a very large-scale neighborhood search, the neighborhood is large, and possibly exponentially sized. The resulting algorithms can outperform algorithms using small neighborhoods because the local improvements are larger. If the neighborhood searched is limited to just one or a very small number of changes from the current solution, then it can be difficult to escape from local minima, even with additional meta-heuristic techniques such as simulated annealing or tabu search. In large neighborhood search techniques, the possible changes from one solution to its neighbor may allow tens or hundreds of values to change, and this means that the size of the neighborhood may itself be sufficient to allow the search process to avoid or escape local minima, though additional meta-heuristic techniques can still improve performance. A minimal sketch contrasting a small and a large neighborhood follows the reference below. References • Ahuja, Ravindra K.; Orlin, James B.; Sharma, Dushyant (2000), "Very large-scale neighborhood search" (PDF), International Transactions in Operational Research, 7 (4–5): 301–317, doi:10.1111/j.1475-3995.2000.tb00201.x
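The following is a minimal sketch, not taken from the reference above: the toy routing instance, the move sizes, and the iteration counts are arbitrary choices, and the destroy-and-repair move merely stands in for the exponential-size neighborhoods treated by Ahuja, Orlin, and Sharma. It contrasts a small move (swap two positions) with a large move that changes many tour entries at once.

```python
import random

random.seed(0)
# a hypothetical toy instance: 40 random points in the unit square
pts = [(random.random(), random.random()) for _ in range(40)]

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def tour_len(t):
    return sum(dist(pts[t[i - 1]], pts[t[i]]) for i in range(len(t)))

def two_swap(t):
    # small neighborhood: exchange the cities at two positions
    i, j = random.sample(range(len(t)), 2)
    s = list(t)
    s[i], s[j] = s[j], s[i]
    return s

def destroy_and_repair(t, k=8):
    # very large neighborhood move: drop k cities, then greedily reinsert
    # each at its cheapest position; one move can rearrange much of the tour
    s = list(t)
    removed = random.sample(s, k)
    for c in removed:
        s.remove(c)
    for c in removed:
        p = min(range(len(s)),
                key=lambda p: dist(pts[s[p - 1]], pts[c])
                            + dist(pts[c], pts[s[p]])
                            - dist(pts[s[p - 1]], pts[s[p]]))
        s.insert(p, c)
    return s

def local_search(move, iters):
    t = list(range(len(pts)))
    best = tour_len(t)
    for _ in range(iters):
        cand = move(t)
        if tour_len(cand) < best:
            t, best = cand, tour_len(cand)
    return best

print("2-swap moves       :", local_search(two_swap, 4000))
print("destroy-and-repair :", local_search(destroy_and_repair, 400))
```

The large move is given far fewer iterations, reflecting the trade-off in the article: each very large-scale move is more expensive to evaluate but explores far more of the solution space.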
Verónica Martínez de la Vega Verónica Martínez de la Vega y Mansilla is a Mexican mathematician whose research involves topology and hypertopology. She is a researcher in the Institute of Mathematics at the National Autonomous University of Mexico (UNAM).[1] Education and career Martínez de la Vega was born in Mexico City on January 5, 1971. Her family worked as lawyers and discouraged her from going into science, but she nevertheless studied mathematics at UNAM and wrote an undergraduate thesis in topology that she published as a journal paper in Topology and its Applications.[2] Continuing to graduate study in topology at UNAM, she completed her PhD in 2002 with the dissertation Estudio sobre dendroides y compactaciones, supervised by the Polish topologist Janusz J. Charatonik, becoming his only female doctoral student.[2][3] After postgraduate research at UAM Iztapalapa and California State University, Sacramento, she joined the Institute of Mathematics as a researcher in 2005.[4] Recognition Martínez de la Vega is a member of the Mexican Academy of Sciences.[5] In 2017 UNAM gave her their "Reconocimiento Sor Juana Inés de la Cruz" award.[2] References 1. "Dra. Verónica Martínez de la Vega y Mansilla", Directory, UNAM Faculty of Sciences, retrieved 2022-11-26 2. "Verónica Martínez de la Vega y Mansilla recibe el "Reconocimiento Sor Juana Inés de la Cruz"", Noticias del IM, UNAM Institute of Mathematics, retrieved 2022-11-26 3. Verónica Martínez de la Vega at the Mathematics Genealogy Project 4. Verónica Martínez de la Vega (Investigadora), UNAM Institute of Mathematics, retrieved 2022-11-26 5. Mathematics section members (PDF), Mexican Academy of Sciences, 2021, retrieved 2022-11-26
Edoardo Vesentini Edoardo Vesentini (31 May 1928 – 28 March 2020) was an Italian mathematician and politician who introduced the Andreotti–Vesentini theorem. He was awarded the Caccioppoli Prize in 1962. Vesentini was born in Rome, and died on 28 March 2020, aged 91.[1] References • Vesentini, Edoardo (2005), "Beniamino Segre and Italian geometry" (PDF), Rendiconti di Matematica e delle sue Applicazioni, 25 (2): 185–193, MR 2197882, Zbl 1093.01009. • Edoardo Vesentini at the Mathematics Genealogy Project • Premio Caccioppoli 1962 a Edoardo Vesentini 1. "E' morto Edoardo Vesentini, direttore emerito della Scuola Normale Superiore di Pisa". LaNazione.it.
Linear algebraic group In mathematics, a linear algebraic group is a subgroup of the group of invertible $n\times n$ matrices (under matrix multiplication) that is defined by polynomial equations. An example is the orthogonal group, defined by the relation $M^{T}M=I_{n}$ where $M^{T}$ is the transpose of $M$. Many Lie groups can be viewed as linear algebraic groups over the field of real or complex numbers. (For example, every compact Lie group can be regarded as a linear algebraic group over R (necessarily R-anisotropic and reductive), as can many noncompact groups such as the simple Lie group SL(n,R).) The simple Lie groups were classified by Wilhelm Killing and Élie Cartan in the 1880s and 1890s. At that time, no special use was made of the fact that the group structure can be defined by polynomials, that is, that these are algebraic groups. The founders of the theory of algebraic groups include Maurer, Chevalley, and Kolchin (1948). In the 1950s, Armand Borel constructed much of the theory of algebraic groups as it exists today. One of the first uses for the theory was to define the Chevalley groups. Examples For a positive integer $n$, the general linear group $GL(n)$ over a field $k$, consisting of all invertible $n\times n$ matrices, is a linear algebraic group over $k$. It contains the subgroups $U\subset B\subset GL(n)$ consisting of matrices of the form, resp., $\left({\begin{array}{cccc}1&*&\dots &*\\0&1&\ddots &\vdots \\\vdots &\ddots &\ddots &*\\0&\dots &0&1\end{array}}\right)$ and $\left({\begin{array}{cccc}*&*&\dots &*\\0&*&\ddots &\vdots \\\vdots &\ddots &\ddots &*\\0&\dots &0&*\end{array}}\right)$. The group $U$ is an example of a unipotent linear algebraic group, and the group $B$ is an example of a solvable algebraic group called the Borel subgroup of $GL(n)$. It is a consequence of the Lie–Kolchin theorem that any connected solvable subgroup of $\mathrm {GL} (n)$ is conjugated into $B$. Any unipotent subgroup can be conjugated into $U$. Another algebraic subgroup of $\mathrm {GL} (n)$ is the special linear group $\mathrm {SL} (n)$ of matrices with determinant 1.
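The polynomial nature of these subgroups can be checked directly. The following minimal SymPy sketch (an illustration added here, not from any source cited in this article) verifies for $n=3$ that $U$ is closed under products and inverses, with entries given by polynomials, and that every element of $U$ is unipotent.

```python
import sympy as sp

a, b, c, d, e, f = sp.symbols('a b c d e f')
g = sp.Matrix([[1, a, b], [0, 1, c], [0, 0, 1]])   # generic element of U
h = sp.Matrix([[1, d, e], [0, 1, f], [0, 0, 1]])

print(g * h)       # again unitriangular; entries are polynomials in a..f
print(g.inv())     # [[1, -a, a*c - b], [0, 1, -c], [0, 0, 1]]

# unipotence: (g - 1)^3 = 0, since g - 1 is strictly upper triangular
print((g - sp.eye(3)) ** 3)                        # zero matrix
```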
The group $\mathrm {GL} (1)$ is called the multiplicative group, usually denoted by $\mathbf {G} _{\mathrm {m} }$. The group of $k$-points $\mathbf {G} _{\mathrm {m} }(k)$ is the multiplicative group $k^{*}$ of nonzero elements of the field $k$. The additive group $\mathbf {G} _{\mathrm {a} }$, whose $k$-points are isomorphic to the additive group of $k$, can also be expressed as a matrix group, for example as the subgroup $U$ in $\mathrm {GL} (2)$ : ${\begin{pmatrix}1&*\\0&1\end{pmatrix}}.$ These two basic examples of commutative linear algebraic groups, the multiplicative and additive groups, behave very differently in terms of their linear representations (as algebraic groups). Every representation of the multiplicative group $\mathbf {G} _{\mathrm {m} }$ is a direct sum of irreducible representations. (Its irreducible representations all have dimension 1, of the form $x\mapsto x^{n}$ for an integer $n$.) By contrast, the only irreducible representation of the additive group $\mathbf {G} _{\mathrm {a} }$ is the trivial representation. So every representation of $\mathbf {G} _{\mathrm {a} }$ (such as the 2-dimensional representation above) is an iterated extension of trivial representations, not a direct sum (unless the representation is trivial). The structure theory of linear algebraic groups analyzes any linear algebraic group in terms of these two basic groups and their generalizations, tori and unipotent groups, as discussed below. Definitions For an algebraically closed field k, much of the structure of an algebraic variety X over k is encoded in its set X(k) of k-rational points, which allows an elementary definition of a linear algebraic group. First, define a function from the abstract group GL(n,k) to k to be regular if it can be written as a polynomial in the entries of an n×n matrix A and in 1/det(A), where det is the determinant. Then a linear algebraic group G over an algebraically closed field k is a subgroup G(k) of the abstract group GL(n,k) for some natural number n such that G(k) is defined by the vanishing of some set of regular functions. For an arbitrary field k, algebraic varieties over k are defined as a special case of schemes over k. In that language, a linear algebraic group G over a field k is a smooth closed subgroup scheme of GL(n) over k for some natural number n. In particular, G is defined by the vanishing of some set of regular functions on GL(n) over k, and these functions must have the property that for every commutative k-algebra R, G(R) is a subgroup of the abstract group GL(n,R). (Thus an algebraic group G over k is not just the abstract group G(k), but rather the whole family of groups G(R) for commutative k-algebras R; this is the philosophy of describing a scheme by its functor of points.) In either language, one has the notion of a homomorphism of linear algebraic groups. For example, when k is algebraically closed, a homomorphism from G ⊂ GL(m) to H ⊂ GL(n) is a homomorphism of abstract groups G(k) → H(k) which is defined by regular functions on G. This makes the linear algebraic groups over k into a category. In particular, this defines what it means for two linear algebraic groups to be isomorphic. 
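The contrast drawn above between $\mathbf {G} _{\mathrm {m} }$ and $\mathbf {G} _{\mathrm {a} }$ can also be checked by hand. The following is a minimal SymPy sketch (an added illustration, not from the sources), using the 2-dimensional matrix realization of $\mathbf {G} _{\mathrm {a} }$ given earlier.

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2')

def rho(t):
    # the 2-dimensional representation of the additive group G_a
    return sp.Matrix([[1, t], [0, 1]])

# homomorphism property: rho(t1) * rho(t2) == rho(t1 + t2)
print(rho(t1) * rho(t2) - rho(t1 + t2))        # zero matrix

# rho(1) is not diagonalizable, so the representation is an iterated
# extension of trivial representations rather than a direct sum of them
print(rho(sp.Integer(1)).is_diagonalizable())  # False
```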
In the language of schemes, a linear algebraic group G over a field k is in particular a group scheme over k, meaning a scheme over k together with a k-point 1 ∈ G(k) and morphisms $m\colon G\times _{k}G\to G,\;i\colon G\to G$ over k which satisfy the usual axioms for the multiplication and inverse maps in a group (associativity, identity, inverses). A linear algebraic group is also smooth and of finite type over k, and it is affine (as a scheme). Conversely, every affine group scheme G of finite type over a field k has a faithful representation into GL(n) over k for some n.[1] An example is the embedding of the additive group Ga into GL(2), as mentioned above. As a result, one can think of linear algebraic groups either as matrix groups or, more abstractly, as smooth affine group schemes over a field. (Some authors use "linear algebraic group" to mean any affine group scheme of finite type over a field.) For a full understanding of linear algebraic groups, one has to consider more general (non-smooth) group schemes. For example, let k be an algebraically closed field of characteristic p > 0. Then the homomorphism f: Gm → Gm defined by $x\mapsto x^{p}$ induces an isomorphism of abstract groups k* → k*, but f is not an isomorphism of algebraic groups (because $x^{1/p}$ is not a regular function). In the language of group schemes, there is a clearer reason why f is not an isomorphism: f is surjective, but it has nontrivial kernel, namely the group scheme μp of pth roots of unity. This issue does not arise in characteristic zero. Indeed, every group scheme of finite type over a field k of characteristic zero is smooth over k.[2] A group scheme of finite type over any field k is smooth over k if and only if it is geometrically reduced, meaning that the base change $G_{\overline {k}}$ is reduced, where ${\overline {k}}$ is an algebraic closure of k.[3] Since an affine scheme X is determined by its ring O(X) of regular functions, an affine group scheme G over a field k is determined by the ring O(G) with its structure of a Hopf algebra (coming from the multiplication and inverse maps on G). This gives an equivalence of categories (reversing arrows) between affine group schemes over k and commutative Hopf algebras over k. For example, the Hopf algebra corresponding to the multiplicative group Gm = GL(1) is the Laurent polynomial ring $k[x,x^{-1}]$, with comultiplication given by $x\mapsto x\otimes x.$
One may ask to what extent the properties of a connected linear algebraic group G over a field k are determined by the abstract group G(k). A useful result in this direction is that if the field k is perfect (for example, of characteristic zero), or if G is reductive (as defined below), then G is unirational over k. Therefore, if in addition k is infinite, the group G(k) is Zariski dense in G.[4] For example, under the assumptions mentioned, G is commutative, nilpotent, or solvable if and only if G(k) has the corresponding property. The assumption of connectedness cannot be omitted in these results. For example, let G be the group μ3 ⊂ GL(1) of cube roots of unity over the rational numbers Q. Then G is a linear algebraic group over Q for which G(Q) = 1 is not Zariski dense in G, because $G({\overline {\mathbf {Q} }})$ is a group of order 3. Over an algebraically closed field, there is a stronger result about algebraic groups as algebraic varieties: every connected linear algebraic group over an algebraically closed field is a rational variety.[5] The Lie algebra of an algebraic group The Lie algebra ${\mathfrak {g}}$ of an algebraic group G can be defined in several equivalent ways: as the tangent space $T_{1}(G)$ at the identity element 1 ∈ G(k), or as the space of left-invariant derivations. If k is algebraically closed, a derivation D: O(G) → O(G) over k of the coordinate ring of G is left-invariant if $D\lambda _{x}=\lambda _{x}D$ for every x in G(k), where $\lambda _{x}$: O(G) → O(G) is induced by left multiplication by x. For an arbitrary field k, left invariance of a derivation is defined as an analogous equality of two linear maps O(G) → O(G) ⊗ O(G).[6] The Lie bracket of two derivations is defined by $[D_{1},D_{2}]=D_{1}D_{2}-D_{2}D_{1}$. The passage from G to ${\mathfrak {g}}$ is thus a process of differentiation. For an element x ∈ G(k), the derivative at 1 ∈ G(k) of the conjugation map G → G, $g\mapsto xgx^{-1}$, is an automorphism of ${\mathfrak {g}}$, giving the adjoint representation: $\operatorname {Ad} \colon G\to \operatorname {Aut} ({\mathfrak {g}}).$ Over a field of characteristic zero, a connected subgroup H of a linear algebraic group G is uniquely determined by its Lie algebra ${\mathfrak {h}}\subset {\mathfrak {g}}$.[7] But not every Lie subalgebra of ${\mathfrak {g}}$ corresponds to an algebraic subgroup of G, as one sees in the example of the torus $G=(\mathbf {G} _{\mathrm {m} })^{2}$ over C. In positive characteristic, there can be many different connected subgroups of a group G with the same Lie algebra (again, the torus $G=(\mathbf {G} _{\mathrm {m} })^{2}$ provides examples). For these reasons, although the Lie algebra of an algebraic group is important, the structure theory of algebraic groups requires more global tools. Semisimple and unipotent elements Main article: Jordan–Chevalley decomposition For an algebraically closed field k, a matrix g in GL(n,k) is called semisimple if it is diagonalizable, and unipotent if the matrix g − 1 is nilpotent. Equivalently, g is unipotent if all eigenvalues of g are equal to 1. The Jordan canonical form for matrices implies that every element g of GL(n,k) can be written uniquely as a product $g=g_{\mathrm {ss} }g_{\mathrm {u} }$ such that $g_{\mathrm {ss} }$ is semisimple, $g_{\mathrm {u} }$ is unipotent, and $g_{\mathrm {ss} }$ and $g_{\mathrm {u} }$ commute with each other. For any field k, an element g of GL(n,k) is said to be semisimple if it becomes diagonalizable over the algebraic closure of k. If the field k is perfect, then the semisimple and unipotent parts of g also lie in GL(n,k).
Finally, for any linear algebraic group G ⊂ GL(n) over a field k, define a k-point of G to be semisimple or unipotent if it is semisimple or unipotent in GL(n,k). (These properties are in fact independent of the choice of a faithful representation of G.) If the field k is perfect, then the semisimple and unipotent parts of a k-point of G are automatically in G. That is (the Jordan decomposition): every element g of G(k) can be written uniquely as a product $g=g_{\mathrm {ss} }g_{\mathrm {u} }$ in G(k) such that $g_{\mathrm {ss} }$ is semisimple, $g_{\mathrm {u} }$ is unipotent, and $g_{\mathrm {ss} }$ and $g_{\mathrm {u} }$ commute with each other.[8] This reduces the problem of describing the conjugacy classes in G(k) to the semisimple and unipotent cases. Tori Main article: Algebraic torus A torus over an algebraically closed field k means a group isomorphic to $(\mathbf {G} _{\mathrm {m} })^{n}$, the product of n copies of the multiplicative group over k, for some natural number n. For a linear algebraic group G, a maximal torus in G means a torus in G that is not contained in any bigger torus. For example, the group of diagonal matrices in GL(n) over k is a maximal torus in GL(n), isomorphic to $(\mathbf {G} _{\mathrm {m} })^{n}$. A basic result of the theory is that any two maximal tori in a group G over an algebraically closed field k are conjugate by some element of G(k).[9] The rank of G means the dimension of any maximal torus. For an arbitrary field k, a torus T over k means a linear algebraic group over k whose base change $T_{\overline {k}}$ to the algebraic closure of k is isomorphic to $(\mathbf {G} _{\mathrm {m} })^{n}$ over ${\overline {k}}$, for some natural number n. A split torus over k means a group isomorphic to $(\mathbf {G} _{\mathrm {m} })^{n}$ over k for some n. An example of a non-split torus over the real numbers R is $T=\{(x,y)\in A_{\mathbf {R} }^{2}:x^{2}+y^{2}=1\},$ with group structure given by the formula for multiplying complex numbers x+iy. Here T is a torus of dimension 1 over R. It is not split, because its group of real points T(R) is the circle group, which is not isomorphic even as an abstract group to Gm(R) = R*. Every point of a torus over a field k is semisimple. Conversely, if G is a connected linear algebraic group such that every element of $G({\overline {k}})$ is semisimple, then G is a torus.[10] For a linear algebraic group G over a general field k, one cannot expect all maximal tori in G over k to be conjugate by elements of G(k). For example, both the multiplicative group Gm and the circle group T above occur as maximal tori in SL(2) over R. However, it is always true that any two maximal split tori in G over k (meaning split tori in G that are not contained in a bigger split torus) are conjugate by some element of G(k).[11] As a result, it makes sense to define the k-rank or split rank of a group G over k as the dimension of any maximal split torus in G over k. For any maximal torus T in a linear algebraic group G over a field k, Grothendieck showed that $T_{\overline {k}}$ is a maximal torus in $G_{\overline {k}}$.[12] It follows that any two maximal tori in G over a field k have the same dimension, although they need not be isomorphic. Unipotent groups Let $U_{n}$ be the group of upper-triangular matrices in GL(n) with diagonal entries equal to 1, over a field k. A group scheme over a field k (for example, a linear algebraic group) is called unipotent if it is isomorphic to a closed subgroup scheme of $U_{n}$ for some n. It is straightforward to check that the group $U_{n}$ is nilpotent. As a result, every unipotent group scheme is nilpotent.
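As a concrete instance of the Jordan decomposition $g=g_{\mathrm {ss} }g_{\mathrm {u} }$ described above, the following minimal SymPy sketch (an added illustration, not from the sources) verifies the decomposition of a single 2×2 Jordan block by hand.

```python
import sympy as sp

g = sp.Matrix([[2, 1], [0, 2]])        # a 2x2 Jordan block, eigenvalue 2

g_ss = sp.Matrix([[2, 0], [0, 2]])     # semisimple part: the diagonal
g_u = g_ss.inv() * g                   # unipotent part: [[1, 1/2], [0, 1]]

print(g_ss * g_u == g)                 # True: g = g_ss g_u
print(g_ss * g_u == g_u * g_ss)        # True: the two parts commute
print(g_ss.is_diagonalizable())        # True: g_ss is semisimple
print((g_u - sp.eye(2)) ** 2)          # zero matrix, so g_u is unipotent
```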
A linear algebraic group G over a field k is unipotent if and only if every element of $G({\overline {k}})$ is unipotent.[13] The group Bn of upper-triangular matrices in GL(n) is a semidirect product $B_{n}=T_{n}\ltimes U_{n},$ where Tn is the diagonal torus (Gm)n. More generally, every connected solvable linear algebraic group is a semidirect product of a torus with a unipotent group, T ⋉ U.[14] A smooth connected unipotent group over a perfect field k (for example, an algebraically closed field) has a composition series with all quotient groups isomorphic to the additive group Ga.[15] Borel subgroups The Borel subgroups are important for the structure theory of linear algebraic groups. For a linear algebraic group G over an algebraically closed field k, a Borel subgroup of G means a maximal smooth connected solvable subgroup. For example, one Borel subgroup of GL(n) is the subgroup B of upper-triangular matrices (all entries below the diagonal are zero). A basic result of the theory is that any two Borel subgroups of a connected group G over an algebraically closed field k are conjugate by some element of G(k).[16] (A standard proof uses the Borel fixed-point theorem: for a connected solvable group G acting on a proper variety X over an algebraically closed field k, there is a k-point in X which is fixed by the action of G.) The conjugacy of Borel subgroups in GL(n) amounts to the Lie–Kolchin theorem: every smooth connected solvable subgroup of GL(n) is conjugate to a subgroup of the upper-triangular subgroup in GL(n). For an arbitrary field k, a Borel subgroup B of G is defined to be a subgroup over k such that, over an algebraic closure ${\overline {k}}$ of k, $B_{\overline {k}}$ is a Borel subgroup of $G_{\overline {k}}$. Thus G may or may not have a Borel subgroup over k. For a closed subgroup scheme H of G, the quotient space G/H is a smooth quasi-projective scheme over k.[17] A smooth subgroup P of a connected group G is called parabolic if G/P is projective over k (or equivalently, proper over k). An important property of Borel subgroups B is that G/B is a projective variety, called the flag variety of G. That is, Borel subgroups are parabolic subgroups. More precisely, for k algebraically closed, the Borel subgroups are exactly the minimal parabolic subgroups of G; conversely, every subgroup containing a Borel subgroup is parabolic.[18] So one can list all parabolic subgroups of G (up to conjugation by G(k)) by listing all the linear algebraic subgroups of G that contain a fixed Borel subgroup. For example, the subgroups P ⊂ GL(3) over k that contain the Borel subgroup B of upper-triangular matrices are B itself, the whole group GL(3), and the intermediate subgroups $\left\{{\begin{bmatrix}*&*&*\\0&*&*\\0&*&*\end{bmatrix}}\right\}$ and $\left\{{\begin{bmatrix}*&*&*\\*&*&*\\0&0&*\end{bmatrix}}\right\}.$ The corresponding projective homogeneous varieties GL(3)/P are (respectively): the flag manifold of all chains of linear subspaces $0\subset V_{1}\subset V_{2}\subset A_{k}^{3}$ with Vi of dimension i; a point; the projective space P2 of lines (1-dimensional linear subspaces) in A3; and the dual projective space P2 of planes in A3. Semisimple and reductive groups Main article: Reductive group A connected linear algebraic group G over an algebraically closed field is called semisimple if every smooth connected solvable normal subgroup of G is trivial.
More generally, a connected linear algebraic group G over an algebraically closed field is called reductive if every smooth connected unipotent normal subgroup of G is trivial.[19] (Some authors do not require reductive groups to be connected.) A semisimple group is reductive. A group G over an arbitrary field k is called semisimple or reductive if $G_{\overline {k}}$ is semisimple or reductive. For example, the group SL(n) of n × n matrices with determinant 1 over any field k is semisimple, whereas a nontrivial torus is reductive but not semisimple. Likewise, GL(n) is reductive but not semisimple (because its center Gm is a nontrivial smooth connected solvable normal subgroup). Every compact connected Lie group has a complexification, which is a complex reductive algebraic group. In fact, this construction gives a one-to-one correspondence between compact connected Lie groups and complex reductive groups, up to isomorphism.[20] A linear algebraic group G over a field k is called simple (or k-simple) if it is semisimple, nontrivial, and every smooth connected normal subgroup of G over k is trivial or equal to G.[21] (Some authors call this property "almost simple".) This differs slightly from the terminology for abstract groups, in that a simple algebraic group may have nontrivial center (although the center must be finite). For example, for any integer n at least 2 and any field k, the group SL(n) over k is simple, and its center is the group scheme μn of nth roots of unity. Every connected linear algebraic group G over a perfect field k is (in a unique way) an extension of a reductive group R by a smooth connected unipotent group U, called the unipotent radical of G: $1\to U\to G\to R\to 1.$ If k has characteristic zero, then one has the more precise Levi decomposition: every connected linear algebraic group G over k is a semidirect product $R\ltimes U$ of a reductive group by a unipotent group.[22] Classification of reductive groups Main article: Reductive group Reductive groups include the most important linear algebraic groups in practice, such as the classical groups: GL(n), SL(n), the orthogonal groups SO(n) and the symplectic groups Sp(2n). On the other hand, the definition of reductive groups is quite "negative", and it is not clear that one can expect to say much about them. Remarkably, Claude Chevalley gave a complete classification of the reductive groups over an algebraically closed field: they are determined by root data.[23] In particular, simple groups over an algebraically closed field k are classified (up to quotients by finite central subgroup schemes) by their Dynkin diagrams. It is striking that this classification is independent of the characteristic of k. For example, the exceptional Lie groups G2, F4, E6, E7, and E8 can be defined in any characteristic (and even as group schemes over Z). The classification of finite simple groups says that most finite simple groups arise as the group of k-points of a simple algebraic group over a finite field k, or as minor variants of that construction. Every reductive group over a field is the quotient by a finite central subgroup scheme of the product of a torus and some simple groups. For example, $GL(n)\cong (G_{m}\times SL(n))/\mu _{n}.$ For an arbitrary field k, a reductive group G is called split if it contains a split maximal torus over k (that is, a split torus in G which remains maximal over an algebraic closure of k). For example, GL(n) is a split reductive group over any field k. 
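Before turning to the classification, here is a brute-force sanity check of the statement above that the center of SL(n) is μn, in the smallest case n = 2. The SymPy computation below is my own illustration: an element commuting with the two standard unipotent generators must be scalar, and the determinant condition then forces the scalar to satisfy t² = 1.

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
g = sp.Matrix([[a, b], [c, d]])

# The standard unipotent generators of SL(2).
e = sp.Matrix([[1, 1], [0, 1]])
f = sp.Matrix([[1, 0], [1, 1]])

# Central elements commute with both generators and have determinant 1.
eqs = list(g*e - e*g) + list(g*f - f*g) + [g.det() - 1]
solutions = sp.solve(eqs, [a, b, c, d], dict=True)
print(solutions)
# [{a: -1, b: 0, c: 0, d: -1}, {a: 1, b: 0, c: 0, d: 1}]
# i.e. the scalar matrices +I and -I: the group scheme mu_2.
```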
Chevalley showed that the classification of split reductive groups is the same over any field. By contrast, the classification of arbitrary reductive groups can be hard, depending on the base field. For example, every nondegenerate quadratic form q over a field k determines a reductive group SO(q), and every central simple algebra A over k determines a reductive group SL1(A). As a result, the problem of classifying reductive groups over k essentially includes the problem of classifying all quadratic forms over k or all central simple algebras over k. These problems are easy for k algebraically closed, and they are understood for some other fields such as number fields, but for arbitrary fields there are many open questions. Applications Representation theory One reason for the importance of reductive groups comes from representation theory. Every irreducible representation of a unipotent group is trivial. More generally, for any linear algebraic group G written as an extension $1\to U\to G\to R\to 1$ with U unipotent and R reductive, every irreducible representation of G factors through R.[24] This focuses attention on the representation theory of reductive groups. (To be clear, the representations considered here are representations of G as an algebraic group. Thus, for a group G over a field k, the representations are on k-vector spaces, and the action of G is given by regular functions. It is an important but different problem to classify continuous representations of the group G(R) for a real reductive group G, or similar problems over other fields.) Chevalley showed that the irreducible representations of a split reductive group over a field k are finite-dimensional, and they are indexed by dominant weights.[25] This is the same as what happens in the representation theory of compact connected Lie groups, or the finite-dimensional representation theory of complex semisimple Lie algebras. For k of characteristic zero, all these theories are essentially equivalent. In particular, every representation of a reductive group G over a field of characteristic zero is a direct sum of irreducible representations, and if G is split, the characters of the irreducible representations are given by the Weyl character formula. The Borel–Weil theorem gives a geometric construction of the irreducible representations of a reductive group G in characteristic zero, as spaces of sections of line bundles over the flag manifold G/B. The representation theory of reductive groups (other than tori) over a field of positive characteristic p is less well understood. In this situation, a representation need not be a direct sum of irreducible representations. And although irreducible representations are indexed by dominant weights, the dimensions and characters of the irreducible representations are known only in some cases. Andersen, Jantzen and Soergel (1994) determined these characters (proving Lusztig's conjecture) when the characteristic p is sufficiently large compared to the Coxeter number of the group. For small primes p, there is not even a precise conjecture. Group actions and geometric invariant theory An action of a linear algebraic group G on a variety (or scheme) X over a field k is a morphism $G\times _{k}X\to X$ that satisfies the axioms of a group action. As in other types of group theory, it is important to study group actions, since groups arise naturally as symmetries of geometric objects. 
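A useful toy example, anticipating the construction of quotients in the next paragraph: the group Z/2 acts on the affine plane by swapping the coordinates, its invariant ring is generated by the elementary symmetric polynomials e1 = x + y and e2 = xy, and the quotient A²/(Z/2) = Spec k[e1, e2] is again an affine plane. A minimal SymPy check of my own:

```python
import sympy as sp

x, y = sp.symbols('x y')
swap = lambda p: p.subs({x: y, y: x}, simultaneous=True)  # the Z/2-action

f = x**3 + y**3                       # an invariant polynomial
assert sp.expand(f - swap(f)) == 0    # invariance under the swap

# Every invariant is a polynomial in e1 = x + y and e2 = x*y;
# for instance x**3 + y**3 == e1**3 - 3*e1*e2.
e1, e2 = x + y, x*y
assert sp.expand(f - (e1**3 - 3*e1*e2)) == 0
```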
Part of the theory of group actions is geometric invariant theory, which aims to construct a quotient variety X/G, describing the set of orbits of a linear algebraic group G on X as an algebraic variety. Various complications arise. For example, if X is an affine variety, then one can try to construct X/G as Spec of the ring of invariants O(X)G. However, Masayoshi Nagata showed that the ring of invariants need not be finitely generated as a k-algebra (and so Spec of the ring is a scheme but not a variety), a negative answer to Hilbert's 14th problem. In the positive direction, the ring of invariants is finitely generated if G is reductive, by Haboush's theorem, proved in characteristic zero by Hilbert and Nagata. Geometric invariant theory involves further subtleties when a reductive group G acts on a projective variety X. In particular, the theory defines open subsets of "stable" and "semistable" points in X, with the quotient morphism only defined on the set of semistable points. Related notions Linear algebraic groups admit variants in several directions. Dropping the existence of the inverse map $i\colon G\to G$, one obtains the notion of a linear algebraic monoid.[26] Lie groups For a linear algebraic group G over the real numbers R, the group of real points G(R) is a Lie group, essentially because real polynomials, which describe the multiplication on G, are smooth functions. Likewise, for a linear algebraic group G over C, G(C) is a complex Lie group. Much of the theory of algebraic groups was developed by analogy with Lie groups. There are several reasons why a Lie group may not have the structure of a linear algebraic group over R. • A Lie group with an infinite group of components G/Go cannot be realized as a linear algebraic group. • An algebraic group G over R may be connected as an algebraic group while the Lie group G(R) is not connected, and likewise for simply connected groups. For example, the algebraic group SL(2) is simply connected over any field, whereas the Lie group SL(2,R) has fundamental group isomorphic to the integers Z. The double cover H of SL(2,R), known as the metaplectic group, is a Lie group that cannot be viewed as a linear algebraic group over R. More strongly, H has no faithful finite-dimensional representation. • Anatoly Maltsev showed that every simply connected nilpotent Lie group can be viewed as a unipotent algebraic group G over R in a unique way.[27] (As a variety, G is isomorphic to affine space of some dimension over R.) By contrast, there are simply connected solvable Lie groups that cannot be viewed as real algebraic groups. For example, the universal cover H of the semidirect product S1 ⋉ R2 has center isomorphic to Z, which is not a linear algebraic group, and so H cannot be viewed as a linear algebraic group over R. Abelian varieties Algebraic groups which are not affine behave very differently. In particular, a smooth connected group scheme which is a projective variety over a field is called an abelian variety. In contrast to linear algebraic groups, every abelian variety is commutative. Nonetheless, abelian varieties have a rich theory. Even the case of elliptic curves (abelian varieties of dimension 1) is central to number theory, with applications including the proof of Fermat's Last Theorem. Tannakian categories The finite-dimensional representations of an algebraic group G, together with the tensor product of representations, form a tannakian category RepG. 
In fact, tannakian categories with a "fiber functor" over a field are equivalent to affine group schemes. (Every affine group scheme over a field k is pro-algebraic in the sense that it is an inverse limit of affine group schemes of finite type over k.[28]) For example, the Mumford–Tate group and the motivic Galois group are constructed using this formalism. Certain properties of a (pro-)algebraic group G can be read from its category of representations. For example, over a field of characteristic zero, RepG is a semisimple category if and only if the identity component of G is pro-reductive.[29] See also • The groups of Lie type are the finite simple groups constructed from simple algebraic groups over finite fields. • Lang's theorem • Generalized flag variety, Bruhat decomposition, BN pair, Weyl group, Cartan subgroup, group of adjoint type, parabolic induction • Real form (Lie theory), Satake diagram • Adelic algebraic group, Weil's conjecture on Tamagawa numbers • Langlands classification, Langlands program, geometric Langlands program • Torsor, nonabelian cohomology, special group, cohomological invariant, essential dimension, Kneser–Tits conjecture, Serre's conjecture II • Pseudo-reductive group • Differential Galois theory • Distribution on a linear algebraic group Notes 1. Milne (2017), Corollary 4.10. 2. Milne (2017), Corollary 8.39. 3. Milne (2017), Proposition 1.26(b). 4. Borel (1991), Theorem 18.2 and Corollary 18.4. 5. Borel (1991), Remark 14.14. 6. Milne (2017), section 10.e. 7. Borel (1991), section 7.1. 8. Milne (2017), Theorem 9.18. 9. Borel (1991), Corollary 11.3. 10. Milne (2017), Corollary 17.25 11. Springer (1998), Theorem 15.2.6. 12. Borel (1991), 18.2(i). 13. Milne (2017), Corollary 14.12. 14. Borel (1991), Theorem 10.6. 15. Borel (1991), Theorem 15.4(iii). 16. Borel (1991), Theorem 11.1. 17. Milne (2017), Theorems 7.18 and 8.43. 18. Borel (1991), Corollary 11.2. 19. Milne (2017), Definition 6.46. 20. Bröcker & tom Dieck (1985), section III.8; Conrad (2014), section D.3. 21. Conrad (2014), after Proposition 5.1.17. 22. Conrad (2014), Proposition 5.4.1. 23. Springer (1998), 9.6.2 and 10.1.1. 24. Milne (2017), Lemma 19.16. 25. Milne (2017), Theorem 22.2. 26. Renner, Lex (2006), Linear Algebraic Monoids, Springer. 27. Milne (2017), Theorem 14.37. 28. Deligne & Milne (1982), Corollary II.2.7. 29. Deligne & Milne (1982), Remark II.2.28. References • Andersen, H. H.; Jantzen, J. C.; Soergel, W. (1994), Representations of Quantum Groups at a pth Root of Unity and of Semisimple Groups in Characteristic p: Independence of p, Astérisque, vol. 220, Société Mathématique de France, ISSN 0303-1179, MR 1272539 • Borel, Armand (1991) [1969], Linear Algebraic Groups (2nd ed.), New York: Springer-Verlag, ISBN 0-387-97370-2, MR 1102012 • Bröcker, Theodor; tom Dieck, Tammo (1985), Representations of Compact Lie Groups, Springer Nature, ISBN 0-387-13678-9, MR 0781344 • Conrad, Brian (2014), "Reductive group schemes" (PDF), Autour des schémas en groupes, vol. 1, Paris: Société Mathématique de France, pp. 93–444, ISBN 978-2-85629-794-0, MR 3309122 • Deligne, Pierre; Milne, J. S. (1982), "Tannakian categories", Hodge Cycles, Motives, and Shimura Varieties, Lecture Notes in Mathematics, vol. 900, Springer Nature, pp. 101–228, ISBN 3-540-11174-3, MR 0654325 • De Medts, Tom (2019), Linear Algebraic Groups (course notes) (PDF), Ghent University • Humphreys, James E. (1975), Linear Algebraic Groups, Springer, ISBN 0-387-90108-6, MR 0396773 • Kolchin, E. R. 
(1948), "Algebraic matric groups and the Picard–Vessiot theory of homogeneous linear ordinary differential equations", Annals of Mathematics, Second Series, 49 (1): 1–42, doi:10.2307/1969111, ISSN 0003-486X, JSTOR 1969111, MR 0024884 • Milne, J. S. (2017), Algebraic Groups: The Theory of Group Schemes of Finite Type over a Field, Cambridge University Press, ISBN 978-1107167483, MR 3729270 • Springer, Tonny A. (1998) [1981], Linear Algebraic Groups (2nd ed.), New York: Birkhäuser, ISBN 0-8176-4021-5, MR 1642713 External links • "Linear algebraic group", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Vexillary permutation In mathematics, a vexillary permutation is a permutation μ of the positive integers containing no subpermutation isomorphic to the permutation (2143); in other words, there do not exist four numbers i < j < k < l with μ(j) < μ(i) < μ(l) < μ(k) (a brute-force test of this condition is sketched after the references). They were introduced by Lascoux and Schützenberger (1982, 1985). The word "vexillary" means flag-like, and comes from the fact that vexillary permutations are related to flags of modules. Guibert, Pergola & Pinzani (2001) showed that vexillary involutions are enumerated by Motzkin numbers. See also • Riffle shuffle permutation, a subclass of the vexillary permutations References • Guibert, O.; Pergola, E.; Pinzani, R. (2001), "Vexillary involutions are enumerated by Motzkin numbers", Annals of Combinatorics, 5 (2): 153–174, doi:10.1007/PL00001297, ISSN 0218-0006, MR 1904383 • Lascoux, Alain; Schützenberger, Marcel-Paul (1982), "Polynômes de Schubert", Comptes Rendus de l'Académie des Sciences, Série I, 294 (13): 447–450, ISSN 0249-6291, MR 0660739 • Lascoux, Alain; Schützenberger, Marcel-Paul (1985), "Schubert polynomials and the Littlewood–Richardson rule", Letters in Mathematical Physics, 10 (2): 111–124, doi:10.1007/BF00398147, ISSN 0377-9017, MR 0815233 • Macdonald, I.G. (1991), Notes on Schubert polynomials, Publications du Laboratoire de combinatoire et d'informatique mathématique, vol. 6, Laboratoire de combinatoire et d'informatique mathématique (LACIM), Université du Québec à Montréal, ISBN 978-2-89276-086-6
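The pattern-avoidance condition in the definition above is straightforward to test by brute force for small permutations. The following Python sketch (my own illustration; the function name is hypothetical) also spot-checks the Motzkin-number count of vexillary involutions for n = 4, where the only non-vexillary involution is 2143 itself.

```python
from itertools import combinations, permutations

def is_vexillary(w):
    """w is a permutation of 1..n as a tuple; vexillary means 2143-avoiding."""
    return not any(
        w[j] < w[i] < w[l] < w[k]
        for i, j, k, l in combinations(range(len(w)), 4)
    )

assert not is_vexillary((2, 1, 4, 3))   # the forbidden pattern itself

# Vexillary involutions of S_4: there are 9, the 4th Motzkin number.
involutions = [w for w in permutations(range(1, 5))
               if all(w[w[i] - 1] == i + 1 for i in range(4))]
assert sum(map(is_vexillary, involutions)) == 9
```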
Vickrey auction A Vickrey auction or sealed-bid second-price auction (SBSPA) is a type of sealed-bid auction. Bidders submit written bids without knowing the bids of the other participants in the auction. The highest bidder wins, but the price paid is the second-highest bid. This type of auction is strategically similar to an English auction and gives bidders an incentive to bid their true value. The auction was first described academically by Columbia University professor William Vickrey in 1961,[1] though it had been used by stamp collectors since 1893.[2] In 1797 Johann Wolfgang von Goethe sold a manuscript using a sealed-bid, second-price auction.[3] Vickrey's original paper mainly considered auctions where only a single, indivisible good is being sold. The terms Vickrey auction and second-price sealed-bid auction are, in this case only, equivalent and used interchangeably. In the case of multiple identical goods, the bidders submit inverse demand curves and pay the opportunity cost.[4] Vickrey auctions are much studied in economic literature but uncommon in practice. Generalized variants of the Vickrey auction for multiunit auctions exist, such as the generalized second-price auction used in Google's and Yahoo!'s online advertisement programmes[5][6] (not incentive compatible) and the Vickrey–Clarke–Groves auction (incentive compatible). Properties Self-revelation and incentive compatibility In a Vickrey auction with private values each bidder maximizes their expected utility by bidding (revealing) their valuation of the item for sale. These types of auctions are sometimes used for specified pool trading in the agency mortgage-backed securities (MBS) market. Ex-post efficiency A Vickrey auction is decision efficient (the winner is the bidder with the highest valuation) under the most general circumstances; it thus provides a baseline model against which the efficiency properties of other types of auctions can be posited. It is only ex-post efficient (sum of transfers equal to zero) if the seller is included as "player zero," whose transfer equals the negative of the sum of the other players' transfers (i.e. the bids). Weaknesses • It does not allow for price discovery, that is, discovery of the market price if the buyers are unsure of their own valuations, without sequential auctions. • Sellers may use shill bids to increase profit.
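The second-price rule described above fits in a few lines of code. A minimal sketch (my own illustration; the function name is hypothetical, and ties would need an explicit tie-breaking rule in practice):

```python
def second_price_outcome(bids):
    """Sealed-bid second-price auction.  bids maps bidder -> bid.
    Returns (winner, price): the highest bidder wins but pays
    the second-highest bid."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], bids[ranked[1]]

# Bidder A wins and pays B's bid of 7, not her own bid of 10.
assert second_price_outcome({'A': 10, 'B': 7, 'C': 3}) == ('A', 7)
```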
Proof of dominance of truthful bidding The dominant strategy in a Vickrey auction with a single, indivisible item is for each bidder to bid their true value of the item.[7] Let $v_{i}$ be bidder i's value for the item. Let $b_{i}$ be bidder i's bid for the item. The payoff for bidder i is ${\begin{cases}v_{i}-\max _{j\neq i}b_{j}&{\text{if }}b_{i}>\max _{j\neq i}b_{j}\\0&{\text{otherwise}}\end{cases}}$ The strategy of overbidding is dominated by bidding truthfully. Assume that bidder i bids $b_{i}>v_{i}$. If $\max _{j\neq i}b_{j}<v_{i}$ then the bidder would win the item with a truthful bid as well as an overbid. The bid's amount does not change the payoff so the two strategies have equal payoffs in this case. If $\max _{j\neq i}b_{j}>b_{i}$ then the bidder would lose the item either way so the strategies have equal payoffs in this case. If $v_{i}<\max _{j\neq i}b_{j}<b_{i}$ then only the strategy of overbidding would win the auction. The payoff would be negative for the strategy of overbidding because they paid more than their value of the item, while the payoff for a truthful bid would be zero. Thus the strategy of bidding higher than one's true valuation is dominated by the strategy of truthfully bidding. The strategy of underbidding is dominated by bidding truthfully. Assume that bidder i bids $b_{i}<v_{i}$. If $\max _{j\neq i}b_{j}>v_{i}$ then the bidder would lose the item with a truthful bid as well as an underbid, so the strategies have equal payoffs for this case. If $\max _{j\neq i}b_{j}<b_{i}$ then the bidder would win the item either way so the strategies have equal payoffs in this case. If $b_{i}<\max _{j\neq i}b_{j}<v_{i}$ then only the strategy of truthfully bidding would win the auction. The payoff for the truthful strategy would be positive as they paid less than their value of the item, while the payoff for an underbid bid would be zero. Thus the strategy of underbidding is dominated by the strategy of truthfully bidding. Truthful bidding dominates the other possible strategies (underbidding and overbidding) so it is an optimal strategy. Revenue equivalence of the Vickrey auction and sealed first price auction The two most common auctions are the sealed first price (or high-bid) auction and the open ascending price (or English) auction. In the former each buyer submits a sealed bid. The high bidder is awarded the item and pays his or her bid. In the latter, the auctioneer announces successively higher asking prices and continues until no one is willing to accept a higher price. Suppose that a buyer's valuation is $v$ and the current asking price is $b$. If $v<b$, then the buyer loses by raising his hand. If $v>b$ and the buyer is not the current high bidder, it is more profitable to bid than to let someone else be the winner. Thus it is a dominant strategy for a buyer to drop out of the bidding when the asking price reaches his or her valuation. Thus, just as in the Vickrey sealed second price auction, the price paid by the buyer with the highest valuation is equal to the second highest value. Consider then the expected payment in the sealed second-price auction. Vickrey considered the case of two buyers and assumed that each buyer's value was an independent draw from a uniform distribution with support $[0,1]$. With buyers bidding according to their dominant strategies, a buyer with valuation $v$ wins if his opponent's value $x<v$. Suppose that $v$ is the high value. 
Then the winning payment is uniformly distributed on the interval $[0,v]$ and so the expected payment of the winner is $e(v)={\tfrac {1}{2}}v$. We now argue that in the sealed first price auction the equilibrium bid of a buyer with valuation $v$ is $B(v)=e(v)={\tfrac {1}{2}}v$. That is, the payment of the winner in the sealed first-price auction is equal to the expected revenue in the sealed second-price auction. Proof of revenue equivalence Suppose that buyer 2 bids according to the strategy $B(v)=v/2$, where $B(v)$ is the buyer's bid for a valuation $v$. We need to show that buyer 1's best response is to use the same strategy. Note first that if buyer 2 uses the strategy $B(v)=v/2$, then buyer 2's maximum bid is $B(1)=1/2$ and so buyer 1 wins with probability 1 with any bid of 1/2 or more. Consider then a bid $b$ on the interval $[0,1/2]$. Let buyer 2's value be $x$. Then buyer 1 wins if $B(x)=x/2<b$, that is, if $x<2b$. Under Vickrey's assumption of uniformly distributed values, the win probability is $w(b)=2b$. Buyer 1's expected payoff is therefore $U(b)=w(b)(v-b)=2b(v-b)={\tfrac {1}{2}}\left[v^{2}-(v-2b)^{2}\right],$ which takes on its maximum at $b=v/2=B(v)$. Use in network routing In network routing, VCG mechanisms are a family of payment schemes based on the added value concept. The basic idea of a VCG mechanism in network routing is to pay the owner of each link or node (depending on the network model) that is part of the solution, its declared cost plus its added value. In many routing problems, this mechanism is not only strategyproof, but also the minimum among all strategyproof mechanisms. In the case of network flows, unicast or multicast, a minimum cost flow (MCF) in graph G is calculated based on the declared costs dk of each of the links and payment is calculated as follows: Each link (or node) $e_{k}$ in the MCF is paid $p_{k}=d_{k}+MCF(G-e_{k})-MCF(G),$ where MCF(G) indicates the cost of the minimum cost flow in graph G and G − ek indicates graph G without the link ek. Links not in the MCF are paid nothing. This routing problem is one of the cases for which VCG is strategyproof and minimum. In 2004, it was shown that the expected VCG overpayment of an Erdős–Rényi random graph with n nodes and edge probability p, $G\in G(n,p)$, approaches ${\frac {p}{2-p}}$ as n approaches $\infty $, for $np=\omega ({\sqrt {n\log n}})$. Prior to this result, it was known that VCG overpayment in G(n, p) is $\Omega \left({\frac {1}{np}}\right)$ and $O(1)$ with high probability given $np=\omega (\log n)$. Generalizations The most obvious generalization to multiple or divisible goods is to have all winning bidders pay the amount of the highest non-winning bid. This is known as a uniform price auction. The uniform-price auction does not, however, result in bidders bidding their true valuations as they do in a second-price auction unless each bidder has demand for only a single unit. A generalization of the Vickrey auction that maintains the incentive to bid truthfully is known as the Vickrey–Clarke–Groves (VCG) mechanism. The idea in VCG is that items are assigned to maximize the sum of utilities; then each bidder pays the "opportunity cost" that their presence introduces to all the other players. This opportunity cost for a bidder is defined as the total bids of all the other bidders that would have won if the first bidder had not bid, minus the total bids of all the other actual winning bidders.
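The opportunity-cost rule can be stated compactly in code. Below is a brute-force sketch of my own for the toy problem of assigning n distinct items to n bidders (the function name is hypothetical); with one real item and a worthless dummy item it reproduces the second-price rule of the single-item Vickrey auction.

```python
from itertools import permutations

def vcg(valuations):
    """valuations[i][j] = bidder i's value for item j (square matrix).
    Returns the welfare-maximizing assignment and each bidder's VCG payment:
    (best welfare of the others without i) - (the others' welfare with i)."""
    n = len(valuations)
    def welfare(assignment, excluded=None):
        return sum(valuations[i][assignment[i]]
                   for i in range(n) if i != excluded)
    best = max(permutations(range(n)), key=welfare)
    payments = [max(welfare(a, excluded=i) for a in permutations(range(n)))
                - welfare(best, excluded=i)
                for i in range(n)]
    return best, payments

# One real item (item 0) and a worthless dummy item: the winner pays the
# second-highest valuation, 7, exactly as in a Vickrey auction.
assert vcg([[10, 0], [7, 0]]) == ((0, 1), [7, 0])
```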
A different kind of generalization is to set a reservation price—a minimum price below which the item is not sold at all. In some cases, setting a reservation price can substantially increase the revenue of the auctioneer. This is an example of Bayesian-optimal mechanism design. In mechanism design, the revelation principle can be viewed as a generalization of the Vickrey auction. See also • Auction theory • First-price sealed-bid auction • VCG auction References • Vijay Krishna, Auction Theory, Academic Press, 2002. • Peter Cramton, Yoav Shoham, Richard Steinberg (Eds), Combinatorial Auctions, MIT Press, 2006, Chapter 1. ISBN 0-262-03342-9. • Paul Milgrom, Putting Auction Theory to Work, Cambridge University Press, 2004. • Teck Ho, "Consumption and Production" UC Berkeley, Haas Class of 2010. Notes 1. Vickrey, William (1961). "Counterspeculation, Auctions, and Competitive Sealed Tenders". The Journal of Finance. 16 (1): 8–37. doi:10.1111/j.1540-6261.1961.tb02789.x. 2. Lucking-Reiley, David (2000). "Vickrey Auctions in Practice: From Nineteenth-Century Philately to Twenty-First-Century E-Commerce". Journal of Economic Perspectives. 14 (3): 183–192. doi:10.1257/jep.14.3.183. 3. Benny Moldovanu and Manfred Tietzel (1998). "Goethe's Second-Price Auction". The Journal of Political Economy. 106 (4): 854–859. CiteSeerX 10.1.1.560.8278. doi:10.1086/250032. JSTOR 2990730. S2CID 53490333. 4. Jones, Derek (2003). "Auction Theory for the New Economy". New Economy Handbook. Emerald Publishing Ltd. ISBN 978-0123891723. 5. Benjamin Edelman, Michael Ostrovsky, and Michael Schwarz: "Internet Advertising and the Generalized Second-Price Auction: Selling Billions of Dollars Worth of Keywords". American Economic Review 97(1), 2007, pp 242–259. 6. Hal R. Varian: "Position Auctions". International Journal of Industrial Organization, 2006, doi:10.1016/j.ijindorg.2006.10.002. 7. von Ahn, Luis (30 September 2008). "Auctions" (PDF). 15–396: Science of the Web Course Notes. Carnegie Mellon University. Archived from the original (PDF) on 8 October 2008. Retrieved 6 November 2008.
Vicky Neale Victoria Neale (1984 – 3 May 2023) was a British mathematician and writer. She was Whitehead Lecturer at Oxford's Mathematical Institute and Supernumerary Fellow at Balliol College.[2][3] Her research specialty was number theory. The author of the 2017 book Closing the Gap: The Quest to Understand Prime Numbers,[4][5] she was interviewed on several BBC radio programs as a mathematics expert.[6][7] In addition, she wrote for The Conversation and The Guardian.[8][9] Her other educational and outreach activities included lecturing at the PROMYS Europe high-school program[10] and helping to organize the European Girls' Mathematical Olympiad.[11] Vicky Neale • Born: Victoria Neale,[1] 1984 • Died: 3 May 2023 (aged 39) • Citizenship: United Kingdom • Alma mater: University of Cambridge (BA, PhD) • Field: Number theory • Institutions: University of Cambridge, University of Oxford • Thesis: Bracket quadratics as asymptotic bases for the natural numbers (2011) • Doctoral advisor: Ben Green • Website: people.maths.ox.ac.uk/neale/ Neale was born in 1984.[12] She obtained her PhD in 2011 from the University of Cambridge. Her thesis work, supervised by Ben Joseph Green, concerned Waring's problem.[2][1] She then taught at Cambridge while being Director of Studies in mathematics at Murray Edwards College,[11][13] before moving to Oxford in the summer of 2014.[14] Neale died on 3 May 2023, at the age of 39.[15] She had been diagnosed with a rare type of cancer in 2021.[16] References 1. Vicky Neale at the Mathematics Genealogy Project 2. Neale, Vicky (3 August 2018). "Homepage". Mathematical Institute, University of Oxford. Retrieved 7 August 2018. 3. "Speakers and Panellists - ACME". Advisory Committee on Mathematics Education. Retrieved 7 August 2018. "BCME 9 Plenary Speakers". British Congress of Mathematics Education. 2018. Retrieved 10 August 2018. 4. Neale, Vicky (2017). Closing the Gap: The Quest to Understand Prime Numbers. Oxford University Press. ISBN 9780198788287. OCLC 1030559953. 5. Reviews of Closing the Gap include the following: • Hunacek, Mark (12 February 2018). "Closing the Gap | Mathematical Association of America". Mathematical Association of America. Retrieved 7 August 2018. • Freiberger, Marianne (12 December 2017). "'Closing the gap'". Plus Magazine. Retrieved 7 August 2018. • Bultheel, Adhemar (February 2018). "Review: Closing the Gap". European Mathematical Society. Retrieved 10 September 2018. • Kalaydzhieva, Nikoleta; Porritt, Sam (28 June 2018). "Closing the Gap". Chalkdust. Retrieved 7 August 2018. • Fried, Michael N. (3 July 2018). "Prime Numbers, Mathematical Pencils, and Massive Collaboration". Mathematical Thinking and Learning. 20 (3): 248–250. doi:10.1080/10986065.2018.1483932. ISSN 1098-6065. 6. Among her appearances are the following: • "Fermat's Last Theorem, In Our Time - BBC Radio 4". BBC. Retrieved 7 August 2018. • "Numbers Numbers Everywhere, Series 10, The Infinite Monkey Cage - BBC Radio 4". BBC. Retrieved 7 August 2018. • "e, In Our Time - BBC Radio 4". BBC. Retrieved 7 August 2018. • "Vicky Neale on the Mathematics of Beauty, A History of Ideas - BBC Radio 4". BBC. Retrieved 7 August 2018. • "Maths: Alex Bellos, Neil deGrasse Tyson, Serafina Cuomo, Vicky Neale, Free Thinking - BBC Radio 3". BBC. Retrieved 7 August 2018. 7. She is also quoted as a mathematics expert in, for example, • Flyn, Cal (10 July 2017). "What Makes Maths Beautiful?". New Humanist. Retrieved 10 August 2018. • Sample, Ian (21 November 2016).
"Magic numbers: can maths equations be beautiful?". The Guardian. Retrieved 10 August 2018. 8. Neale, Vicky (17 February 2017). "Mathematics is beautiful (no, really)". The Conversation. Retrieved 7 August 2018. 9. Neale, Vicky (26 November 2015). "Solving for Xmas: how to make mathematical Christmas cards". The Guardian. Retrieved 7 August 2018. 10. "Annual Report 2016" (PDF). Clay Mathematics Institute. 26 June 2017. Retrieved 7 August 2018. 11. "Principal Faculty | PROMYS-Europe: Program in Mathematics for Young Scientists". promys-europe.org. Retrieved 7 August 2018. 12. Vicky Neale 1984–2023, Balliol College 13. Gowers, Timothy (11 January 2014). "Introduction to Cambridge IA Analysis I 2014". Gowers' Weblog. Retrieved 10 August 2018. 14. "Balliol Maths: a plurality of women". Floreat Domus 2015. Balliol College. Retrieved 10 August 2018. 15. "Vicky Neale | Mathematical Institute". Mathematical Institute, University of Oxford. 4 May 2023. Archived from the original on 4 May 2023. Retrieved 4 May 2023. 16. Dr Vicky Neale (1984-2023), London Mathematical Society Authority control International • ISNI • VIAF National • Germany • United States • Japan • Czech Republic • Netherlands Academics • CiNii • MathSciNet • Mathematics Genealogy Project Other • IdRef
Victor Puiseux Victor Alexandre Puiseux (French: [pɥizø]; 16 April 1820 – 9 September 1883) was a French mathematician and astronomer. Puiseux series are named after him, as is in part the Bertrand–Diquet–Puiseux theorem. His work on algebraic functions and uniformization makes him a direct precursor of Bernhard Riemann, as regards Riemann's work on this subject and his introduction of Riemann surfaces.[1] He was also an accomplished amateur mountaineer. A peak in the French Alps, which he climbed in 1848, is named after him. A species of Israeli gecko, Ptyodactylus puiseuxi, is named in his honor.[2] Life He was born in 1820 in Argenteuil, Val-d'Oise. He occupied the chair of celestial mechanics at the Sorbonne. Excelling in mathematical analysis, he introduced new methods in his account of algebraic functions, and by his contributions to celestial mechanics advanced knowledge in that direction. In 1871, he was unanimously elected to the French Academy of Sciences. One of his sons, Pierre Henri Puiseux, was a famous astronomer. He died in 1883 in Frontenay, France. References 1. Athanase Papadopoulos, "Cauchy and Puiseux: Two precursors of Riemann", in: From Riemann to Differential Geometry and Relativity (L. Ji, A. Papadopoulos and S. Yamada, eds.), Berlin: Springer, 2017, pp. 209–235. 2. Beolens, Bo; Watkins, Michael; Grayson, Michael (2011). The Eponym Dictionary of Reptiles. Baltimore: Johns Hopkins University Press. xiii + 296 pp. ISBN 978-1-4214-0135-5. ("Puiseux", p. 212). • O'Connor, John J.; Robertson, Edmund F., "Victor Alexandre Puiseux", MacTutor History of Mathematics Archive, University of St Andrews • Victor Puiseux at the Mathematics Genealogy Project • This article incorporates text from a publication now in the public domain: Herbermann, Charles, ed. (1913). "Victor-Alexandre Puiseux". Catholic Encyclopedia. New York: Robert Appleton Company.
Victor Ginzburg Victor Ginzburg (born 1957) is a Russian American mathematician who works in representation theory and in noncommutative geometry. He is known for his contributions to geometric representation theory, especially for his work on representations of quantum groups and Hecke algebras, and on the geometric Langlands program (Satake equivalence of categories). He is currently a Professor of Mathematics at the University of Chicago.[1][2] Victor Ginzburg (photographed in 2012 in Oberwolfach) • Born: 1957, Moscow, Russia • Nationality: American • Alma mater: Moscow State University • Known for: Ginzburg dg algebra, Koszul duality • Field: Mathematics • Institutions: University of Chicago • Doctoral advisors: Alexandre Kirillov, Israel Gelfand Career Ginzburg received his Ph.D. at Moscow State University in 1985, under the direction of Alexandre Kirillov and Israel Gelfand. Ginzburg wrote a textbook Representation theory and complex geometry with Neil Chriss on geometric representation theory. A paper by Alexander Beilinson, Ginzburg, and Wolfgang Soergel introduced the concept of Koszul duality (cf. Koszul algebra) and the technique of "mixed categories" to representation theory. Furthermore, Ginzburg and Mikhail Kapranov developed Koszul duality theory for operads. In noncommutative geometry, Ginzburg defined, following earlier ideas of Maxim Kontsevich, the notion of a Calabi–Yau algebra. An important role in the theory of motivic Donaldson–Thomas invariants is played by the so-called "Ginzburg dg algebra", a Calabi–Yau (dg-)algebra of dimension 3 associated with any cyclic potential on the path algebra of a quiver. Selected publications • Beilinson, Alexander; Ginzburg, Victor; Soergel, Wolfgang (1996), "Koszul duality patterns in representation theory" (PDF), Journal of the American Mathematical Society, 9 (2): 473–527, doi:10.1090/S0894-0347-96-00192-0, MR 1322847 • Chriss, Neil; Ginzburg, Victor (1997), Representation theory and complex geometry, Boston, MA: Birkhäuser, MR 1433132 • Etingof, Pavel; Ginzburg, Victor (2002), "Symplectic reflection algebras, Calogero–Moser space, and deformed Harish-Chandra homomorphism", Inventiones Mathematicae, 147 (2): 243–348, arXiv:math/0011114, Bibcode:2002InMat.147..243E, doi:10.1007/s002220100171, MR 1881922, S2CID 119708574 • Ginzburg, Victor (2005). "Lectures on Noncommutative Geometry". arXiv:math/0506603. • Ginzburg, Victor (2006). "Calabi–Yau Algebras". arXiv:math/0612139. • Ginzburg, Victor; Kapranov, Mikhail (1994), "Koszul duality for operads", Duke Mathematical Journal, 76 (1): 203–272, doi:10.1215/S0012-7094-94-07608-4, MR 1301191, S2CID 115166937 References 1. Koppes, Steve (June 8, 2006), "Victor Ginzburg, Professor in Mathematics and the College", The University of Chicago Chronicle. 2. "MMJ: Vol.7 (2007), N.4. - Victor Ginzburg". External links • Victor Ginzburg at the Mathematics Genealogy Project • Wikimedia Commons has media related to Victor Ginzburg.
Victor Moll Victor Hugo Moll (born 1956) is a Chilean American mathematician specializing in calculus. Moll studied at the Universidad Santa María and at New York University, where he received a master's degree in 1982 and a doctorate in 1984 under Henry P. McKean (Stability in the Large for Solitary Wave Solutions to McKean's Nerve Conduction Caricature).[1] He was a postdoctoral fellow at Temple University and became an assistant professor in 1986, an associate professor in 1992, and a full professor in 2001 at Tulane University. In 1990–1991 he was a visiting professor at the University of Utah, in 1999 at the Universidad Técnica Federico Santa María in Valparaíso, and in 1995 a visiting scientist at the Courant Institute of Mathematical Sciences of New York University. He works on classical analysis, symbolic computation and experimental mathematics, special functions, and number theory. Projects Inspired by a 1988 paper in which Ilan Vardi evaluated several integrals from Table of Integrals, Series, and Products,[2] a well-known comprehensive table of integrals originally compiled by the Russian mathematicians Iosif Moiseevich Ryzhik (Russian: Иосиф Моисеевич Рыжик) and Izrail Solomonovich Gradshteyn (Израиль Соломонович Градштейн) in 1943 and subsequently expanded and translated into several languages, Victor Moll and George Boros started a project to prove all integrals listed in Gradshteyn and Ryzhik and to add commentary and references.[3] In the foreword of the book Irresistible Integrals (2004), they wrote:[4] "It took a short time to realize that this task was monumental." Nevertheless, the effort resulted in about 900 entries from Gradshteyn and Ryzhik being discussed in a series of more than 30 articles, of which papers 1 to 28 were published in issues 14 to 26 of Scientia, Universidad Técnica Federico Santa María (UTFSM), between 2007 and 2015,[5] and compiled into the two-volume book series Special Integrals of Gradshteyn and Ryzhik: the Proofs (2014–2015).[6][7] Moll also assisted Daniel Zwillinger in editing the eighth English edition of Gradshteyn and Ryzhik in 2014.[8] Moll also took on the task of revising and expanding the classical landmark work A Course of Modern Analysis by Whittaker and Watson, originally published in 1902 and last revised in 1927; the new edition appeared in 2021. Publications • The evaluation of integrals, a personal story, Notices AMS, 2002, No. 3 • with Henry McKean: Elliptic Curves: function theory, geometry, arithmetic, Cambridge University Press, 1997 • Numbers and functions: from a classical-experimental mathematician's point of view, AMS, 2012 • Editor with Tewodros Amdeberhan: Tapas in experimental mathematics, AMS Special Session on Experimental Mathematics, 5 January 2007, New Orleans, Louisiana, AMS, 2008 • Editor with Tewodros Amdeberhan, Luis A. Medina: Gems in experimental mathematics, AMS Special Session, Experimental Mathematics, 5 January 2009, Washington, DC, AMS, 2010 • with George Boros: Irresistible Integrals: Symbolics, Analysis and Experiments in the Evaluation of Integrals, Cambridge University Press, 2004 • A Course of Modern Analysis, 2021 See also • Gradshteyn and Ryzhik (GR) • Whittaker and Watson References 1. Victor Moll at the Mathematics Genealogy Project 2. Vardi, Ilan (April 1988). "Integrals: An Introduction to Analytic Number Theory" (PDF). American Mathematical Monthly. 95 (4): 308–315. doi:10.2307/2323562. JSTOR 2323562. Archived (PDF) from the original on 2016-03-15. Retrieved 2016-03-14. 3.
Moll, Victor Hugo (April 2010) [2009-08-30]. "Seized Opportunities" (PDF). Notices of the American Mathematical Society. 57 (4): 476–484. Archived (PDF) from the original on 2016-04-08. Retrieved 2016-04-08. 4. Boros, George; Moll, Victor Hugo (2006) [September 2004]. Irresistible Integrals. Symbolics, Analysis and Experiments in the Evaluation of Integrals (reprinted 1st ed.). Cambridge University Press (CUP). p. xi. ISBN 978-0-521-79186-1. Retrieved 2016-02-22. (NB. This edition contains many typographical errors.) 5. Moll, Victor Hugo (2012). "Index of the papers in Revista Scientia with formulas from GR". Retrieved 2016-02-17. 6. Moll, Victor Hugo (2014-10-01). Special Integrals of Gradshteyn and Ryzhik: the Proofs – Volume I. ISBN 978-1-4822-5651-2. Retrieved 2016-02-12. 7. Moll, Victor Hugo (2015-08-24). Special Integrals of Gradshteyn and Ryzhik: the Proofs – Volume II. ISBN 978-1-4822-5653-6. Retrieved 2016-02-12. 8. Gradshteyn, Izrail Solomonovich; Ryzhik, Iosif Moiseevich; Geronimus, Yuri Veniaminovich; Tseytlin, Michail Yulyevich; Jeffrey, Alan (2015) [October 2014]. Zwillinger, Daniel; Moll, Victor Hugo (eds.). Table of Integrals, Series, and Products. Translated by Scripta Technica, Inc. (8 ed.). Academic Press, Inc. ISBN 978-0-12-384933-5. GR:12. Retrieved 2016-02-21. External links • Moll, Victor Hugo. "Victor Hugo Moll". Archived from the original on 2021-12-21. Retrieved 2022-01-22. • Testimonios: Dr. Victor H. Moll (October 15, 2022)
Victor Kac Victor Gershevich (Grigorievich) Kac (Russian: Виктор Гершевич (Григорьевич) Кац; born 19 December 1943) is a Soviet and American mathematician at MIT, known for his work in representation theory. He co-discovered[2] Kac–Moody algebras, and used the Weyl–Kac character formula for them to reprove the Macdonald identities. He classified the finite-dimensional simple Lie superalgebras, and found the Kac determinant formula for the Virasoro algebra. He is also known for the Kac–Weisfeiler conjectures with Boris Weisfeiler. Victor Gershevich Kac • Born: 19 December 1943, Buguruslan, Orenburg Oblast, Russian SFSR • Alma mater: Moscow State University (MS, PhD) • Known for: Kac–Moody algebra, Weyl–Kac character formula, classification of Lie superalgebras, Kac–Weisfeiler conjectures, Kac determinant formula for the Virasoro algebra • Awards: Sloan Research Fellowship (1981), Medal of the Collège de France (1981), Guggenheim Fellowship (1986), Wigner Medal (1996), American Academy of Arts and Sciences (2007), National Academy of Sciences (2013), Steele Prize (2015) • Field: Mathematics • Institutions: MIT • Thesis: Simple Irreducible Graded Lie Algebras of Finite Growth (1968) • Doctoral advisor: Èrnest Borisovich Vinberg[1] Biography Kac studied mathematics at Moscow State University, receiving his MS in 1965 and his PhD in 1968.[3] From 1968 to 1976, he held a teaching position at the Moscow Institute of Electronic Machine Building (MIEM). He left the Soviet Union in 1977, becoming an associate professor of mathematics at MIT. In 1981, he was promoted to full professor. Kac received a Sloan Fellowship and the Medal of the Collège de France, both in 1981, and a Guggenheim Fellowship in 1986. He received the Wigner Medal (1996) "in recognition of work on affine Lie algebras that has had wide influence in theoretical physics". In 1978 he was an Invited Speaker (Highest weight representations of infinite dimensional Lie algebras) at the International Congress of Mathematicians (ICM) in Helsinki. Kac was a plenary speaker at the 1988 American Mathematical Society centennial conference. In 2002 he gave a plenary lecture, Classification of Supersymmetries, at the ICM in Beijing. Kac is a Fellow of the American Mathematical Society,[4] an honorary member of the Moscow Mathematical Society, a Fellow of the American Academy of Arts and Sciences, and a member of the National Academy of Sciences. The research of Victor Kac primarily concerns representation theory and mathematical physics. His work appears in mathematics and physics and in the development of quantum field theory, string theory and the theory of integrable systems. Kac has published 13 books and over 200 articles in mathematics and physics journals and is listed as an ISI highly cited researcher.[5] Victor Kac was awarded the 2015 AMS Leroy P. Steele Prize for Lifetime Achievement.[6] He was married to Michèle Vergne[7] and they have a daughter, Marianne Kac-Vergne, who is a professor of American civilization at the University of Picardie. His brother Boris Katz is a principal research scientist at MIT.[8] Kac–Moody algebra "Almost simultaneously in 1967, Victor Kac in the USSR and Robert Moody in Canada developed what was to become Kac–Moody algebra. Kac and Moody noticed that if Wilhelm Killing's conditions were relaxed, it was still possible to associate to the Cartan matrix a Lie algebra which, necessarily, would be infinite dimensional." – A. J. Coleman[9] Bibliography • Kac, Victor G.
(1994) [1985]. Infinite-Dimensional Lie Algebras (3rd ed.). Cambridge University Press. ISBN 0-521-46693-8. • Kac, V. (1985). Infinite Dimensional Groups with Applications. New York: Springer. ISBN 9781461211044. OCLC 840277997. • Seligman, George B. (1987). "Review: Infinite-dimensional Lie algebras, by Victor G. Kac, 2nd edition" (PDF). Bull. Amer. Math. Soc. (N.S.). 16: 144–149. doi:10.1090/S0273-0979-1987-15492-9. • Kac, Victor G.; Raina, A. K. (1987). Bombay lectures on highest weight representations of infinite-dimensional Lie algebras. Singapore: World Scientific. ISBN 9971503956. OCLC 18475755. • Kac, Victor (1997). Vertex Algebras for Beginners (University Lecture Series, No 10). American Mathematical Society. ISBN 0-8218-0643-2. • Kac, Victor G.; Cheung, Pokman (2002). Quantum calculus. New York: Springer. ISBN 0387953418. OCLC 47243954. • Kac, Victor G.; Raina, A. K. (2013). Bombay Lectures on Highest Weight Representations of Infinite Dimensional Lie Algebras. Advanced Series in Mathematical Physics. Vol. 29 (2nd ed.). World Scientific Publishing. doi:10.1142/8882. ISBN 978-981-4522-18-2. References 1. Mathematics Genealogy Project: https://www.genealogy.math.ndsu.nodak.edu/id.php?id=37054 2. Stephen Berman, Karen Parshall, "Victor Kac and Robert Moody — their paths to Kac–Moody algebras", Mathematical Intelligencer, 2002, Nr. 1 3. Victor Kac, A Biographical Interview: http://dynkincollection.library.cornell.edu/sites/default/files/Victor%20Kac%20%28RI-ED%29.pdf 4. List of Fellows of the American Mathematical Society, retrieved 2013-01-27. 5. "List of ISI highly cited researchers". 6. 2015 AMS Steele Prizes 7. La Gazette des Mathématiciens 165, retrieved 2021-04-22. 8. Negri, Gloria (4 October 2006). "Clara Katz; Soviet émigré saved ailing granddaughter". The Boston Globe. 9. Coleman, A. John, "The Greatest Mathematical Paper of All Time", The Mathematical Intelligencer, vol. 11, no. 3, pp. 29–38. External links • Victor Kac's home page at MIT • Victor Kac at the Mathematics Genealogy Project • Victor Kac, A Biographical Interview
Victor J. Katz Victor Joseph Katz (born 31 December 1942, Philadelphia)[1] is an American mathematician, historian of mathematics, and teacher known for using the history of mathematics in teaching mathematics. Victor Joseph Katz • Born: 31 December 1942, Philadelphia, Pennsylvania, USA • Nationality: American • Occupation: Mathematician • Known for: Using the history of mathematics to teach the subject • Alma mater: Princeton University • Notable works: History of Mathematics: An Introduction (1993) Biography Katz received a bachelor's degree from Princeton University in 1963 and a Ph.D. in mathematics from Brandeis University in 1968 under Maurice Auslander, with the thesis The Brauer group of a regular local ring.[2] He became an assistant professor at Federal City College, then an associate professor in 1973 and, after the merger of Federal City College into the University of the District of Columbia in 1977, a full professor there in 1980. He retired there as professor emeritus in 2005. As a mathematician Katz specializes in algebra, but he is mainly known for his work on the history of mathematics and its uses in teaching. He wrote a textbook, History of Mathematics: An Introduction (1993), for which he won the Watson Davis and Helen Miles Davis Prize in 1995. He organized workshops and congresses for the Mathematical Association of America (MAA) and the National Council of Teachers of Mathematics. The MAA published a collection of teaching materials by Katz as a compact disk with the title Historical Modules for the Teaching and Learning of Mathematics. With Frank Swetz, he was a founding editor of a free online journal on the history of mathematics under the aegis of the MAA; the journal is called Convergence: Where Mathematics, History, and Teaching Interact.[3] In the journal Convergence, Katz and Swetz published a series Mathematical Treasures.[4][5] For a study of the possibilities for using mathematical history in schools, Katz received a grant from the National Science Foundation. Personal He has been married since 1969 to Phyllis Katz (née Friedman), a science educator who developed and directed the U.S. national nonprofit organization Hands On Science Outreach, Inc. (HOSO). The couple have three children. Selected publications As author • History of Mathematics: An Introduction, New York: Harper Collins, 1993, 3rd edition Pearson 2008 (a shortened edition was published in 2003 by Pearson) • with Karen Hunger Parshall: Taming the Unknown: A History of Algebra from Antiquity to the Early Twentieth Century, Princeton University Press 2014[6][7] • with John B. Fraleigh: A first course in abstract algebra, Addison-Wesley 2003 As editor • The Mathematics of Egypt, Mesopotamia, China, India and Islam: A Sourcebook, Princeton University Press 2007[8] • with Bengt Johansson, Frank Swetz, Otto Bekken, John Fauvel: Learn from the Masters, MAA 1994 (contributions by Katz: Historical ideas in teaching linear algebra; Napier's logarithms adapted for today's classroom) • Using History to Teach Mathematics: An International Perspective, MAA 2000, MAA Notes (No. 51)[9][10] • with Marlow Anderson, Robin Wilson: Sherlock Holmes in Babylon and other Tales of Mathematical History (a collection of reprints from the MAA journal Mathematics Magazine; contribution by Katz: Ideas of calculus in Islam and India), MAA 2004[11] • with Marlow Anderson, Robin Wilson: Who gave you the epsilon?
and other tales of mathematical history, MAA 2009 (continuation of the collection of essays on the history of mathematics from MAA journal; contribution by Katz: The history of Stokes' theorem)[12] • with Constantinos Tzanakis: Recent Developments on Introducing a Historical Dimension in Mathematics Education, MAA 2011 References 1. biographical information from American Men and Women of Science, Thomson Gale 2004 2. Victor J. Katz at the Mathematics Genealogy Project 3. MAA, Convergence 4. Katz, Swetz, Mathematical Treasures, Omar Khayyam's Algebra 5. Katz, V. J.; Swetz, F. (March 2011). "Mathematical Treasures" (PDF). HPM Newskletter. No. 76. pp. 2–4. 6. Jongsma, Calvin (26 February 2015). "Review of Taming the Unknown: A History of Algebra from Antiquity to the Early Twentieth Century by Victor J. Katz and Karen Hunger Parshall". MAA Reviews, Mathematical Association of America. 7. Chen, Jiang-Ping Jeff (March 2015). "Review of Taming the Unknown: A History of Algebra from Antiquity to the Early Twentieth Century by Victor J. Katz and Karen Hunger Parshall". The College Mathematics Journal. 46 (2): 149–152. doi:10.4169/college.math.j.46.2.149. S2CID 218544510. 8. Montelle, Clemency (2015). "Review of The Mathematics of Egypt, Mesopotamia, China, India, and Islam: A Sourcebook ed. by Victor J. Katz". Aestimatio: Critical Reviews in the History of Science. 4: 179–191. doi:10.33137/aestimatio.v4i0.25818. 9. Sandifer, Ed (3 January 2001). "Review of Using History to Teach Mathematics: An International Perspective by Victor J. Katz". MAA Reviews, Mathematical Association of America. 10. Deakin, Michael A. B. (2001). "Review of Using History to Teach Mathematics: An International Perspective ed. by Victor J. Katz" (PDF). Zentralblatt für Didaktik der Mathematik. 33 (5): 137–138. doi:10.1007/BF02656618. 11. Gouvêa, Fernando Q. (2015). "Review of Sherlock Holmes in Babylon and Other Tales of Mathematical History ed. by Marlow Anderson, Victor Katz, and Robin Wilson". Aestimatio: Critical Reviews in the History of Science. 2: 67–79. doi:10.33137/aestimatio.v2i0.25743. 12. Davis, Philip J. (18 October 2009). "Review of Who Gave You the Epsilon? & Other Tales of Mathematical History ed. by Marlow Anderson, Victor Katz, and Robin Wilson". SIAM News, Society for Industrial and Applied Mathematics. External links • Biography from the MAA Authority control International • ISNI • VIAF National • Norway • France • BnF data • Catalonia • Germany • Israel • Belgium • United States • Sweden • Japan • Czech Republic • Korea • Netherlands Academics • MathSciNet • Mathematics Genealogy Project • zbMATH Other • IdRef
Victor Kolyvagin
Victor Alexandrovich Kolyvagin (Russian: Виктор Александрович Колывагин; born 11 March 1955) is a Russian mathematician who wrote a series of papers on Euler systems, leading to breakthroughs on the Birch and Swinnerton-Dyer conjecture and the main conjecture of Iwasawa theory for cyclotomic fields.[1] His work also influenced Andrew Wiles's work on Fermat's Last Theorem.[2][3]
Nationality: Russian
Alma mater: Moscow State University
Fields: Mathematics
Institutions: Johns Hopkins University, CUNY
Doctoral advisor: Yuri Manin
Career
Kolyvagin received his Ph.D. in mathematics in 1981 from Moscow State University,[4] where his advisor was Yuri I. Manin. He then worked at the Steklov Institute of Mathematics in Moscow[2] until 1994. Since 1994 he has been a professor of mathematics in the United States. He was a professor at Johns Hopkins University until 2002, when he became the first person to hold the Mina Rees Chair in mathematics at the Graduate Center of the City University of New York.[4][5]
Awards
In 1990 he received the Chebyshev Prize of the USSR Academy of Sciences.[4]
References
1. Rubin, Karl (2000). Euler Systems. Annals of Mathematics Studies. Princeton University Press. ISBN 0-691-05075-9.
2. Cipra, Barry (1993). "Fermat Proof Hits a Stumbling Block". Science. American Association for the Advancement of Science. 262 (5142): 1967–8. Bibcode:1993Sci...262.1967C. doi:10.1126/science.262.5142.1967-a. JSTOR 2882956.
3. Cipra, Barry A. (January 6, 1989). "Getting a Grip on Elliptic Curves". Science. New Series. American Association for the Advancement of Science. 243 (4887): 30–31. doi:10.1126/science.243.4887.30. JSTOR 1703169. PMID 17780417. Archived from the original on 2022-11-04.
4. Arenson, Karen W. (August 7, 2002). "Benefactor's Chair Filled at CUNY". The New York Times.
5. Targeted News Service (2009-12-22). "NSF Invests a Million Dollars in Number Theory at the CUNY Graduate Center".
External links
• Victor Kolyvagin at the Mathematics Genealogy Project
• Kolyvagin's Biography