Mary and Jenna are playing a game in which each girl rolls a twenty-sided die with numbers from 1 to 20 on the faces. If the number on the die that Mary rolls is a factor of the number on the die that Jenna rolls, Mary wins. If the number on the die that Jenna rolls is a factor of the number on the die that Mary rolls, Jenna wins. For how many possible rolls would both girls win?
We are trying to determine how many possible combinations of two numbers between 1 and 20 would make it so that the first number is a factor of the second number and the second number is a factor of the first number. We know that all of the positive factors of a positive number, except for the number itself, are less than the number. Therefore, if Jenna's number is greater than Mary's number, Jenna's number cannot be a divisor of Mary's number, and Jenna cannot win.
Likewise, if Jenna's number is less than Mary's number, Mary's number cannot be a divisor of Jenna's number, and Mary cannot win. If Jenna's number is equal to Mary's number, each girl's number is a factor of the other's, because any number is a factor of itself. Thus, for both girls to win, they must roll the same number. Since there are 20 numbers on the dice, there are $\boxed{20}$ rolls for which both girls would win.
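A quick brute-force check (an illustrative Python sketch, not part of the original solution) confirms the count:

# Count ordered pairs (m, j) with 1 <= m, j <= 20 in which each roll divides the other.
count = sum(1 for m in range(1, 21)
              for j in range(1, 21)
              if j % m == 0 and m % j == 0)
print(count)  # prints 20: m | j and j | m together force m == j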
Volume 63, Number 7, October-November 2006
https://doi.org/10.1051/forest:2006048
References of Ann. For. Sci. 63 (2006) 673-685
Becker W.A., Manual of quantitative genetics, Academic Enterprises, Pullman, Washington, 1984.
Bendtsen B.A., Senft J., Mechanical and anatomical properties in individual growth rings of plantation-grown eastern cottonwood and loblolly pine, Wood Fiber Sci. 18 (1986) 23-38.
Bendtsen B.A., Maeglin R.R., Deneke F., Comparison of mechanical and anatomical properties of eastern cottonwood and Populus hybrid NE-237, Wood Sci. 14 (1981) 1-14.
Beaudoin M., Hernandez R.E., Koubaa A., Poliquin J., Interclonal, intraclonal and within-tree variation in wood density of poplar hybrid clones, Wood Fiber Sci. 24 (1992) 147-153.
Cornelius J., Heritabilities and additive genetic coefficients of variation in forest trees, Can. J. For. Res. 24 (1994) 372-379.
Dadswell H.E., Fielding J.M., Nicholls J.W., Brown A.G., Tree to tree variations and the gross heritability of wood characteristics of Pinus radiata, TAPPI 44 (1961) 174-179.
Danusevicius D., Lindgren D., Efficiency of selection based on phenotype, clone and progeny testing in long-term breeding, Silvae Genet. 51 (2002) 19-26.
DeBell D.S., Singleton R., Harrington C.A., Gartner B.L., Wood density and fiber length in young Populus stems: relation to clone, age, growth rate, and pruning, Wood Fiber Sci. 34 (2002) 529-539.
Falconer D.S., Introduction to quantitative genetics, 3rd ed., Longman, London and New York, 1989.
Farmer R.E. Jr., Variation and inheritance of eastern cotton wood growth and properties under two soil moisture regimes, Silvae Genet. 19 (1970) 5-8.
Farmer R.E. Jr., Genetic variation among open-pollinated progeny of eastern cottonwood, Silvae Genet. 19 (1970) 149-151.
Farmer R.E. Jr., Wilcox J.R., Specific gravity variation in a lower Mississippi valley cottonwood population, TAPPI 49 (1966) 210-211.
Farmer R.E. Jr., Wilcox J.R., Preliminary testing of eastern cottonwood clones, Theor. Appl. Genet. 38 (1968) 197-201 [CrossRef].
Gonzalez J.S., Richards J., Early selection for wood density in young coastal Douglas-fir trees, Can. J. For. Res. 18 (1988) 1182-1185.
Hannrup B., Ekberg I., Age-age correlations for tracheid length and wood density in Pinus sylvestris, Can. J. For. Res. 28 (1998) 1373-1379 [CrossRef].
Hernandez R.E., Koubaa A, Beaudoin M., Fortin Y., Selected mechanical properties of fast-growing poplar hybrid clones, Wood Fiber Sci. 30 (1998) 138-147.
Hodge G.R., White T., Powell G., Genetics of wood density characteristics in slash pine, in: Coop. For. Gen. Res. Prog. 34th Prog. Rep., Univ. Florida, Gainesville, FL, 1992, pp. 12-20.
Holt D.H., Murphey W.K., Properties of hybrid poplar juvenile wood affected by silvicultural treatments, Wood Sci. 10 (1978) 198-203.
Houle D., Comparing evolvability and variability of quantitative traits, Genetics 130 (1992) 195-204 [PubMed].
Hylen G., Age trends in genetic parameters of wood density in young Norway spruce, Can. J. For. Res. 29 (1999) 135-143 [CrossRef].
Ilstedt B., Gullberg U., Genetic variation in a 26-year old hybrid aspen trial in southern Sweden, Scand. J. For. Res. 8 (1993) 185-192.
Ivkovich M., Genetic variation of wood properties in balsam poplar (Populus balsamifera L.), Silvae Genet. 45 (1996) 119-124.
Ivkovich M., Namkoong G., Koshy M., Genetic variation in wood properties of interior spruce. I. Growth, latewood percentage, and wood density, Can. J. For. Res. (2002) 2116-2117.
Johnson L.P., Studies on the relation of growth rate to wood quality in Populus hybrids. Can. J. Res. 20 (1942) 28-40.
Koga S., Zhang S.Y., Relationships between wood density and annual growth rate components in balsam fir (Abies balsamea), Wood Fiber Sci. 34 (2002) 146-157.
Koubaa A., Hernandez R.E., Beaudoin M., Shrinkage of fast-growing hybrid poplar clones, For. Prod. J. 48 (1998) 82-87.
Larson P.R., Silvicultural control of the characteristics of wood used for furnish, in: Proc. 4th TAPPI For. Biol. Conf., New York, 1967, pp. 143-150.
Louzada J.L.P.C., Fonseca F.M.A., The heritability of wood density components in Pinus pinaster Ait., and the implications for tree breeding, Ann. For. Sci. 59 (2002) 867-873 [EDP Sciences] [CrossRef].
Matyas C., Peszlen I., Effect of age on selected wood quality traits on poplar clones, Silvae Genet. 46 (1997) 64-72.
Megraw R.A., Wood quality factors in loblolly pine, TAPPI Press, Atlanta, Georgia, 1985, 89 p.
Mutibaric J., Comparative qualitative relationships of wood properties of Euramerican poplars, Silvae Genet. 20 (1971) 199-204.
Nepveu G., Barneoud C., Polge H., Aubert M., Variabilité clonale des contraintes de croissance et de quelques autres propriétés du bois dans le genre Populus, in: Fiabilité de l'appréciation de la qualité du bois à l'aide de carottes de sondage, Annales de Recherches Sylvicoles, AFOCEL, France, 1986, pp. 337-357.
Nepveu G., Keller R., Teissier du Cros E., Sélection juvénile pour la qualité du bois chez certains peupliers noirs, Ann. Sci. For. 35 (1978) 69-92.
Okkonen E.A., Wahlgren H.E., Maeglin R.R., Relationships of specific gravity to tree height in commercially important species, For. Prod. J. 22 (1972) 37-41.
Olson J.R., Jourdain C.R., Rousseau R.J., Selection for cellulose content, specific gravity and volume in young Populus deltoides clones, Can. J. For. Res. 15 (1985) 393-396.
Phelps J.E., Isebrands J.G., Jowett D., Raw material quality of short-term, intensively cultured Populus clones. I. A comparison of stem and branch properties at three spacing, IAWA Bull. n.s. 3 (1982) 193-200.
Pliura A., Yu Q., Zhang S.Y., MacKay J., Périnet P., Bousquet J., Variation in wood density and shrinkage and their relationship to growth of selected young poplar hybrid crosses, For. Sci. 51 (2005) 472-482.
Posey C.E., Bridgewater F.E., Buxton J.A., Natural variation in specific gravity, fiber length, and growth rate of eastern cottonwood in the southern Great Plains, TAPPI 52 (1969) 1508-1511.
Randall W.K., Cooper D.T., Predicted genotypic gains from cottonwood clonal tests, Silvae Genet. 22 (1973) 165-167.
Richardson C.J, Koerper G.J., The influence of edaphic characteristics and clonal variation on quantity and quality of wood production in Populus grandidentata in the Great Lakes region of the USA, Mitteil Forstl. Bundes-Versuchsanstalt Wien 142 (1981) 271-292.
Riemenschneider D.E., Berguson W.E., Dickmann D.I., Hall R.B., Isebrands J.G., Mohn C.A., Stanosz G.C., Tuskan G.A., Poplar breeding and testing strategies in the north-central USA: Demonstration of potential yield and consideration of future research needs, For. Chron. 77 (2001) 245-253.
SAS Institute Inc., SAS/STAT User's guide, Vers. 8, SAS Institute Inc., Cary, NC, USA, 1999.
Shelbourne C.J.A., Genotype-environment interaction: its study and its implications in forest tree improvement, in: Proc. of Joint Symposia for the Advancement of Forest Tree Breeding of the Genetics Subject Group, IUFRO, and Section 5, Forest Trees, SABRO, Gov. Forest Exp. Station of Japan, Tokyo, 1972, pp. B-1 I1-I28.
Stener L.-G., Analys av fiberegenskaper för kloner av hybridasp, Arbetsrapport nr 387, SkogForsk, Uppsala, 1998 (in Swedish).
Swiger L.A., Harvey W.R., Everson D.O., Gregory K.E., The variance of intra-class correlation involving groups with one observation, Biometrics 20 (1964) 818-826.
Vargas-Hernandez J., Adams W.T., Genetic variation of wood density components in young coastal Douglas-fir: implications for tree breeding, Can. J. For. Res. 21 (1991) 1801-1807.
Yu Q., Pulkkinen P., Rautio M., Haapanen M., Alen R., Stener L.G., Beuker E., Tigerstedt P.M.A., Genetic control of wood physiochemical properties, growth and phenology in hybrid aspen clones, Can. J. For. Res. 31 (2001) 1348-1356 [CrossRef].
Zhang S.Y., Morgenstern E.K., Genetic variation and inheritance of wood density in black spruce (Picea mariana) families and its relationship with growth: implications for tree breeding, Wood Sci. Technol. 30 (1995) 63-75 [CrossRef].
Zhang S.Y., Zhong Y., Effect of growth rate on specific gravity of East-Liaoning oak (Quercus liaotungensis) wood, Can. J. For. Res. 21 (1991) 255-260.
Zhang S.Y., Nepveu G., Eyono Owoundi R., Intratree and intertree variation in selected wood quality characteristics of European oak (Quercus petraea and Quercus robur), Can. J. For. Res. 24 (1994) 1818-1823.
Zhang S.Y., Yu Q., Chauret G., Koubaa A., Selection for both growth and wood properties in hybrid poplar clones, For. Sci. 49 (2003) 901-908.
Zobel B.J., van Buijtenen J.P., Wood variation, Springer-Verlag Berlin Heidelberg, Germany, 1989.
Zobel B.J., Jett J.B., Genetics of wood production, Springer-Verlag, Berlin Heidelberg, Germany, 1995.
Who is Danica McKellar? Net Worth, Age, Height, Bio, Husband, Movies, Family, Tv Series, Math Books
August 19, 2022, 10:02 pm
Danica McKellar Net Worth, Age, Height, Bio: Danica McKellar is an American actress, mathematics writer, and education advocate known for her exceptional acting skills and roles. She gained recognition for starring as Winnie Cooper in ABC's TV series ''The Wonder Years'' from 1988 to 1993. She is also famous for her appearance in the Netflix original series ''Project Mc2''. Along with this, she has worked in multiple movies and TV series. Moreover, Danica has published several non-fiction books, including ''Math Doesn't Suck''.
Danica McKellar Biography / Wiki
Danica McKellar was born on January 3, 1975, in La Jolla, California, United States. She moved with her family to Los Angeles when she was 8 years old. She holds American nationality by birth and follows Christianity. As for her education, she received a Bachelor of Science (B.Sc.) degree (summa cum laude) in Analytical Mathematics in 1998 from the University of California, Los Angeles (UCLA), where she was a member of the Alpha Delta Pi sorority.
As an undergraduate, she coauthored a scientific paper with Professor Lincoln Chayes and fellow student Brandy Winn titled "Percolation and Gibbs states multiplicity for ferromagnetic Ashkin–Teller models on Z²." At the age of 7, she started learning acting at the Lee Strasberg Institute.
Danica McKellar Age / Birthday
Danica McKellar is 47 years old (as of 2022). Her zodiac sign is Capricorn.
Danica McKellar with actress Jen Lilley
Danica McKellar Height and Weight
As for Danica McKellar's physical appearance, she is about 5 feet 7 inches tall and weighs about 58 kg. Her hair is brown, and her eyes are brown as well. She has a stunning and captivating physique.
Danica McKellar The Wonder Years
In 1988, Danica got her big breakthrough when she was cast as the main teen character, Gwendolyn "Winnie" Cooper, in ABC's teen comedy-drama series ''The Wonder Years''. She appeared in the series as a child actress until 1993. Her role was well liked by audiences and received a good response from critics. The other leads of the series were Fred Savage and Alley Mills, and the show became a commercial and critical success.
Danica McKellar Movies and Tv Series
Danica began her acting career as a child actress in 1985, at the age of 10. She first appeared as Nola in the TV series ''The Twilight Zone''. She was then cast in ''The Wonder Years'' and gained a lot of popularity, earning her several guest roles. Between 2002 and 2003, she played the recurring role of Elsie Snuffin in season 4 of the NBC series "The West Wing". She has also played the recurring role of Maddie Monroe in the TV series ''Inspector Mom''.
Danica McKellar with actress Melissa Joan Hart
Besides this, Danica has lent her voice to animated TV series such as ''Static Shock'', ''Game Over'', ''Young Justice'', and ''DC Super Hero Girls''. In 2015, she served as a judge for the first time, on the American reality competition show ''King of the Nerds''. She later served as a judge for ''Miss America 2016''. In addition, she appeared as a judge on the reality show ''Domino Masters'' in March 2022.
Beyond TV series, Danica made her movie debut in 1992 with ''Sidekicks'', directed by Aaron Norris. Her next role came in the movie ''Good Neighbor'', in which she played the lead role of Molly Wright alongside Billy Dee Williams and Tobin Bell. Since then, Danica has been cast in several movies, such as "Jane White Is Sick & Twisted", "Raising Genius", "Where Hope Grows", "The Fiddling Horse", and "Lego DC Super Hero Girls: Super-Villain High".
Danica McKellar with actress Candace Cameron Bure
In addition, Danica has been featured in two music videos, Debbie Gibson's "No More Rhyme" (1989) and Avril Lavigne's "Rock n Roll". She has also lent her voice to video games, including ''X-Men Legends'', ''EverQuest II'', ''Marvel: Ultimate Alliance'', and ''Young Justice: Legacy''.
Danica McKellar Books
Apart from the acting industry, Danica has contributed to the field of education. As a mathematics writer, she has published a number of books on the subject. In 2007, she published her first book, ''Math Doesn't Suck: How to Survive Middle School Math without Losing Your Mind or Breaking a Nail'', which became a New York Times bestseller and was reviewed by Tara C. Smith and Anthony Jones.
In an interview, she said that she wrote the book "to show girls that math is accessible and relevant, and even a little glamorous" and to counteract "damaging social messages telling young girls that math and science aren't for them". On 5 August 2008, she released her second book, ''Kiss My Math: Showing Pre-Algebra Who's Boss'', written mainly for girls in grades 7 through 9. Her third book, ''Hot X: Algebra Exposed!'', was published in August 2010.
Danica McKellar with author Carlos Whittaker
Her fourth book, ''Girls Get Curves – Geometry Takes Shape'', focuses on geometry. Three of Danica's books have been selected for The New York Times children's bestseller list. She also earned Mathical Honors for ''Goodnight, Numbers'' and received the Joint Policy Board for Mathematics (JPBM) Communications Award in January 2014 for her public appearances, books, and blogs.
Danica McKellar Net Worth
Danica McKellar's current net worth is estimated to be more than $7.2 million USD. Her income comes from a variety of sources, such as acting, writing, advocacy, commercials, brand sponsorships, modeling, business ventures, and social media platforms. As her career remains active, her earnings can be expected to grow in the coming years.
Net Worth $7.2 Million USD
Danica McKellar Family / Siblings
Danica's father, Christopher McKellar, is a real estate developer, and her mother, Mahaila McKellar (née Tello), was a homemaker. Danica has a younger sister, Crystal McKellar, who is a corporate lawyer and actress by profession. Her mother is of Portuguese descent, via the Azores and Madeira islands, while her father's ancestry is Scottish, French, German, Spanish, and Dutch. Her parents divorced in 1982, and her father later married Molly, with whom he has two sons, Christopher and Connor.
Danica McKellar with her sister Crystal McKellar
Danica McKellar Husband / Boyfriend
Who is the husband of the American actress Danica McKellar? In 2001, Danica started dating composer Michael "Mike" Verta, and she tied the knot with him on 22 March 2009. In 2010, a year after their marriage, they welcomed their first son, Draco. Unfortunately, the marriage didn't last long, and they divorced in June 2012.
After her separation from Verta, Danica was in a relationship with Scott Sveslosky, a partner at the Los Angeles law firm Sheppard, Mullin, Richter & Hampton. They got engaged on 16 July 2014 and married on 15 November of the same year in Kauai, Hawaii. Danica has featured her present and former husbands, as well as her son, on her Instagram account.
Danica McKellar with her son Draco
Danica McKellar and Celesti Bairagey are both professional actresses and are active on social media, especially on Instagram.
Danica hails from La Jolla, California, United States.
She is a prominent actress, mathematics writer, and education advocate.
Danica voiced Jubilee in the video game "X-Men Legends" in 2004.
In the July 2005 edition of 'Stuff' magazine, Danica was featured in lingerie.
Through her books, she tried to motivate teenage girls to develop an interest in mathematics.
She has racked up thousands of followers on Instagram by sharing pictures, mostly with celebrities and co-stars.
Danica looks stunning, captivating, and attractive.
She loves traveling and has traveled to many scenic places such as Russia, Italy, London, and Paris.
She is an animal lover, especially likes dogs and cats.
Does Danica McKellar consume alcohol? Not revealed
Does she smoke? Not revealed
Does she drive? Yes
Does she swim? Yes
Does she know how to cook? No
Is she a yoga practitioner? No
Does she go to the gym? Yes
Is Danica McKellar a jogger? Yes
Eating habits? Not disclosed
Who is Danica McKellar?
Danica McKellar is an American actress, mathematics writer, and education advocate, best known for her role as Winnie Cooper in ABC's TV series ''The Wonder Years'' from 1988 to 1993. She is also famous for her appearance in the Netflix original series ''Project Mc2''. Moreover, Danica has published several non-fiction books, including ''Math Doesn't Suck''.
Is Danica McKellar Married?
Yes. On 15 November 2014, she tied the knot with Scott Sveslosky; she was previously married to her long-time boyfriend Michael "Mike" Verta.
What is the age of Danica McKellar?
The age of Danica McKellar is 47 years (as of 2022).
When is the Birthday of Danica McKellar?
Danica McKellar's birthday is on January 3, 1975.
What is the zodiac sign of Danica McKellar?
The zodiac sign of Danica McKellar is Capricorn.
How tall is Danica McKellar?
Danica McKellar is 5′ 7" tall.
Where is Danica McKellar from?
Danica McKellar is from La Jolla, California, United States.
How much is the net worth of Danica McKellar?
Danica McKellar's net worth in 2022 is estimated to be more than $7.2 million USD.
Optimization Seminars
Center for Mathematical Modeling – U. de Chile
On the construction of maximal p-cyclically monotone operators
14 January 2021
Speaker: Professor Orestes Bueno
Universidad del Pacífico, Lima, Perú
Date: January 20, 2021 at 10:00 (Chilean-time)
Title: On the construction of maximal p-cyclically monotone operators
Abstract: In this talk we deal with the construction of explicit examples of maximal p-cyclically monotone operators. To date, there is only one instance of an explicit example of a maximal 2-cyclically monotone operator that is not maximal monotone. We present several other examples, and a proposal of how such examples can be constructed.
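For background, the standard notion (recalled here for context; conventions on the index $p$ vary slightly in the literature) is the following: an operator $T\colon X\rightrightarrows X^{*}$ is $p$-cyclically monotone when $\sum_{i=1}^{p}\langle x_{i}^{*},x_{i+1}-x_{i}\rangle\leq 0$ for all $x_{1},\dots,x_{p}$ with $x_{p+1}=x_{1}$ and all $x_{i}^{*}\in T(x_{i})$. Maximality is then understood with respect to graph inclusion within this class.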
A recorded video of the conference is …; the slides can be downloaded here
Venue: Online via Google Meet http://meet.google.com/mqh-bgjv-iyb
A brief biography of the speaker: Orestes Bueno is an Associate Professor at Universidad del Pacífico, Lima, Perú. He obtained his PhD at the Instituto de Matemática Pura e Aplicada (IMPA), Brazil, in 2012. His main interests are: Maximal Monotone Operators, Generalized Convexity and Monotonicity, Functional Analysis.
Coordinators: Fabián Flores-Bazán (Universidad de Concepción) and Abderrahim Hantoute (CMM).
On diametrically maximal sets, maximal premonotone maps and premonotone bifunctions
13 November 2020
Speaker: Professor Wilfredo Sosa
Graduate Program of Economics, Catholic University of Brasilia, Brazil
Date: November 18, 2020 at 10:00
Title: On diametrically maximal sets, maximal premonotone maps and premonotone bifunctions
Abstract: First, we study diametrically maximal sets in the Euclidean space (those which are not properly contained in a set with the same diameter), establishing their main properties. Then, we use these sets for exhibiting an explicit family of maximal premonotone operators. We also establish some relevant properties of maximal premonotone operators, like their local boundedness, and finally we introduce the notion of premonotone bifunctions, presenting a canonical relation between premonotone operators and bifunctions, that extends the well known one, which holds in the monotone case.
Venue: Online via Google Meet meet.google.com/tam-ddhj-psx
A brief biography of the speaker: Wilfredo Sosa is a professor in the Graduate Program of Economics at the Catholic University of Brasilia, Brazil. He graduated from the Universidad de Ingeniería in Lima, Peru, and was trained at IMPA in Rio de Janeiro, Brazil. He is a co-founder of IMCA in Lima, Peru, and a full member of the Peruvian Academy of Sciences. Areas of interest: optimization theory; duality theory; equilibrium theory; mathematical economics.
Coordinators: Fabián Flores-Bazán (Universidad de Concepción) and Abderrahim Hantoute (CMM)
An algebraic view of the smallest strictly monotonic function
31 October 2020
Speaker: Professor César Gutiérrez
IMUVA (Mathematics Research Institute of the University of Valladolid), Valladolid, Spain
Title: An algebraic view of the smallest strictly monotonic function
Abstract: The talk concerns one of the most popular functions used to derive nonconvex separation results. Complete characterizations of both its level sets and basic properties such as monotonicity and convexity are provided in terms of its parameters. Most of these characterizations hold without any additional requirement or assumption. Finally, as an application, a vectorial form of the Ekeland variational principle is provided.
A recorded video of the conference is here ; the slides can be downloaded here
Venue: Online via Google Meet meet.google.com/tta-bhpu-raa
A brief biography of the speaker: César Gutiérrez (ORCID iD 0000-0002-8223-2088) is Professor at the University of Valladolid (Spain) and a researcher at the Mathematics Research Institute of the University of Valladolid (IMUVA). He is the author of 54 papers on several subjects related to vector and set-valued optimization. Currently, he is an Associate Editor of Optimization.
Principal-Agent problem in insurance: from discrete- to continuous-time
5 October 2020
Speaker: Doctor Nicolás Hernández
Center for Mathematical Modeling (CMM), Universidad de Chile, Santiago, Chile
Date: October 07, 2020 at 10:00
Title: Principal-Agent problem in insurance: from discrete- to continuous-time
Abstract: In this talk we present a contracting problem between an insurance buyer and a seller, subject to prevention efforts in the form of self-insurance and self-protection. We start with a static formulation, corresponding to an optimization problem with a variational inequality constraint, and extend the main properties of the optimal contract to the continuous-time formulation, corresponding to a stochastic control problem in weak form under non-singular measures.
A recorded video of the conference is here; the slides can be downloaded here
Venue: Online via Google Meet here
A brief biography of the speaker: Nicolás Hernández is currently a Postdoctoral Researcher at the Center for Mathematical Modeling (CMM) at Universidad de Chile. He obtained his PhD in 2017 in cotutelle between Université Paris-Dauphine and Universidad de Chile. His research interests are contract theory, stochastic control, mathematical finance, probability, optimization, and game theory.
Sigma-convex functions and Sigma-subdifferentials
14 September 2020
Speaker: Prof. Mohammad Hossein Alizadeh
Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran
Date: September 23, 2020 at 10:00
Title: Sigma-convex functions and Sigma-subdifferentials
Abstract: In this talk we present and study the notion of the $\sigma$-subdifferential of a proper function $f$, which contains the Clarke–Rockafellar subdifferential of $f$ under some mild assumptions on $f$.
We show that some well-known properties of convex functions, namely the Lipschitz property in the interior of the domain, remain valid for the large class of $\sigma$-convex functions.
Venue: Online via Google Meet meet.google.com/uoq-kifr-nsg
A brief biography of the speaker: Mohammad Hossein Alizadeh is an Assistant Professor at the Institute for Advanced Studies in Basic Sciences (IASBS), Zanjan, Iran. He obtained his Ph.D. from the University of the Aegean, Greece, in 2012. He is mainly interested in the following areas: monotone and generalized monotone operators, monotone and generalized monotone bifunctions, generalized convexity, and generalized inverses.
Coordinators: Abderrahim Hantoute (CMM) and Fabián Flores-Bazán (Universidad de Concepción)
Generalized Newton Algorithms for Tilt-Stable Minimizers in Nonsmooth Optimization
26 August 2020
Speaker: Prof. Boris Mordukhovich
Distinguished University Professor of Mathematics Wayne State University
Date: September 2, 2020 at 10:00
Title: Generalized Newton Algorithms for Tilt-Stable Minimizers in Nonsmooth Optimization
Abstract: This talk aims at developing two versions of the generalized Newton method to compute local minimizers for nonsmooth problems of unconstrained and constrained optimization that satisfy an important stability property known as tilt stability. We start with unconstrained minimization of continuously differentiable cost functions having Lipschitzian gradients and suggest two second-order algorithms of the Newton type: one involving coderivatives of Lipschitzian gradient mappings, and the other based on graphical derivatives of the latter. Then we proceed with the propagation of these algorithms to minimization of extended-real-valued prox-regular functions, covering in this way problems of constrained optimization, by using Moreau envelopes. Employing advanced techniques of second-order variational analysis and characterizations of tilt stability allows us to establish the solvability of subproblems in both algorithms and to prove the Q-superlinear convergence of their iterations. Based on joint work with Ebrahim Sarabi (Miami University, USA).
Venue: Online via Google Meet meet.google.com/gyf-mpcb-tre
A brief biography of the speaker: Prof. Boris Mordukhovich was born and educated in the former Soviet Union. He received his PhD from the Belarus State University (Minsk) in 1973. He is currently a Distinguished University Professor of Mathematics at Wayne State University. Mordukhovich is an expert in optimization, variational analysis, generalized differentiation, optimal control, and their applications to economics, engineering, behavioral sciences, and other fields. He is the author or co-author of many papers and 5 monographs in these areas. Prof. Mordukhovich is an AMS Fellow, a SIAM Fellow, and a recipient of many international awards and honors, including Doctor Honoris Causa degrees from 6 universities worldwide. He was the Founding Editor (2008) and a co-Editor-in-Chief (2009-2014) of Set-Valued and Variational Analysis, and is now an Associate Editor of many high-ranked journals including SIAM J. Optimization, JOTA, and JOGO. In 2016 he was elected to the Accademia Peloritana dei Pericolanti (Italy). Prof. Mordukhovich is on the list of Highly Cited Researchers in Mathematics.
An overview of Sweeping Processes with applications
Speaker: Prof. Emilio Vilches
Instituto de Ciencias de la Ingeniería, Universidad de O'Higgins, Rancagua, Chile
Date: August 26, 2020 at 10:00
Title: An overview of Sweeping Processes with applications
Abstract: The Moreau's Sweeping Process is a first-order differential inclusion, involving the normal cone to a moving set depending on time. It was introduced and deeply studied by J.J. Moreau in the 1970s as a model for an elastoplastic mechanical system. Since then, many other applications have been given, and new variants have appeared. In this talk, we review the latest developments in the theory of sweeping processes and its variants. We highlight open questions and provide some applications.
This work has been supported by ANID-Chile under project Fondecyt de Iniciación 11180098.
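For readers new to the model, the classical formulation (standard in the literature, recalled here only for context) is the differential inclusion $-\dot{u}(t)\in N_{C(t)}(u(t))$ with $u(0)=u_{0}\in C(0)$, where $N_{C(t)}$ denotes the normal cone to the moving closed convex set $C(t)$.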
The recorded video of the conference can be downloaded here
The slides of the conference can be downloaded here
Venue: Online via Google Meet https://meet.google.com/toh-nxch-fhb
A brief biography of the speaker: Prof. Emilio Vilches is Assistant Professor at Universidad de O'Higgins, Rancagua, Chile. He obtains his Ph.D. from the University of Chile and the University of Burgundy in 2017. He is mainly interested in the application of convex and variational analysis to nonsmooth dynamical systems.
Epi-convergence, asymptotic analysis and stability in set optimization problems
3 August 2020
Speaker: Prof. Rubén López
University of Tarapacá, Arica, Chile
Title: Epi-convergence, asymptotic analysis and stability in set optimization problems
Abstract: We study the stability of set optimization problems with data that are not necessarily bounded. To do this, we use the well-known notion of epi-convergence coupled with asymptotic tools for set-valued maps. We derive characterizations of this notion that allow us to study the stability of vector- and set-type solutions by considering variations of the whole data (feasible set and objective map). We extend the notion of total epi-convergence to set-valued maps.
* This work has been supported by Conicyt-Chile under project FONDECYT 1181368
Joint work with Elvira Hérnández, Universidad Nacional de Educación a Distancia, Madrid, Spain
Venue: Online via Google Meet – https://meet.google.com/hgo-zwkr-fvh
A brief biography of the speaker: Prof. Rubén López is Professor at the University of Tarapacá, Arica, Chile. He studied at Moscow State University – Mech-Math (1996, Russia) and Universidad de Concepción – DIM (2005, Chile). He works on optimization: asymptotic analysis, variational convergences, stability theory, approximate solutions, and well-posedness.
Satisfying Instead of Optimizing in the Nash Demand Games
15 July 2020
Speaker: Prof. Sigifredo Laengle
University of Chile, Santiago, Chile
Date: July 22, 2020 at 10:00
Abstract: The Nash Demand Game (NDG) was one of the first models (Nash 1953) that tried to describe the process of negotiation, competition, and cooperation. This model is still the subject of active research; in fact, it leaves open a set of questions regarding how agents optimally select their decisions and how they face uncertainty. However, agents act rather guided by chance and necessity, with a Darwinian flavor: satisfying instead of optimising. The Viability Theory (VT) takes this approach, and we therefore investigate the NDG from this point of view. In particular, we ask two questions: whether there are decisions in the NDG that ensure viability, and whether this set also contains Pareto and equilibrium strategies. Carrying out this work, we find that the answers to both questions are not only affirmative, but we also make progress in characterising viable NDGs. In particular, we conclude that a certain type of NDG ensures viability and equilibrium. Many interesting questions originate from this initial work. For example, is it possible to fully characterise the NDG by imposing viability conditions? Under what conditions does viability require cooperation? Is extreme polarisation viable?
Venue: Online via Google Meet – meet.google.com/jhb-umew-kwp
A brief biography of the speaker: Prof. Sigifredo Laengle has been an Associate Professor at the University of Chile since 2007. He received his PhD in Germany, working on the theoretical problem of the value of information in organisations. He has published articles that articulate phenomena of strategic interaction and optimisation.
Coordinators: Abderrahim Hantoute and Fabián Flores-Bazán (Universidad de Concepción)
Enlargements of the Moreau-Rockafellar Subdifferential
13 July 2020
Speaker: Prof. Michel Théra
University of Limoges, France
Abstract: The Moreau-Rockafellar subdifferential is a highly important notion in convex analysis and optimization theory. But there are many functions which fail to be subdifferentiable at certain points. In particular, there is a continuous convex function defined on $\ell^2(\mathbb{N})$, whose Moreau–Rockafellar subdifferential is empty at every point of its domain. This talk proposes some enlargements of the Moreau-Rockafellar subdifferential: the sup$^\star$-subdifferential, sup-subdifferential and symmetric subdifferential, all of them being nonempty for the mentioned function. These enlargements satisfy the most fundamental properties of the Moreau–Rockafellar subdifferential: convexity, weak$^*$-closedness, weak$^*$-compactness and, under some additional assumptions, possess certain calculus rules. The sup$^\star$ and sup subdifferentials coincide with the Moreau–Rockafellar subdifferential at every point at which the function attains its minimum, and if the function is upper semi-continuous, then there are some relationships for the other points. They can be used to detect minima and maxima of arbitrary functions.
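For reference, the classical object of the talk (its standard definition, not specific to this abstract) is the set $\partial f(x)=\{x^{*}\in X^{*}: f(y)\geq f(x)+\langle x^{*},y-x\rangle \ \forall y\in X\}$; the enlargements mentioned above are designed to remain nonempty at points where this set is empty.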
The slides of the conference can be downloaded here.
Venue: Online via Google Meet – meet.google.com/unx-gcse-wkn
A brief biography of the speaker: Michel Théra is a French mathematician. He obtained his PhD from the Université de Pau et des Pays de l'Adour (1978) and his thèse d'Etat at the University of Panthéon-Sorbonne (1988). A former President of the French Society of Industrial and Applied Mathematics, he has also been Vice President of the University of Limoges in charge of International Cooperation. He is presently a professor emeritus of Mathematics in the Laboratory XLIM of the University of Limoges, where he retired as Professeur de classe exceptionnelle. He became Adjunct Professor of Federation University Australia, chairing there the International Academic Advisory Group of the Centre for Informatics and Applied Optimisation (CIAO). He is also scientific co-director of the International School of Mathematics "Guido Stampacchia" at the "Ettore Majorana" Foundation and Centre for Scientific Culture (Erice, Sicily). For several years, he was a member of the Committee for the Developing Countries of the European Mathematical Society and, after his term, became an associate member. His research focuses on variational analysis, convex analysis, continuous optimization, monotone operator theory and the interaction among these fields of research, and their applications. He has published 130 articles in international journals on various topics related to variational analysis, optimization, monotone operator theory and nonlinear functional analysis. He serves as editor for several journals on continuous optimization and was responsible for several international research programs until his retirement.
Coordinators: Abderrahim Hantoute and Fabián Flores-Bazán (DIM-UdeC)
Birnbaum–Saunders distribution
The Birnbaum–Saunders distribution, also known as the fatigue life distribution, is a probability distribution used extensively in reliability applications to model failure times. There are several alternative formulations of this distribution in the literature. It is named after Z. W. Birnbaum and S. C. Saunders.
Theory
This distribution was developed to model failures due to cracks. A material is placed under repeated cycles of stress. The jth cycle leads to an increase in the crack length by an amount Xj. The sum of the Xj is assumed to be normally distributed with mean nμ and variance nσ². The probability that the crack does not exceed a critical length ω is
$P(X\leq \omega )=\Phi \left({\frac {\omega -n\mu }{\sigma {\sqrt {n}}}}\right)$
where Φ() is the cdf of normal distribution.
If T is the number of cycles to failure then the cumulative distribution function (cdf) of T is
$P(T\leq t)=1-\Phi \left({\frac {\omega -t\mu }{\sigma {\sqrt {t}}}}\right)=\Phi \left({\frac {t\mu -\omega }{\sigma {\sqrt {t}}}}\right)=\Phi \left({\frac {\mu {\sqrt {t}}}{\sigma }}-{\frac {\omega }{\sigma {\sqrt {t}}}}\right)=\Phi \left({\frac {\sqrt {\mu \omega }}{\sigma }}\left[\left({\frac {t}{\omega /\mu }}\right)^{0.5}-\left({\frac {\omega /\mu }{t}}\right)^{0.5}\right]\right)$
The more usual form of this distribution is:
$F(x;\alpha ,\beta )=\Phi \left({\frac {1}{\alpha }}\left[\left({\frac {x}{\beta }}\right)^{0.5}-\left({\frac {\beta }{x}}\right)^{0.5}\right]\right)$
Here α is the shape parameter and β is the scale parameter.
Properties
The Birnbaum–Saunders distribution is unimodal with a median of β.
The mean (μ), variance (σ²), skewness (γ) and kurtosis (κ) are as follows:
$\mu =\beta \left(1+{\frac {\alpha ^{2}}{2}}\right)$
$\sigma ^{2}=(\alpha \beta )^{2}\left(1+{\frac {5\alpha ^{2}}{4}}\right)$
$\gamma ={\frac {4\alpha (11\alpha ^{2}+6)}{(5\alpha ^{2}+4)^{\frac {3}{2}}}}$
$\kappa =3+{\frac {6\alpha ^{2}(93\alpha ^{2}+40)}{(5\alpha ^{2}+4)^{2}}}$
Given a data set that is thought to be Birnbaum–Saunders distributed, the parameters' values are best estimated by maximum likelihood.
If T is Birnbaum–Saunders distributed with parameters α and β, then T⁻¹ is also Birnbaum–Saunders distributed with parameters α and β⁻¹.
Transformation
Let T be a Birnbaum-Saunders distributed variate with parameters α and β. A useful transformation of T is
$X={\frac {1}{2}}\left[\left({\frac {T}{\beta }}\right)^{0.5}-\left({\frac {T}{\beta }}\right)^{-0.5}\right]$.
Equivalently
$T=\beta \left(1+2X^{2}+2X(1+X^{2})^{0.5}\right)$.
X is then normally distributed with mean zero and variance α²/4.
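The transformation above gives a direct way to sample from the distribution. Below is a minimal numpy sketch (illustrative only, not part of the original article; the parameter values are arbitrary), which also checks the sample moments against the formulas in the Properties section:

import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.5, 2.0
# Draw X ~ N(0, alpha^2/4), then map to T = beta * (1 + 2*X^2 + 2*X*sqrt(1 + X^2)).
x = rng.normal(0.0, alpha / 2, size=100_000)
t = beta * (1 + 2 * x**2 + 2 * x * np.sqrt(1 + x**2))
print(t.mean(), beta * (1 + alpha**2 / 2))                   # sample vs. exact mean
print(t.var(), (alpha * beta)**2 * (1 + 5 * alpha**2 / 4))   # sample vs. exact variance
print(np.median(t), beta)                                    # the median is beta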
Probability density function
The general formula for the probability density function (pdf) is
$f(x)={\frac {{\sqrt {\frac {x-\mu }{\beta }}}+{\sqrt {\frac {\beta }{x-\mu }}}}{2\gamma \left(x-\mu \right)}}\phi \left({\frac {{\sqrt {\frac {x-\mu }{\beta }}}-{\sqrt {\frac {\beta }{x-\mu }}}}{\gamma }}\right)\quad x>\mu ;\gamma ,\beta >0$
where γ is the shape parameter, μ is the location parameter, β is the scale parameter, and $\phi $ is the probability density function of the standard normal distribution.
Standard fatigue life distribution
The case where μ = 0 and β = 1 is called the standard fatigue life distribution. The pdf for the standard fatigue life distribution reduces to
$f(x)={\frac {{\sqrt {x}}+{\sqrt {\frac {1}{x}}}}{2\gamma x}}\phi \left({\frac {{\sqrt {x}}-{\sqrt {\frac {1}{x}}}}{\gamma }}\right)\quad x>0;\gamma >0$
Since the general form of probability functions can be expressed in terms of the standard distribution, all of the subsequent formulas are given for the standard form of the function.
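A direct transcription of the standard pdf (an illustrative Python sketch; the function name is ours):

from math import exp, pi, sqrt

def bs_pdf(x, gamma):
    # Standard fatigue life pdf (mu = 0, beta = 1), valid for x > 0 and gamma > 0.
    phi = lambda u: exp(-u * u / 2) / sqrt(2 * pi)  # standard normal pdf
    return (sqrt(x) + sqrt(1 / x)) / (2 * gamma * x) * phi((sqrt(x) - sqrt(1 / x)) / gamma)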
Cumulative distribution function
The formula for the cumulative distribution function is
$F(x)=\Phi \left({\frac {{\sqrt {x}}-{\sqrt {\frac {1}{x}}}}{\gamma }}\right)\quad x>0;\gamma >0$
where Φ is the cumulative distribution function of the standard normal distribution.
Quantile function
The formula for the quantile function is
$G(p)={\frac {1}{4}}\left[\gamma \Phi ^{-1}(p)+{\sqrt {4+\left(\gamma \Phi ^{-1}(p)\right)^{2}}}\right]^{2}$
where Φ⁻¹ is the quantile function of the standard normal distribution.
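The quantile formula can be implemented directly (an illustrative sketch using scipy's norm.ppf for Φ⁻¹; the function name is ours):

from scipy.stats import norm

def bs_quantile(p, gamma):
    # G(p) = (1/4) * [gamma * Phi^{-1}(p) + sqrt(4 + (gamma * Phi^{-1}(p))^2)]^2
    z = gamma * norm.ppf(p)
    return 0.25 * (z + (4.0 + z**2) ** 0.5) ** 2

# Round trip through the cdf F(x) = Phi((sqrt(x) - 1/sqrt(x)) / gamma):
x = bs_quantile(0.7, 0.5)
print(norm.cdf((x**0.5 - x**-0.5) / 0.5))  # ~0.7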
References
• Birnbaum, Z. W.; Saunders, S. C. (1969), "A new family of life distributions", Journal of Applied Probability, 6 (2): 319–327, doi:10.2307/3212003, JSTOR 3212003, archived from the original on September 23, 2017
• Desmond, A.F. (1985), "Stochastic models of failure in random environments", Canadian Journal of Statistics, 13 (3): 171–183, doi:10.2307/3315148, JSTOR 3315148
• Johnson, N.; Kotz, S.; Balakrishnan, N. (1995), Continuous Univariate Distributions, vol. 2 (2nd ed.), New York: Wiley
• Lemonte, A. J.; Cribari-Neto, F.; Vasconcellos, K. L. P. (2007), "Improved statistical inference for the two-parameter Birnbaum–Saunders distribution", Computational Statistics and Data Analysis, 51: 4656–4681, doi:10.1016/j.csda.2006.08.016
• Lemonte, A. J.; Simas, A. B.; Cribari-Neto, F. (2008), "Bootstrap-based improved estimators for the two-parameter Birnbaum–Saunders distribution", Journal of Statistical Computation and Simulation, 78: 37–49, doi:10.1080/10629360600903882
• Cordeiro, G. M.; Lemonte, A. J. (2011), "The β-Birnbaum–Saunders distribution: An improved distribution for fatigue life modeling", Computational Statistics and Data Analysis, 55 (3): 1445–1461, doi:10.1016/j.csda.2010.10.007
• Lemonte, A. J. (2013), "A new extension of the Birnbaum–Saunders distribution", Brazilian Journal of Probability and Statistics, 27 (2): 133–149, doi:10.1214/11-BJPS160
External links
• Fatigue life distribution
This article incorporates public domain material from the National Institute of Standards and Technology.
\begin{document}
\title [Th{\'{e}}or{\`{e}}mes de r{\'{e}}ciprocit{\'{e}} ]{Reciprocity theorems for holomorphic representations of some infinite-dimensional groups\\[12pt]Quelques th{\'{e} }or{\`{e}}mes de r{\'{e}}ciprocit{\'{e}} pour les repr{\'{e}} sentations holomorphes irr{\'{e}} ductibles de certains groupes de dimension infinie} \author[Tuong Ton-That]{\parbox{\textwidth}{\centering Tuong Ton-That\\The University of Iowa\\Iowa City, IA 52242-1419 USA\\ {\texttt{[email protected]}} }} \subjclass{02.20.Tw, 02.20.Qs, 03.65.Fd} \begin{abstract} Let $\mu$ denote the \emph{Gaussian measure} on $\mathbb{C}^{n\times k}$ defined by $d\mu\left( Z\right) =\pi^{-nk}\exp\left[ -\operatorname *{Tr}\left( ZZ^{\dag}\right) \right] \,dZ$, where $\operatorname *{Tr}$ denotes the trace function, $Z^{\dag}=\bar{Z}^{T}$, and $dZ$ denotes the Lebesgue measure on $\mathbb{C}^{n\times k}$. Let $\mathcal{F}_{n\times k}$ denote the Barg\-mann--Segal--Fock space of holomorphic entire functions on $\mathbb{C}^{n\times k}$ which are also square-integrable with respect to $\mu$. Fix $n$ and let $\mathcal{F}_{n\times\infty}$ denote the Hilbert-space completion of the \emph{inductive limit} $\lim_{k\rightarrow\infty}\mathcal {F}_{n\times k}$. Let $G_{k}$ and $H_{k}$ be compact groups such that $H_{k}\subset G_{k}\subset\mathrm{GL}_{k}\left( \mathbb{C}\right) $. Let $G_{\infty}$ (resp.\ $H_{\infty}$) denote the inductive limit $\bigcup _{k=1}^{\infty}G_{k}$ (resp.\ $\bigcup_{k=1}^{\infty}H_{k}$). Then the representation $R_{G_{\infty}}$ (resp.\ $R_{H_{\infty}}$) of $G_{\infty}$ (resp.\ $H_{\infty}$), obtained by right translation on $\mathcal{F}_{n\times\infty}$, is a \emph{holomorphic representation} of $G_{\infty}$ (resp.\ $H_{\infty}$) in the sense defined by Ol'shanskii. Then $R_{G_{\infty }}$ and $R_{H_{\infty}}$ give rise to the dual representations $R_{G_{n}^{\prime}}^{\prime}$ and $R_{H_{n}^{\prime}}^{\prime}$ of the \emph{dual pairs} $\left( G_{n}^{\prime},G_{\infty}^{}\right) $ and $\left( H_{n}^{\prime},H_{\infty}^{}\right) $, respectively. The generalized \emph{Bargmann--Segal--Fock space} $\mathcal{F}_{n\times\infty}$ can be considered as both a $\left( G_{n}^{\prime},G_{\infty}^{}\right ) $\emph{-dual module} and an $\left( H_{n}^{\prime},H_{\infty}^{}\right ) $-dual module. It is shown that the following multiplicity-free decompositions of $\mathcal {F}_{n\times\infty}$ into \emph{isotypic components} $\mathcal{F} _{n\times\infty }=\sum\limits_{\left( \lambda\right) }{\!{\oplus}\,}\mathcal {I}_{n\times\infty}^{\left( \lambda\right) }= \sum\limits_{\left( \mu\right) }{\!{\oplus}\,}\mathcal{I}_{n\times\infty }^{\left( \mu\right) }$ hold, where $\left( \lambda\right) $ is a common \emph{irreducible signature} of the pair $\left( G_{n}^{\prime},G_{\infty}^{}\right) $ and $\left( \mu\right) $ a common irreducible signature of the pair $\left( H_{n}^{\prime},H_{\infty}^{}\right) $, and $\mathcal{I}_{n\times\infty }^{\left( \lambda\right) }$ (resp.\ $\mathcal{I}_{n\times\infty}^{\left( \mu\right) }$) is both the isotypic component of the equivalence classes $\left( \lambda\right) _{G_{\infty}}$ (resp.\ $\left( \mu\right) _{H_{\infty}}$) and $\left( \lambda^{\prime}\right) _{G_{n}^{\prime}}$ (resp.\ $\left( \mu^{\prime}\right) _{H_{n}^{\prime}}$). 
A \emph{reciprocity theorem,} giving the multiplicity of $\left( \mu\right) _{H_{\infty}}$ in the restriction to $H_{\infty}$ of $\left( \lambda\right) _{G_{\infty}}$ in terms of the multiplicity of $\left( \lambda^{\prime}\right) _{G_{n}^{\prime}}$ in the restriction to $G_{n}^{\prime}$ of $\left( \mu^{\prime}\right) _{H_{n}^{\prime}}$, constitutes the main result of this paper. Several applications of this theorem to Physics are also discussed.\\[10pt] \textsc{R{\'{e}}sum{\'{e}}.} Soit $\mu $ la mesure de Gauss definie sur l'espace vectoriel $\mathbb{C}^{n\times k}$ par la formule \newline \begin{minipage}{\textwidth} \hskip-36pt\begin{minipage}{\textwidth} \begin{equation*} d\mu\left( Z\right) =\pi^{-nk}\exp\left[ -\operatorname *{Tr}\left( ZZ^{\dag}\right) \right] \,dZ,\qquad z\in\mathbb{C}^{n\times k}, \end{equation*} \end{minipage}\hskip-36pt\vspace*{\abovedisplayskip} \end{minipage} o{\`{u}} l'on d{\'{e}}signe par $\operatorname *{Tr}$ la trace d'une matrice, $Z^{\dag}=\bar{Z}^{T} $, et par $dZ$ la mesure de Lebesgue sur $\mathbb{C}^{n\times k}$. Soit $\mathcal{F}_{n\times k} $ l'espace hilbertien de Barg\-mann--Segal--Fock des fonctions enti{\`{e}}res holomorphes $f\colon\mathbb{C}^{n\times k}\rightarrow\mathbb{C}$ telles que $f$ soient de carr{\'{e}}-integrable par rapport {\`{a}} la mesure $\mu$. On fixe $n$ et l'on d{\'{e}}signe par $\mathcal{F}_{n\times\infty}$ le compl{\'{e}}t{\'{e}} de la \emph{limite inductive} par rapport {\`{a}} $k$ des espaces $\mathcal {F}_{n\times k}$. Pour chaque $k$ soient $G_{k}$ et $H_{k}$ deux groupes compacts tels que $H_{k}\subset G_{k}\subset\mathrm{GL}_{k}\left( \mathbb{C}\right ) $, et l'on suppose aussi que $H_{k-1}\subset H_{k}\subset H_{k+1}\subset\cdots$ et $G_{k-1}\subset G_{k}\subset G_{k+1}\subset\cdots$. Soit $G_{\infty}$ (resp.\ $H_{\infty}$) la limite inductive de la chaine $\left\{ G_{k}\right\}$ (resp.\ $\left\{ H_{k}\right\}$). Alors la repr{\'{e}}sentation $R_{G_{\infty}}$ (resp.\ $R_{H_{\infty}}$) de $G_{\infty }$ (resp.\ $H_{\infty}$), obtenue par translation {\`{a}} droite sur $\mathcal{F}_{n\times\infty}$, est \emph{holomorphe} dans le sens de Ol'shanskii. Les repr{\'{e}}sentations $R_{G_{\infty }}$ et $R_{H_{\infty}}$ donnent lieu aux repr{\'{e}}sentations $R_{G_{n}^{\prime}}^{\prime}$ et $R_{H_{n}^{\prime}}^{\prime}$, respectivement, des \emph{paires duales} $\left( G_{n}^{\prime},G_{\infty}^{}\right) $ et $\left( H_{n}^{\prime},H_{\infty}^{}\right) $. L'espace hilbertien generalis{\'{e}} de Bargmann--Segal--Fock $\mathcal{F}_{n\times\infty}$ peut {\^{e}}tre consider{\'{e}} en m{\^{e}}me temps comme un $\left( G_{n}^{\prime},G_{\infty}^{}\right) $-module et un $\left( H_{n}^{\prime},H_{\infty}^{}\right) $-module. 
On montre que l'on a les d{\'{e}}compositions suivantes de $\mathcal {F}_{n\times\infty}$ en uniques \emph{composantes isotypiques} \newline \begin{minipage}{\textwidth} \hskip-36pt\begin{minipage}{\textwidth} \begin{equation*} \mathcal{F}_{n\times\infty }=\sum\limits_{\left( \lambda\right) }{\!{\oplus}\,}\mathcal {I}_{n\times\infty}^{\left( \lambda\right) }= \sum\limits_{\left( \mu\right) }{\!{\oplus}\,} \mathcal{I}_{n\times\infty}^{\left( \mu\right) }, \end{equation*} \end{minipage}\hskip-36pt\vspace*{\abovedisplayskip} \end{minipage} o{\`{u}} $\left( \lambda\right) $ est une \emph{signature irr{\'{e} }ductible} commune de la paire $\left( G_{n}^{\prime},G_{\infty}^{}\right) $ et $\left( \mu\right) $ celle de la paire $\left( H_{n}^{\prime},H_{\infty}^{}\right) $, et o{\`{u}} $\mathcal{I} _{n\times\infty}^{\left( \lambda\right) }$ (resp.\ $\mathcal{I}_{n\times\infty}^{\left( \mu\right) }$) est {\`{a}} la fois la composante isotypique de la classe d'{\'{e}}quivalence de $\left( \lambda\right) _{G_{\infty}}$ (resp.\ $\left( \mu\right) _{H_{\infty}}$) et celle de $\left( \lambda^{\prime}\right) _{G_{n}^{\prime}} $ (resp.\ $\left( \mu^{\prime}\right) _{H_{n}^{\prime}}$). On donne une d{\'{e}} monstration d'un \emph{th{\'{e}}or{\`{e}}me de r{\'{e}}ciprocit{\'{e}},} donnant la multiplicit{\'{e}} de $\left( \mu\right) _{H_{\infty}} $ dans la restriction {\`{a}} $H_{\infty}$ de $\left( \lambda\right) _{G_{\infty}}$, en fonction de la multiplicit{\'{e}} de $\left( \lambda^{\prime}\right) _{G_{n}^{\prime} }$ dans la restriction {\`{a}} $G_{n}^{\prime}$ de $\left( \mu^{\prime}\right) _{H_{n}^{\prime}}$. L'article se termine par une discussion de plusieurs applications en Physique du th{\'{e}}or{\`{e}}me pr{\'{e}}c{\'{e}}dant. \end{abstract} \maketitle
\section{\label{Int}Introduction}
In recent years there is great interest, both in Physics and in Mathematics, in the theory of unitary representations of infinite-dimensional groups and their Lie algebras (see, for example, \cite{Kac90}, and the literature cited therein). Starting with the seminal work of I.~Segal in \cite{Seg50} the representation theory of $\mathrm{U}\left( \infty\right) $ and other classical infinite-dimensional groups was thoroughly investigated by Kirillov in \cite{Kir73}, Stratila and Voiculescu in \cite{StVo75}, Pickrell in \cite{Pic87}, Ol'shanskii in \cite{Ol'88}, Gelfand and Graev in \cite{GeGr90}, Kac in \cite{Kac80}, to cite just a few. A more complete list of references can be found in the comprehensive and important work of Ol'shanskii in \cite{Ol'90}.
In \cite{Ol'90} Ol'shanskii generalized Howe's theory of dual pairs to some infinite-dimensional dual pairs of groups. Recently in \cite{Ton98} and \cite{Ton97} we investigated the generalized Casimir invariants of these infinite-dimensional dual pairs. In \cite{Ton95} we gave a general reciprocity theorem for finite-dimensional dual pairs of groups which generalized our previous results in \cite{KlTo89} and \cite{LeTo95}. In this article we give a generalization of this reciprocity theorem to the case of dual pairs where one member is infinite-dimensional and the other is finite-dimensional, and discuss the general case where both members are infinite-dimensional. In Section \ref{FD} we review the reciprocity theorem given in \cite{Ton95}, which serves as the necessary background for the generalized theorem, and, more importantly, discuss several interesting applications of this theorem. Section \ref{FID} deals with our main theorem, and the paper ends with a short conclusion in Section \ref{Con}.
\section{\label{FD}The Reciprocity Theorem for Finite-Dimensional Pairs of Groups and Its Applications}
The reciprocity theorem of \cite{Ton95} applies in the more general context of dual representations, but in this paper we shall restrict ourselves to the case of the oscillator dual representations in which one of the members is a compact group.
Let $\mathbb{C}^{n\times k}$ denote the vector space of all $n\times k$ complex matrices. Let $\mu$ denote the Gaussian measure on $\mathbb{C}^{n\times k}$ defined by \begin{equation} d\mu\left( Z\right) =\pi^{-nk}\exp\left[ -\operatorname*{Tr}\left( ZZ^{\dag}\right) \right] \,dZ,\qquad Z\in\mathbb{C}^{n\times k}, \label{eqFD.1} \end{equation} where in Eq.\ (\ref{eqFD.1}) $Z^{\dag}$ denotes the adjoint of the matrix $Z$ and $dZ$ denotes the Lebesgue measure on $\mathbb{C}^{n\times k}$. Let $\mathcal{F}_{n\times k}\equiv\mathcal{F}\left( \mathbb{C}^{n\times k}\right) $ denote the Bargmann--Segal--Fock space of all holomorphic entire functions on $\mathbb{C}^{n\times k}$ which are also square-integrable with respect to $d\mu$. Endowed with the inner product \begin{equation}
\rip{f}{g}
=\int_{\mathbb{C}^{n\times k}}f\left( Z\right) \overline{g\left( Z\right) }\,d\mu\left( Z\right) \mathrel{;}\qquad f,g\in\mathcal{F}_{n\times k}, \label{eqFD.2} \end{equation} $\mathcal{F}_{n\times k}$ has a Hilbert-space structure. It can be easily verified that, on polynomials, this inner product coincides with \begin{equation}
\ip{f}{g}=f\left( D\right) \overline{g\left( \bar{Z}\right) }|_{Z=0}, \label{eqFD.3} \end{equation} where $f\left( D\right) $ denotes the formal power series obtained by replacing $Z_{\alpha j}$ by the partial derivative $\partial/\partial Z_{\alpha j}$ ($1\leq\alpha\leq n$, $1\leq j\leq k$). In fact, if $\left( r\right) =\left( r_{11},\dots,r_{nk}\right) $ is a multi-index of integers $r_{\alpha j}\geq0$ and we set $Z^{\left( r\right) }\equiv Z_{11}^{r_{11}}\cdots Z_{nk}^{r_{nk}}$ and $\left( r\right) !=r_{11}!\cdots r_{nk}!$, then it is easy to verify that \begin{equation}
\rip{\frac{Z^{\left( r\right) }}{\left[ \left ( r\right) !\right] ^{\frac{1}{2}}}}{\frac{Z^{\left( r^{\prime}\right) } }{\left[ \left( r^{\prime}\right) !\right] ^{\frac{1}{2}}}}
=\ip{\frac{Z^{\left( r\right) }}{\left[ \left( r\right) !\right] ^{\frac{1} {2}}}}{\frac{Z^{\left( r^{\prime}\right) }}{\left[ \left( r^{\prime} \right) !\right] ^{\frac{1}{2}}}}=\delta_{\left( r\right) \left( r^{\prime }\right) }. \label{eqFD.4} \end{equation} It follows immediately from Eq.\ (\ref{eqFD.4}) that $\left\{ Z^{\left( r\right) }\mathop{\big/}\left[ \left( r\right) !\right] ^{\frac{1}{2} }\right\} _{\left( r\right) }$ forms an orthonormal basis for $\mathcal{F}_{n\times k}$ when $\left( r\right) $ ranges over all multi-indices; moreover $\mathcal{P}_{n\times k}\equiv\mathcal{P}\left( \mathbb{C}^{n\times k}\right) $, the subspace of all polynomial functions on $\mathbb{C}^{n\times k}$, is dense in $\mathcal{F}_{n\times k}$.
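As a quick illustration \textup{(}this consistency check is ours and is not needed in the sequel\textup{)}, take $n=k=1$ and $f=g=Z^{m}$; then both inner products return the same value: \begin{equation*} \rip{Z^{m}}{Z^{m}}=\frac{1}{\pi}\int_{\mathbb{C}}\left| Z\right| ^{2m}e^{-\left| Z\right| ^{2}}\,dZ=m!=\frac{d^{m}\;}{dZ^{m}}\,Z^{m}\bigg|_{Z=0}=\ip{Z^{m}}{Z^{m}}, \end{equation*} in agreement with Eq.\ \textup{(\ref{eqFD.4}).}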
Let $G$ and $G^{\prime}$ be two topological groups. Let $R_{G}$ and $R_{G^{\prime}}^{\prime}$ be continuous unitary and completely (discretely) reducible representations of $G$ and $G^{\prime}$ on $\mathcal{F}_{n\times k}$ such that $R_{G}$ and $R_{G^{\prime}}^{\prime}$ commute. Then we have the following definition of \emph{dual representations} (for the definition of dual representations in a more general context see \cite{Ton95}).
\begin{definition} \label{DefFD.1}The representations $R_{G}$ and $R_{G^{\prime}}^{\prime}$ are said to be \emph{dual} if the $G^{\prime}\times G$-module $\mathcal{F}_{n\times k}$ is decomposed into a multiplicity-free orthogonal direct sum of the form \begin{equation} \mathcal{F}_{n\times k}=\sum\limits_{\left( \lambda\right) }{\!{\oplus}\,}\mathcal{I}_{n\times k}^{\left( \lambda\right) }, \label{eqFD.5} \end{equation} where in Eq.\ \textup{(\ref{eqFD.5})} the label $\left( \lambda\right) $ characterizes both an equivalence class of an irreducible unitary representation $\lambda_{G}$ of $G$ and an equivalence class of an irreducible unitary representation $\lambda_{G^{\prime}}^{\prime}$ of $G^{\prime}$, and $\mathcal{I}_{n\times k}^{\left( \lambda\right) }$ denotes the $\left( \lambda\right) $\emph{-isotypic component,} i.e., the direct sum \textup{(}not canonical\/\textup{)} of all irreducible subrepresentations of $R_{G}$ \textup{(}resp.\ $R_{G^{\prime}}^{\prime}$\textup{)} that belong to the equivalence class $\lambda_{G}$ \textup{(}resp.\ $\lambda_{G^{\prime}}^{\prime}$\textup{).} Moreover the $G^{\prime}\times G$-submodule $\mathcal{I}_{n\times k}^{\left( \lambda\right) }$ is irreducible for all \emph{signatures} $\left( \lambda\right) $; i.e., $\mathcal{I}_{n\times k}^{\left( \lambda\right) }\approx V^{\left( \lambda_{G}\right) }\mathop{\hat{\otimes}}W^{\left( \lambda_{G^{\prime}}^{\prime}\right) }$, where $V^{\left( \lambda_{G}\right) }$ \textup{(}resp.\ $W^{\left( \lambda_{G^{\prime}}^{\prime}\right) }$\textup{)} is an irreducible $G$-module of class $\left( \lambda_{G}\right) $ \textup{(}resp.\ $G^{\prime}$-module of class $\left( \lambda_{G^{\prime}}^{\prime}\right) $\textup{).}
We refer to the decomposition \textup{(\ref{eqFD.5})} as the \emph{canonical decomposition} of the $G^{\prime}\times G$-module $\mathcal{F}_{n\times k}$. \end{definition}
In this context we have the following theorem which is a special case of Theorem \textup{4.1} in \cite{Ton95}.
\begin{theorem} \label{ThmFD.2}Let $G$ be a compact group. Let $R_{G}$ and $R_{G^{\prime} }^{\prime}$ be given dual representations on $\mathcal{F}_{n\times k}$. Let $H$ be a compact subgroup of $G$ and let $R_{H}$ be the representation of $H$ on $\mathcal{F}_{n\times k}$ obtained by restricting $R_{G}$ to $H$. If there exists a group $H^{\prime}\supset G^{\prime}$ and a representation $R_{H^{\prime}}^{\prime}$ on $\mathcal{F}_{n\times k}$ such that $R_{H^{\prime}}^{\prime}$ is dual to $R_{H}$ and $R_{G^{\prime}}^{\prime}$ is the restriction of $R_{H^{\prime}}^{\prime}$ to the subgroup $G^{\prime}$ of $H^{\prime}$ then we have the following multiplicity-free decompositions of $\mathcal{F}_{n\times k}$ into isotypic components \begin{equation} \mathcal{F}_{n\times k}=\sum\limits_{\left( \lambda\right) }{\!{\oplus} \,}\mathcal{I}_{n\times k}^{\left( \lambda\right) }= \sum\limits_{\left( \mu\right) }{\!{\oplus}\,}\mathcal{I}_{n\times k}^{\left( \mu\right) } \label{eqFD.6} \end{equation} where $\left( \lambda\right) $ is a common irreducible signature of the pair $\left( G^{\prime},G\right) $ and $\left( \mu\right) $ is a common irreducible signature of the pair $\left( H^{\prime},H\right) $.
If $\lambda_{G}$ \textup{(}resp.\ $\lambda_{G^{\prime}}^{\prime}$\textup{)} denotes an irreducible unitary representation of class $\left( \lambda \right) $ \linebreak and $\mu^{}_{H}$ \textup{(}resp.\ $\mu_{H^{\prime} }^{\prime}$\textup{)} denotes an irreducible unitary representation of class $\left( \mu\right) $ \linebreak then the multiplicity $\dim\left[
\operatorname*{Hom}_{H}\left( \mu^{}_{H}:\lambda_{G}|_{H}\right) \right] $ of the irreducible representation \linebreak $\mu^{}_{H}$ in the restriction to $H$ of the representation $\lambda_{G}$ is equal to the multiplicity $\dim\left[ \operatorname*{Hom}_{G^{\prime}}\left( \lambda_{G^{\prime}
}^{\prime}:\mu_{H^{\prime}}^{\prime}|_{G^{\prime}}\right) \right] $ of the irreducible representation $\lambda_{G^{\prime}}^{\prime}$ in the restriction to $G^{\prime}$ of the representation $\mu_{H^{\prime}}^{\prime}$. \end{theorem}
\begin{remarks}
In many cases $\operatorname*{Hom}_{H}\left( \mu^{}_{H}:\lambda_{G}|_{H}\right) $ and $\operatorname*{Hom}_{G^{\prime}}\left( \lambda_{G^{\prime}}^{\prime}:\mu_{H^{\prime}}^{\prime}|_{G^{\prime}}\right) $ are shown to be isomorphic and can be explicitly constructed in terms of generalized Casimir operators as given in \cite{KlTo92} and \cite{LeTo94}. \end{remarks}
To illustrate this theorem we devote the rest of this section to some typical examples and discuss their generalization.
\begin{examples} \label{ExaFD.3} \quad
1) Consider $\mathcal{F}_{1\times k}$ with $k\geq2$; then $\mathcal{F} _{1\times k}$ is the classical Barg\-mann space first considered by V.~Bargmann in \cite{Bar61}. Then $\mathcal{P}_{1\times k}$ is the algebra of all polynomial functions in $k$ variables $\left( Z_{1},\dots,Z_{k}\right) =Z$. Let $G=\mathrm{U}\left( k\right) $ and $G^{\prime}=\mathrm{U}\left( 1\right) $; then the complexification of $\mathrm{U}\left( k\right) $ \textup{(}resp.\ $\mathrm{U}\left( 1\right) $\textup{)} is $G_{\mathbb{C} }=\mathrm{GL}_{k}\left( \mathbb{C}\right) $ \textup{(}resp.\ $G_{\mathbb{C} }^{\prime}=\mathrm{GL}_{1}\left( \mathbb{C}\right) $\textup{).} An element $f$ of $\mathcal{F}_{1\times k}$ is of the form \begin{equation}
f\left( Z\right) =\sum_{\left| \left( r\right) \right| =0}^{\infty }c_{\left( r\right) }Z^{\left( r\right) } \label{eqFD.7} \end{equation}
with $\left( r\right) =\left( r_{1},\dots,r_{k}\right) $, $\left| \left(
r\right) \right| =r_{1}+\dots+r_{k}$, and $Z^{\left( r\right) } =Z_{1}^{r_{1}}\cdots Z_{k}^{r_{k}}$, $c_{\left( r\right) }\in\mathbb{C}$
such that $\sum_{\left| \left( r\right) \right| =0}^{\infty}\left|
c_{\left( r\right) }\right| ^{2}\left( r\right) !<\infty$, where $\left( r\right) !=r_{1}!\cdots r_{k}!$. The system $\left\{ Z^{\left( r\right) }\mathop{\big/}\left[ \left( r\right) !\right] ^{\frac{1}{2}}\right\} $, where $\left( r\right) $ ranges over all multi-indices, forms an orthonormal basis for $\mathcal{F}_{1\times k}$. $R_{G_{\mathbb{C}}}$ and $R_{G}$ are defined by \begin{equation}
\begin{cases} \left[ R_{G_{\mathbb{C}}}\left( g\right) f\right] \left( Z\right ) =f\left( Zg\right) , & g\in\mathrm{GL}_{k}\left( \mathbb{C}\right) , \\ \left[ R_{G}\left( u\right) f\right] \left( Z\right) =f\left( Zu\right ) , & u\in\mathrm{U}\left( k\right) . \end{cases}
\label{eqFD.8} \end{equation} $R_{G_{\mathbb{C}}^{\prime}}^{\prime}$ and $R_{G^{\prime}}^{\prime}$ are defined by \begin{equation}
\begin{cases} \left[ R^{\prime}_{G^{\prime}_{\mathbb{C}}}\left( g^{\prime}\right ) f\right] \left( Z\right) =f\left( \left( g^{\prime}\right) ^{t} Z\right) , & g^{\prime}\in\mathrm{GL}_{1}\left( \mathbb{C}\right) , \\ \left[ R^{\prime}_{G^{\prime}}\left( u^{\prime}\right) f\right] \left( Z\right ) =f\left( \left( u^{\prime}\right) ^{t}Z\right) , & u^{\prime}\in\mathrm {U}\left( 1\right) . \end{cases}
\label{eqFD.9} \end{equation}
The infinitesimal action of $R_{G_{\mathbb{C}}}$ is given by \begin{equation} R_{ij}=Z_{i}\frac{\partial\;}{\partial Z_{j}},\qquad1\leq i,j\leq k, \label{eqFD.10} \end{equation} which form a basis for a Lie algebra isomorphic to \textrm{gl}$_{k}\left( \mathbb{C}\right) $.
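Indeed, a direct computation on holomorphic functions \textup{(}included here as a brief check\textup{)} yields the standard $\mathrm{gl}_{k}$ commutation relations: \begin{equation*} \left[ R_{ij},R_{lm}\right] =\left[ Z_{i}\frac{\partial\;}{\partial Z_{j}},Z_{l}\frac{\partial\;}{\partial Z_{m}}\right] =\delta_{jl}Z_{i}\frac{\partial\;}{\partial Z_{m}}-\delta_{im}Z_{l}\frac{\partial\;}{\partial Z_{j}}=\delta_{jl}R_{im}-\delta_{im}R_{lj}. \end{equation*}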
The infinitesimal action of $R_{G_{\mathbb{C}}^{\prime}}^{\prime}$ is given by \begin{equation} L=\sum_{i=1}^{k}Z_{i}\frac{\partial\;}{\partial Z_{i}}, \label{eqFD.11} \end{equation} which forms a basis for a Lie algebra isomorphic to $\mathrm{gl}_{1}\left( \mathbb{C}\right) $. If $p,q\in\mathcal{P}_{1\times k}$ then from Eq.\ \textup{(2.1)} of \cite{Ton76a} we have \begin{equation} R_{G_{\mathbb{C}}}\left( g\right) p\left( D\right) R_{G_{\mathbb{C}} }\left( g^{-1}\right) =\left[ R_{G_{\mathbb{C}}}\left( g^{\checkmark }\right) p\right] \left( D\right) ,\qquad g\in\mathrm{GL}_{k}\left( \mathbb{C}\right) ,\;g^{\checkmark}=\left( g^{-1}\right) ^{t}, \label{eqFD.12} \end{equation} so that if $u\in\mathrm{U}\left( k\right) $ then \begin{align} \ip{R_{G}\left( u\right) p}{R_{G}\left( u\right) q} & =\left[ R_{G}\left( u\right) p\right] \left( D\right) \overline{\left( R_{G}\left( u\right)
q\right) \left( \bar{Z}\right) }\bigg|_{Z=0}\label{eqFD.13}\\ & =R_{G}\left( u^{\checkmark}\right) p\left( D\right) R_{G}\left( u^{t}\right) R_{G}\left( \bar{u}\right) \overline{q\left( \bar{Z}\right)
}\bigg|_{Z=0}\nonumber\\ & =p\left( D\right) R_{G}\left( u^{t}\bar{u}\right) q\left( \bar
{Z}u^{\checkmark}\right) \bigg|_{Z=0}\nonumber\\ & =\ip{p}{q},\nonumber \end{align} since $u^{t}\bar{u}=1$. A similar computation shows that \begin{equation} R_{G_{\mathbb{C}}^{\prime}}^{\prime}\left( g^{\prime}\right) p\left( D\right) R_{G_{\mathbb{C}}^{\prime}}^{\prime}\left( \left( g^{\prime}\right) ^{-1}\right) =\left[ R_{G_{\mathbb{C}}^{\prime}}^{\prime}\left( \left( g^{\prime}\right) ^{\checkmark}\right) p\right] \left( D\right) ,\qquad g^{\prime}\in\mathrm{GL}_{1}\left( \mathbb{C}\right) , \label{eqFD.14} \end{equation} so that if $u^{\prime}\in\mathrm{U}\left( 1\right) $ then \begin{equation} \ip{R^{\prime}_{G^{\prime}}\left( u^{\prime}\right) p}{R^{\prime}_{G^{\prime}}\left( u^{\prime}\right) q}=\ip{p}{q}. \label{eqFD.15} \end{equation} Note that all equations above from \textup{(\ref{eqFD.12})} to \textup{(\ref{eqFD.15})} remain valid if we replace $\mathbb{C}^{1\times k}$ by $\mathbb{C}^{n\times k}$ and $\mathrm{GL}_{1}\left( \mathbb{C}\right) $ \textup{(}resp.\ $\mathrm{U}\left( 1\right) $\textup{)} by $\mathrm{GL}_{n}\left( \mathbb{C}\right) $ \textup{(}resp.\ $\mathrm{U}\left( n\right) $\textup{).}
It follows that $R_{G}$, $G=\mathrm{U}\left( k\right) $ \textup{(} resp.\ $R_{G^{\prime}}^{\prime}$, $G^{\prime}=\mathrm{U}\left( n\right) $\textup{)} is a continuous unitary representation of $G$ \textup{(} resp.\ $G^{\prime}$\textup{)} on $\mathcal{F}_{n\times k}$.
Let $\mathcal{P}_{1\times k}^{\left( m\right) }$ denote the subspace \textup{(}of $\mathcal{F}_{1\times k}$\textup{)} of all homogeneous polynomial functions of degree $m\geq0$. Then by the Borel--Weil theorem \textup{(}see, e.g., \cite{Ton76a}\textup{)} the restriction of $R_{G_{\mathbb{C}}}$ to $\mathcal{P}_{1\times k}^{\left( m\right) }$ is an irreducible subrepresentation of $R_{G_{\mathbb{C}}}$ with highest weight $(\underset {k}{\underbrace{m,0,\dots,0}})$ and highest weight vector $cZ_{1}^{m}$, $c\in\mathbb{C}^{\ast}$. In fact, by letting the infinitesimal operators $R_{ij}$ act on $\mathcal{P}_{1\times k}^{\left( m\right) }$ one can easily show that $\mathcal{P}_{1\times k}^{\left( m\right) }$ is an irreducible subrepresentation of $R_{G_{\mathbb{C}}}$. By ``Weyl's unitarian trick'' the restriction of this irreducible subrepresentation to $G$ gives an irreducible unitary representation of $G$.
Let $0\neq p\in\mathcal{P}_{1\times k}^{\left( m\right) }$. Then $\left( R_{G^{\prime}}^{\prime}\left( g^{\prime}\right) p\right) \left( Z\right) =p\left( \left( g^{\prime}\right) ^{t}Z\right) =p\left( g^{\prime}Z\right) =\left( g^{\prime}\right) ^{m}p\left( Z\right) $ for all $g^{\prime}\in\mathrm{GL}_{1}\left( \mathbb{C}\right) $. So the one-dimensional subspace of $\mathcal{F}_{1\times k}$ spanned by $p$ is an irreducible $G_{\mathbb{C}}^{\prime}$-submodule with highest weight $\left( m\right) $ and its restriction to $G^{\prime}$ is an irreducible unitary $G^{\prime}$-submodule. In fact, Euler's formula implies that \begin{equation} Lp=mp\text{,\qquad for all }p\in\mathcal{P}_{1\times k}^{\left( m\right) }. \label{eqFD.16} \end{equation} Thus the canonical decomposition of the $G^{\prime}\times G$-module $\mathcal{F}_{1\times k}$ is simply \begin{equation} \mathcal{F}_{1\times k}=\sum\limits_{\makebox[0pt]{\hss$\scriptstyle m=0$\hss}}^{\infty}{\!{\oplus}\,}\mathcal{P}_{1\times k}^{\left( m\right) }. \label{eqFD.17} \end{equation} Let $H$ denote the special orthogonal subgroup $\mathrm{SO}\left( k\right) $. Then $H_{\mathbb{C}}=\mathrm{SO}_{k}\left( \mathbb{C}\right) $, and the ring of all $H$ \textup{(}or $H_{\mathbb{C}}$\textup{)}-invariant polynomials in $\mathcal{P}_{1\times k}$ is generated by the constants and $p_{0}\left( Z\right) =\sum_{1\leq i\leq k}Z_{i}^{2}$. The ring of all $H$ \textup{(}or $H_{\mathbb{C}}$\textup{)}-invariant differential operators with constant coefficients is generated by the constants and the Laplacian $\bigtriangleup =p_{0}\left( D\right) =\sum_{1\leq i\leq k}\partial^{2}/\partial Z_{i}^{2}$. To find the dual representation of $R_{H}$ we follow the method given in \cite{Ton95} by setting \begin{equation} X^{+}=\frac{1}{2}p_{0},\qquad X^{-}=\frac{1}{2}p_{0}\left( D\right) =\frac{1}{2}\bigtriangleup\text{,\qquad and }E=\frac{k}{2}+L. \label{eqFD.18} \end{equation} Then $X^{+}$ \textup{(}resp.\ $X^{-}$\textup{)} acts on $\mathcal{F}_{1\times k}$ as a \emph{creation} \textup{(}resp.\ \emph{annihilation}\textup{)} \emph{operator} and $E$ acts on $\mathcal{F}_{1\times k}$ as a number operator. In fact, if $p\in\mathcal{P}_{1\times k}^{\left( m\right) }$ then $X^{+}p=\frac{1}{2}p_{0}p$, $X^{-}p=\frac{1}{2}\bigtriangleup p$, and $Ep=\left( \left( k/2\right) +m\right) p$, so that $X^{+}$ \emph{raises} $\mathcal{P}_{1\times k}^{\left( m\right) }$ to $\mathcal{P}_{1\times k}^{\left( m+2\right) }$, $X^{-}$ \emph{lowers} $\mathcal{P}_{1\times k}^{\left( m\right) }$ to $\mathcal{P}_{1\times k}^{\left( m-2\right) }$ and $E$ \emph{multiplies} \textup{(}elementwise\textup{) }$\mathcal{P}_{1\times k}^{\left( m\right) }$ by the \emph{number} $\left( k/2\right) +m$. An easy computation shows that \begin{equation} \left[ E,X^{+}\right] =2X^{+},\qquad\left[ E,X^{-}\right] =-2X^{-},\qquad\left[ X^{-},X^{+}\right] =E. \label{eqFD.19} \end{equation} Eq.\ \textup{(\ref{eqFD.19})} gives a faithful representation of the Lie algebra $\mathrm{sl}_{2}\left( \mathbb{R}\right) $. Thus the dual action of $H$ is given by this representation. The integrated form of this Lie algebra representation is more subtle to describe: it is the \emph{metaplectic representation of the two-sheeted covering group} $\widetilde{\mathrm{SL}_{2}\left( \mathbb{R}\right) }$ of $\mathrm{SL}_{2}\left( \mathbb{R}\right) $ \textup{(}or $\mathrm{Sp}_{2}\left( \mathbb{R}\right) $\textup{),} and this group is not a matrix group. 
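For illustration \textup{(}a verification we add here; it is not needed for the argument\textup{)}, the relations \textup{(\ref{eqFD.19})} can be checked directly on monomials when $k=1$, where $X^{+}=\frac{1}{2}Z^{2}$, $X^{-}=\frac{1}{2}\,d^{2}/dZ^{2}$, and $E=\frac{1}{2}+Z\,d/dZ$: \begin{equation*} \left[ X^{-},X^{+}\right] Z^{m}=\tfrac{1}{4}\left[ \left( m+2\right) \left( m+1\right) -m\left( m-1\right) \right] Z^{m}=\left( m+\tfrac{1}{2}\right) Z^{m}=EZ^{m}. \end{equation*}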
The concrete description of this covering group can be obtained by applying the Bargmann--Segal transform which sends the Schr\"{o}dinger representation of this group to its Fock representation $\mathcal{F}_{1\times k}$. However, for our purpose, its infinitesimal action \textup{(\ref{eqFD.19})} together with the action of its maximal compact group $G^{\prime}=\mathrm{U}\left( 1\right) $, which is particularly simple, will suffice. Indeed, it is easy to show that we have the following decomposition of $\mathcal{P}_{1\times k}^{\left( m\right) }$: \begin{equation} \mathcal{P}_{1\times k}^{\left( m\right) }=\mkern27mu\sum\limits_{\makebox[0pt]{\hss$\scriptstyle i=0,\dots,\left[ m/2\right] $\hss}}{\!{\oplus}\mkern9mu}p_{0}^{i}\mathcal{H}_{1\times k}^{\left( m-2i\right) }, \label{eqFD.20} \end{equation} where $\left[ m/2\right] $ denotes the integral part of $m/2$, and $\mathcal{H}_{1\times k}^{\left( m-2i\right) }$ denotes the subspace of all harmonic homogeneous polynomials of degree $\left( m-2i\right) $, i.e., all functions $p\in\mathcal{P}_{1\times k}^{\left( m-2i\right) }$ such that $\bigtriangleup p=0$. For an integer $r\geq0$ it can be easily shown that the restriction $R_{H}^{\left( r\right) }$ of $R_{H}$ to $\mathcal{H}_{1\times k}^{\left( r\right) }$ is an irreducible representation of $H$ with signature $(\underset{\left[ k/2\right] }{\underbrace{r,0,\dots,0}})$ and highest weight vector \begin{equation} f^{\left( r\right) _{H}}\left( Z\right) =
\begin{cases} \left( Z_{1}+iZ_{s+1}\right) ^{r}, & \text{if }k=2s, \\ \left( Z_{1}+iZ_{s+2}\right) ^{r}, & \text{if }k=2s+1,\qquad i=\sqrt{-1}. \end{cases}
\label{eqFD.21} \end{equation} For each integer $j\geq0$, the restriction of $R_{H}$ to the subspace $p_{0}^{j}\mathcal{H}_{1\times k}^{\left( r\right) }$ is equivalent to $R_{H}^{\left( r\right) }$ since $p_{0}^{j}$ is $H$-invariant. Set \begin{equation} \mathcal{I}_{1\times k}^{\left( r\right) }=\sum\limits_{\makebox[0pt]{\hss$\scriptstyle j=0$\hss}}^{\infty}{\!{\oplus}\,}p_{0}^{j}\mathcal{H}_{1\times k}^{\left( r\right) }; \label{eqFD.22} \end{equation} then $\mathcal{I}_{1\times k}^{\left( r\right) }$ is the $(\underset{\left[ k/2\right] }{\underbrace{r,0,\dots,0}})$-isotypic component of the class of $R_{H}^{\left( r\right) }$. From \textup{(\ref{eqFD.20})} and \textup{(\ref{eqFD.22})} we see that \begin{equation} \mathcal{F}_{1\times k}=\sum\limits_{\makebox[0pt]{\hss$\scriptstyle r=0$\hss}}^{\infty}{\!{\oplus}\,}\mathcal{I}_{1\times k}^{\left( r\right) }. \label{eqFD.23} \end{equation} Obviously, $R_{H^{\prime}}^{\prime}\left( u\right) =R_{G^{\prime}}^{\prime}\left( u\right) $, $u\in G^{\prime}$, leaves each one-dimensional subspace $cp_{0}^{j}h$, $c\in\mathbb{C}$, invariant, since $R_{G^{\prime}}^{\prime}\left( u\right) \left( p_{0}^{j}h\right) =u^{r+2j}\left( p_{0}^{j}h\right) $ \textup{(}alternatively, $E\left( p_{0}^{j}h\right) =\left( \left( k/2\right) +r+2j\right) \left( p_{0}^{j}h\right) $\textup{),} for all $h\in\mathcal{H}_{1\times k}^{\left( r\right) }$. Clearly, $X^{+}\left( p_{0}^{j}h\right) =\frac{1}{2}p_{0}^{\left( j+1\right) }h$, $h\in\mathcal{H}_{1\times k}^{\left( r\right) }$. Finally from the equation \begin{equation} X^{-}\left( p_{0}f\right) =\left( k+2s\right) f+\frac{1}{2}p_{0}\bigtriangleup f, \label{eqFD.24} \end{equation} valid if $f$ is a homogeneous polynomial function of degree $s$ \textup{(}it follows from $\bigtriangleup\left( p_{0}f\right) =2kf+4\sum_{i}Z_{i}\,\partial f/\partial Z_{i}+p_{0}\bigtriangleup f$ and Euler's formula\textup{),} we deduce by induction on the integer $j\geq1$ that \begin{equation} X^{-}\left( p_{0}^{j}h\right) =j\left( k+2\left( r+j-1\right) \right) p_{0}^{j-1}h,\qquad h\in\mathcal{H}_{1\times k}^{\left( r\right) }. \label{eqFD.25} \end{equation} For each fixed $h\in\mathcal{H}_{1\times k}^{\left( r\right) }$ let $Jh$ denote the subspace of $\mathcal{I}_{1\times k}^{\left( r\right) }$ spanned by the set $\left\{ p_{0}^{j}h\mid j=0,1,2,\dots\right\} $. Then it follows from the previous discussion that the subrepresentation of the Lie algebra $\mathrm{sl}_{2}\left( \mathbb{R}\right) $ on $Jh$ is irreducible, and thus the metaplectic subrepresentation of $\widetilde{\mathrm{SL}_{2}\left( \mathbb{R}\right) }$ on $Jh$ is irreducible as well. As a $\mathrm{U}\left( 1\right) $-module $Jh$ is reducible, and for this special case each one-dimensional subspace $cp_{0}^{j}h$, $c\in\mathbb{C}$, is an irreducible submodule, and the lowest one is $ch$ which has weight $r$ \textup{(}or $\left( k/2\right) +r$\textup{)} since \begin{equation} R_{G^{\prime}}^{\prime}\left( u\right) h=u^{r}h,\qquad u\in\mathrm{U}\left( 1\right) ,\text{\qquad or\qquad}Eh=\left( \frac{k}{2}+r\right) h. \label{eqFD.26} \end{equation} In general, if a \emph{holomorphic discrete series} of a noncompact semisimple Lie group such as $\widetilde{\mathrm{SL}_{2}\left( \mathbb{R}\right) }$, considered as a $K$-module, where $K$ is its maximal compact subgroup, decomposes into a discrete sum of irreducible submodules, each one of them can be characterized by a signature \textup{(}highest weight, for example\textup{)} and the one with the lowest highest weight \textup{(}under the lexicographic ordering\textup{)} is unique. This \emph{lowest} $K$\emph{-type highest weight,} which corresponds to the \emph{Harish-Chandra or Blattner parameter,} can be used to label the given holomorphic discrete series. 
We shall call this label its \emph{signature.} In our example, the holomorphic discrete series $Jh$ of $\widetilde{\mathrm{SL}_{2}\left( \mathbb{R}\right) }$ has signature $r$. If $\dim\left( \mathcal{H}_{1\times k}^{\left( r\right) }\right) =d$ \textup{(}actually, $d=\binom{k+r-1}{r}-\binom{k+r-3}{r-2}$\textup{)} then $\mathcal{I}_{1\times k}^{\left( r\right) }$ is the $r$-isotypic component \textup{(}of the metaplectic representation of $\widetilde{\mathrm{SL}_{2}\left( \mathbb{R}\right) }$\textup{)} which contains $d$ isomorphic copies of signature $r$.
Now let us verify Theorem \textup{\ref{ThmFD.2}} for this simple example. From Eq.\ \textup{(\ref{eqFD.20})} we have \begin{multline} \dim\left[ \operatorname*{Hom}\nolimits_{\mathrm{SO}\left( k\right) }\left( (\underset{\left[ k/2\right] }{\underbrace{r,0,\dots,0}})_{\mathrm{SO}\left( k\right) }:(\underset{k}{\underbrace{m,0,\dots,0}
})_{\mathrm{U}\left( k\right) }\bigg|_{\mathrm{SO}\left( k\right) }\right) \right] \label{eqFD.27}\\ =
\begin{cases} 1, & \text{if }r=m-2i\text{ for }i=0,\dots,\left[ m/2\right] , \\ 0, & \text{otherwise,} \end{cases}
\end{multline} and from Eq.\ \textup{(\ref{eqFD.22})} and Eq.\ \textup{(\ref{eqFD.26})} we have \begin{equation} \dim\left[ \operatorname*{Hom}\nolimits_{\mathrm{U}\left( 1\right) }\left( m_{\mathrm{U}\left( 1\right) }:r_{\widetilde{\mathrm{SL}_{2}\left(
\mathbb{R}\right) }}\bigg|_{\mathrm{U}\left( 1\right) }\right) \right] =
\begin{cases} 1, & \text{if }2j+r=m, \\ 0, & \text{otherwise,} \end{cases}
\label{eqFD.28} \end{equation} which are obviously identical.
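As a concrete instance \textup{(}added here purely for illustration\textup{)}, take $k=3$ and $m=4$. On the compact side, the $\mathrm{U}\left( 3\right) $-module of signature $\left( 4,0,0\right) $ restricted to $\mathrm{SO}\left( 3\right) $ decomposes, by \textup{(\ref{eqFD.20})}, into the harmonic pieces of degrees $r=4,2,0$, each occurring exactly once. On the noncompact side, the $\mathrm{U}\left( 1\right) $-weight $m=4$ occurs in the holomorphic discrete series of signature $r$ exactly when $2j+r=4$, i.e., for $r=0,2,4$, again with multiplicity one, as the theorem predicts.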
For arbitrary $n$ such that $n\leq k$, Eq.\ \textup{(\ref{eqFD.7})} remains valid with $\left( r\right) =\left( r_{11},\dots,r_{nk}\right) $ and $Z^{\left( r\right) }=Z_{11}^{r_{11}}\cdots Z_{nk}^{r_{nk}}$. Eqs.\ \textup{(\ref{eqFD.8})}, \textup{(\ref{eqFD.12})}, \textup{(\ref{eqFD.13})}, and \textup{(\ref{eqFD.14})} remain valid. Eq.\ \textup{(\ref{eqFD.10})} is replaced by \begin{equation} R_{ij}=\sum_{\alpha=1}^{n}Z_{\alpha i}\frac{\partial\;}{\partial Z_{\alpha j}},\qquad1\leq i,j\leq k. \tag*{$($\textup{\ref{eqFD.10}}$)^{\prime}$} \end{equation} Eq.\ \textup{(\ref{eqFD.11})} is replaced by \begin{equation} L_{\alpha\beta}=\sum_{i=1}^{k}Z_{\alpha i}\frac{\partial\;}{\partial Z_{\beta i}},\qquad1\leq\alpha,\beta\leq n. \tag*{$($\textup{\ref{eqFD.11}}$)^{\prime}$} \end{equation} Let $B_{n}^{\prime}$ denote the lower triangular Borel subgroup of $G_{\mathbb{C}}^{\prime}=\mathrm{GL}_{n}\left( \mathbb{C}\right) $, let $\left( \lambda\right) $ be an $n$-tuple of integers such that $\lambda_{1}\geq\lambda_{2}\geq\dots\geq\lambda_{n}\geq0$, let $\lambda\colon B_{n}^{\prime}\rightarrow\mathbb{C}^{\ast}$ be the holomorphic character defined on $B_{n}^{\prime}$ by \[ \lambda\left( b^{\prime}\right) =\left( b_{11}^{\prime}\right) ^{\lambda_{1}}\cdots\left( b_{nn}^{\prime}\right) ^{\lambda_{n}}\text{\qquad if }b^{\prime}= \begin{bmatrix} b_{11}^{\prime} & & 0\\ & \ddots & \\ \ast & & b_{nn}^{\prime} \end{bmatrix} \text{ belongs to }B_{n}^{\prime}. \] Let $\mathcal{P}_{n\times k}^{\left( \lambda\right) }$ denote the subspace of all polynomial functions on $\mathbb{C}^{n\times k}$ which also satisfy the covariant condition \begin{equation} f\left( b^{\prime}Z\right) =\lambda\left( b^{\prime}\right) f\left( Z\right) ,\qquad\left( b^{\prime},Z\right) \in B_{n}^{\prime}\times\mathbb{C}^{n\times k}. \label{eqFD.29} \end{equation} Let $R_{\lambda}$ denote the representation of $G$ obtained by right translation on $\mathcal{P}_{n\times k}^{\left( \lambda\right) }$. Then by the Borel--Weil theorem \textup{(}see, e.g., \cite[Theorem 1.5]{Ton76a}\textup{)} $R_{\lambda}$ is irreducible with highest weight $\left( \lambda\right) $ and highest weight vector \begin{equation} cf_{\lambda}\left( Z\right) =c\Delta_{1}^{\lambda_{1}-\lambda_{2}}\left( Z\right) \Delta_{2}^{\lambda_{2}-\lambda_{3}}\left( Z\right) \cdots\Delta_{n}^{\lambda_{n}}\left( Z\right) ,\qquad c\in\mathbb{C}^{\ast}, \label{eqFD.30} \end{equation} where in Eq.\ \textup{(\ref{eqFD.30})} $\Delta_{i}\left( Z\right) $ denotes the $i^{\text{th}}$ principal minor of $Z$.
Similarly let $B_{k}^{t}$ denote the upper triangular Borel subgroup of $G_{\mathbb{C}}=\mathrm{GL}_{k}\left( \mathbb{C}\right) $ and let $\lambda^{\prime}\colon B_{k}^{t}\rightarrow\mathbb{C}^{\ast}$ be the holomorphic character defined on $B_{k}^{t}$ by \[ \lambda^{\prime}\left( b\right) =b_{11}^{\lambda_{1}}\cdots b_{nn}^{\lambda_{n}}\text{\qquad if }b= \begin{bmatrix} b_{11} & & & & \ast\\ & \ddots & & & \\ & & b_{nn} & & \\ & & & \ddots & \\ 0 & & & & b_{kk} \end{bmatrix} \text{ belongs to }B_{k}^{t}. \]
Let $\mathcal{P}_{n\times k}^{\left( \lambda^{\prime}\right) }$ denote the subspace of all polynomial functions on $\mathbb{C}^{n\times k}$ which also satisfy the covariant condition \begin{equation} f\left( Zb\right) =\lambda^{\prime}\left( b\right) f\left( Z\right) ,\qquad\left( b,Z\right) \in B_{k}^{t}\times\mathbb{C}^{n\times k}\text{.} \label{eqFD.31} \end{equation} Let $R_{\lambda^{\prime}}^{\prime}$ denote the representation of $G^{\prime}$ on $\mathcal{P}_{n\times k}^{\left( \lambda^{\prime}\right) }$ defined by \begin{equation} \left[ R_{\lambda^{\prime}}^{\prime}\left( g^{\prime}\right) f\right] \left( Z\right) =f\left( \left( g^{\prime}\right) ^{t}Z\right) ,\qquad g^{\prime}\in G^{\prime}. \label{eqFD.32} \end{equation} Then $R_{\lambda^{\prime}}^{\prime}$ is irreducible with highest weight $\left( \lambda^{\prime}\right) $ and with the same highest weight vector given by Eq.\ \textup{(\ref{eqFD.30}).} By Weyl's unitarian trick the restriction of $R_{\lambda}$ \textup{(}resp.\ $R_{\lambda^{\prime}}^{\prime}$\textup{)} to $G=\mathrm{U}\left( k\right) $ \textup{(}resp.\ $G^{\prime}=\mathrm{U}\left( n\right) $\textup{)} remains irreducible with the same signature.
Let $\mathcal{I}_{n\times k}^{\left( \lambda\right) }$ denote the $G_{\mathbb{C}}^{\prime}\times G_{\mathbb{C}}^{{}}$ \textup{(}or $G^{\prime}\times G$\textup{)}-cyclic module in $\mathcal{F}_{n\times k}$ generated by the highest weight vector $f_{\lambda}$ given by Eq.\ \textup{(\ref{eqFD.30});} then by Theorem \textup{3,} p.\ \textup{150,} of \cite{Zel73}, $\mathcal{I}_{n\times k}^{\left( \lambda\right) }$ is irreducible with highest weight $\left( \lambda^{\prime},\lambda\right) $. For the sake of simplicity we say that the $G_{\mathbb{C}}^{\prime}\times G_{\mathbb{C}}^{{}}$-module $\mathcal{I}_{n\times k}^{\left( \lambda\right) }$ has \emph{signature} $\left( \lambda\right) $. To prove that $\mathcal{I}_{n\times k}^{\left( \lambda\right) }\approx\mathcal{P}_{n\times k}^{\left( \lambda^{\prime}\right) }\mathop{\hat{\otimes}}\mathcal{P}_{n\times k}^{\left( \lambda\right) }$ we define a map $\Phi\colon\mathcal{P}_{n\times k}^{\left( \lambda^{\prime}\right) }\mathop{\hat{\otimes}}\mathcal{P}_{n\times k}^{\left( \lambda\right) }\rightarrow\mathcal{I}_{n\times k}^{\left( \lambda\right) }$ as follows:
Let $f^{\prime}\otimes f\in\mathcal{P}_{n\times k}^{\left( \lambda^{\prime}\right) }\mathop{\hat{\otimes}}\mathcal{P}_{n\times k}^{\left( \lambda\right) }$. Then $f^{\prime}$ and $f$ can be represented in the following form: \begin{equation} f^{\prime}=\sum_{i\in I^{\prime}}c_{i}^{\prime}R_{\lambda^{\prime}}^{\prime}\left( g_{i}^{\prime}\right) f_{\lambda}^{{}},\qquad f=\sum_{j\in I}c_{j}R_{\lambda}\left( g_{j}\right) f_{\lambda}, \label{eqFD.33} \end{equation} where in Eq.\ \textup{(\ref{eqFD.33})} $c_{i}^{\prime},c_{j}^{{}}\in\mathbb{C}$, $g_{i}^{\prime}\in G_{\mathbb{C}}^{\prime}$, $g_{j}\in G_{\mathbb{C}}$, and $I^{\prime}$ and $I$ are two finite index sets. Set $\Phi\left( f^{\prime}\otimes f\right) =\sum_{i\in I^{\prime},\,j\in I}c_{i}^{\prime}c_{j}^{{}}T\left( g_{i}^{\prime},g_{j}^{{}}\right) f_{\lambda}^{{}}$, where $\left[ T\left( g_{i}^{\prime},g_{j}^{{}}\right) f_{\lambda}\right] \left( Z\right) =f_{\lambda}\left( \left( g_{i}^{\prime}\right) ^{t}Zg_{j}^{{}}\right) $. Since \[ R_{\lambda^{\prime}}^{\prime}\left( g^{\prime}\right) f^{\prime}=\sum_{i\in I^{\prime}}c_{i}^{\prime}R_{\lambda^{\prime}}^{\prime}\left( g_{{}}^{\prime}g_{i}^{\prime}\right) f_{\lambda}^{{}} \] and \[ R_{\lambda}\left( g\right) f=\sum_{j\in I}c_{j}R_{\lambda}\left( gg_{j}\right) f_{\lambda} \] it follows that \begin{align*} \Phi\left[ \left( R_{\lambda^{\prime}}^{\prime}\left( g^{\prime}\right) \otimes R_{\lambda}\left( g\right) \right) \left( f^{\prime}\otimes f\right) \right] & =\sum_{i\in I^{\prime},\,j\in I}c_{i}^{\prime}c_{j}^{{}}T\left( g_{{}}^{\prime}g_{i}^{\prime},gg_{j}^{{}}\right) f_{\lambda}^{{}}\\ & =T\left( g^{\prime},g\right) \Phi\left( f^{\prime}\otimes f\right) \end{align*} for all $g^{\prime}\in G_{\mathbb{C}}^{\prime}$ and $g\in G_{\mathbb{C}}$. This means that $\Phi$ is an intertwining operator and by Schur's lemma $\Phi$ is either $0$ or an isomorphism. Since \[ \Phi\left( f_{\lambda}\otimes f_{\lambda}\right) =f_{\lambda} \] it follows that $\Phi$ is an isomorphism. Since $\mathcal{P}_{n\times k}$ is dense in $\mathcal{F}_{n\times k}$ Theorem \textup{3} \textup{(}p.\ \textup{150)} of \cite{Zel73} \textup{(}see also \cite{KlTo89}\textup{)} implies that we have the Hilbert sum $\mathcal{F}_{n\times k}=\smash[b]{\sum\limits_{\left( \lambda\right) }{\!{\oplus}\,}\mathcal{I}_{n\times k}^{\left( \lambda\right) }}$ for the pair $\left( \mathrm{U}\left( n\right) ,\mathrm{U}\left( k\right) \right) $.
Now suppose $k>2n$ and set $H=\mathrm{SO}\left( k\right) $, $H_{\mathbb{C}}=\mathrm{SO}_{k}\left( \mathbb{C}\right) $. Let $J_{n\times k}$ denote the ring of all $H$ \textup{(}or $H_{\mathbb{C}}$\textup{)}-invariant polynomials in $\mathcal{P}_{n\times k}$. Then $J_{n\times k}$ is generated by the constants and the $n\left( n+1\right) /2$ algebraically independent polynomials \begin{equation} p_{\alpha\beta}\left( Z\right) =\sum_{i=1}^{k}Z_{\alpha i}Z_{\beta i},\qquad1\leq\alpha\leq\beta\leq n. \label{eqFD.34} \end{equation} It follows that the ring of all $H$ \textup{(}or $H_{\mathbb{C}}$\textup{)}-invariant differential operators with constant coefficients is generated by the constants and the Laplacians \begin{equation} \bigtriangleup_{\alpha\beta}=p_{\alpha\beta}\left( D\right) =\sum_{i=1}^{k}\frac{\partial^{2}\;}{\partial Z_{\alpha i}\partial Z_{\beta i}},\qquad1\leq\alpha\leq\beta\leq n. \label{eqFD.35} \end{equation} The infinitesimal action of $R_{G_{\mathbb{C}}^{\prime}}^{\prime}$ is generated by \begin{equation} L_{\alpha\beta}=\sum_{i=1}^{k}Z_{\alpha i}\frac{\partial\;}{\partial Z_{\beta i}},\qquad1\leq\alpha,\beta\leq n. \label{eqFD.36} \end{equation} Set $P_{\alpha\beta}=-p_{\alpha\beta}$, $E_{\alpha\beta}=L_{\alpha\beta}+\frac{1}{2}k\delta_{\alpha\beta}$, and $D_{\alpha\beta}=\bigtriangleup_{\alpha\beta}$; then it follows from \cite{KLT95} \textup{(}see Eq.\ \textup{(3.3))} that $\left\{ E_{\alpha\beta},P_{\alpha\beta},D_{\alpha\beta}\right\} $ defines a faithful representation of $\mathrm{sp}_{2n}\left( \mathbb{R}\right) $ on $\mathcal{F}_{n\times k}$. By construction this representation is dual to the infinitesimal action of $R_{H}$. The global action $R_{H^{\prime}}^{\prime}$ is a unitary metaplectic representation of $\widetilde{\mathrm{Sp}_{2n}\left( \mathbb{R}\right) }$, the two-sheeted covering of $\mathrm{Sp}_{2n}\left( \mathbb{R}\right) $ \textup{(}see \cite{KLT95} for details\textup{).} As in the case of the pair $\left( \mathrm{U}\left( n\right) ,\mathrm{U}\left( k\right) \right) $ the common highest weight vector \textup{(}for $R_{H^{\prime}}^{\prime}$ the lowest $K^{\prime}$-type highest weight vector\textup{)} of signature $\left( \mu\right) =\left( \mu^{}_{1},\dots,\mu^{}_{n}\right) $ with $\mu^{}_{1}\geq\dots\geq\mu^{}_{n}\geq0$ and $\mu^{}_{i}\in\mathbb{N}$, $1\leq i\leq n$, of the pair $\left( \widetilde{\mathrm{Sp}_{2n}\left( \mathbb{R}\right) },\mathrm{SO}\left( k\right) \right) $ is \begin{equation} f_{\mu}\left( Z\right) =\Delta_{1}^{\mu^{}_{1}-\mu^{}_{2}}\left( Zq\right) \Delta_{2}^{\mu^{}_{2}-\mu^{}_{3}}\left( Zq\right) \cdots\Delta_{n}^{\mu^{}_{n}}\left( Zq\right) , \label{eqFD.37} \end{equation} where the $k\times k$ matrix $q$ is given by \[ q=\frac{1}{\sqrt{2}} \begin{bmatrix} \openone_{\nu} & \openone_{\nu}\\ i\openone_{\nu} & -i\openone_{\nu} \end{bmatrix} \text{\qquad if }k=2\nu, \] and \[ q=\frac{1}{\sqrt{2}} \begin{bmatrix} \openone_{\nu} & 0 & \openone_{\nu}\\ 0 & \sqrt{2} & 0\\ i\openone_{\nu} & 0 & -i\openone_{\nu} \end{bmatrix} \text{\qquad if }k=2\nu+1, \] and where $\openone_{\nu}$ is the unit matrix of order $\nu$.
An element $p$ of $\mathcal{P}_{n\times k}$ is called $H$\emph{-harmonic} if $\bigtriangleup_{\alpha\beta}p=0$ for all $\alpha,\beta=1,\dots,n$. Let $\mathcal{H}_{n\times k}$ denote the subspace of all $H$-harmonic polynomial functions of $\mathcal{P}_{n\times k}$ and let $\mathcal{H}_{n\times k}\left( \mu\right) $ denote the subspace of all elements $h$ of $\mathcal{H}_{n\times k}$ which also satisfy the covariant condition \begin{equation} h\left( b^{\prime}Z\right) =\left( b_{11}^{\prime}\right) ^{\mu^{}_{1} }\cdots\left( b_{nn}^{\prime}\right) ^{\mu^{}_{n}}h\left( Z\right) ,\qquad\forall\,b^{\prime}\in B_{n}^{\prime}. \label{eqFD.38} \end{equation} Then according to Theorem \textup{3.1} of \cite{Ton76a}, the representation $R_{H}$ of $H$ which is obtained by right translations on $\mathcal{H} _{n\times k}\left( \mu\right) $ is irreducible with signature $\left( \mu\right) $.
The infinitesimal action of $R_{H}$ is given by \begin{equation} R_{ij}^{H}=\sum_{\alpha=1,\dots,n}\left( Z_{\alpha i}\frac{\partial \;}{\partial Z_{\alpha j}}-Z_{\alpha j}\frac{\partial\;}{\partial Z_{\alpha i}}\right) ,\qquad1\leq i<j\leq k. \label{eqFD.39} \end{equation} {}From \cite{KLT95} the dual infinitesimal action of $R_{H}$ is given by the system $\left\{ E_{\alpha\beta},P_{\alpha\beta},D_{\alpha\beta}\right\} $ which satisfies the commutation relations \begin{equation}
\begin{cases} \left[ E_{\alpha\beta},E_{\mu\nu}\right] =\delta_{\beta\mu}E_{\alpha\nu }-\delta_{\alpha\nu}E_{\mu\beta}\\ \left[ E_{\alpha\beta},P_{\mu\nu}\right] =\delta_{\beta\mu}P_{\alpha\nu }+\delta_{\beta\nu}P_{\alpha\mu}\\ \left[ E_{\alpha\beta},D_{\mu\nu}\right] =-\delta_{\alpha\mu}D_{\beta \nu}-\delta_{\alpha\nu}D_{\beta\mu}\\ \left[ P_{\alpha\beta},D_{\mu\nu}\right] =\delta_{\alpha\mu}E_{\nu\beta }+\delta_{\alpha\nu}E_{\mu\beta}+\delta_{\beta\mu}E_{\nu\alpha}+\delta _{\beta\nu}E_{\mu\alpha}\\ \left[ P_{\alpha\beta},P_{\mu\nu}\right] =\left[ D_{\alpha\beta} ,D_{\mu\nu}\right] =0\\ P_{\alpha\beta}=P_{\beta\alpha},\qquad D_{\alpha\beta} =D_{\beta\alpha}\\ P_{\alpha\beta}^{\dag} =D_{\alpha\beta}^{{}},\qquad D_{\alpha\beta}^{\dag }=P_{\alpha\beta}^{{}},\qquad E_{\alpha\beta}^{\dag}=E_{\beta\alpha}^{{}}, \\ \text{\qquad for all }\alpha,\beta,\mu,\nu=1,\dots,n. \end{cases}
\label{eqFD.40} \end{equation} By Corollary \textup{3.11} of \cite{Ton76a} the $\mu$-isotypic component in $\mathcal{H}_{n\times k}$ consists of $d_{\mu}$ copies isomorphic to $\mathcal{H}_{n\times k}\left( \mu\right) $, where $d_{\mu}$ is the degree of an irreducible representation of $G^{\prime}=\mathrm{U}\left( n\right) $ of signature $\left( \mu^{}_{1},\dots,\mu^{}_{n}\right) $. Since from Eq.\ \textup{(\ref{eqFD.40})} and the fact that $f_{\mu}$ is $H$-harmonic \begin{align*} D_{\mu\nu}E_{\alpha\beta}f_{\mu} & =\left[ D_{\mu\nu},E_{\alpha\beta }\right] f_{\mu}+E_{\alpha\beta}D_{\mu\nu}f_{\mu}\\ & =\delta_{\alpha\mu}D_{\beta\nu}f_{\mu}+\delta_{\alpha\nu}D_{\beta\mu} f_{\mu}\\ & =0, \end{align*} it follows that $E_{\alpha\beta}f_{\mu}$ is $H$-harmonic for every $\alpha,\beta=1,\dots,n$. Since $\left[ E_{\alpha\beta}^{{}},R_{ij} ^{H}\right] =0$ for all $\alpha,\beta=1,\dots,n$ and $i,j=1,\dots,k$ it follows that $E_{\alpha\beta}\colon\mathcal{H}_{n\times k}\left( \mu\right) \rightarrow\mathcal{H}_{n\times k}$ are intertwining operators, and thus are either $0$ or isomorphisms. It follows that the $\mathfrak{g}^{\prime}$-module generated by the cyclic vector $f_{\mu}$ is irreducible with signature $\left( \mu_{1},\dots,\mu^{}_{n}\right) $. In fact, from Eq.\ \textup{(3.14)} of \cite{Ton76a} this space is a $G^{\prime}$-module. Let $G^{\prime}f_{\mu}$ denote this $G^{\prime}$-module; then by construction $G^{\prime}f_{\mu}\subset\mathcal{H}_{n\times k}$.
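In passing, one may check directly \textup{(}a small verification we include for the reader\textup{)} that the polynomials \textup{(\ref{eqFD.34})} are infinitesimally $H$-invariant: applying the generators \textup{(\ref{eqFD.39})} gives \begin{equation*} R_{ij}^{H}\,p_{\alpha\beta}=\left( Z_{\alpha i}Z_{\beta j}+Z_{\beta i}Z_{\alpha j}\right) -\left( Z_{\alpha j}Z_{\beta i}+Z_{\beta j}Z_{\alpha i}\right) =0,\qquad1\leq i<j\leq k. \end{equation*}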
If $h\in G^{\prime}f_{\mu}$ then from Eq.\ \textup{(\ref{eqFD.40})} we have \begin{align*} D_{\mu\nu}P_{\alpha\beta}h & =\left[ D_{\mu\nu},P_{\alpha\beta}\right] h+P_{\alpha\beta}D_{\mu\nu}h\\ & =-\left( \delta_{\alpha\mu}E_{\nu\beta}+\delta_{\alpha\nu}E_{\mu\beta}+\delta_{\beta\mu}E_{\nu\alpha}+\delta_{\beta\nu}E_{\mu\alpha}\right) h, \end{align*} and therefore $D_{\mu\nu}P_{\alpha\beta}h$ belongs to $G^{\prime}f_{\mu}$. It follows that $J_{n\times k}G^{\prime}f_{\mu}$ is an irreducible $\mathrm{sp}_{2n}\left( \mathbb{R}\right) $-module with signature $\left( \mu\right) $. Let $\mathcal{H}_{n\times k}^{\prime}\left( \mu\right) $ denote this module and let $\mathcal{I}_{n\times k}^{\left( \mu\right) }$ be the $H^{\prime}\times H$-cyclic module generated by $f_{\mu}$; then a proof similar to the case $\mathcal{I}_{n\times k}^{\left( \lambda\right) }$ shows that $\mathcal{H}_{n\times k}^{\prime}\left( \mu\right) \mathop{\hat {\otimes}} \mathcal{H}_{n\times k}^{{}}\left( \mu\right) $ is isomorphic to $\mathcal{I}_{n\times k}^{\left( \mu\right) }$. By the ``separation of variables theorem'' \textup{2.5} of \cite{Ton76a} and from the fact that $\mathcal{P}_{n\times k}$ is dense in $\mathcal{F}_{n\times k}$ it follows that the orthogonal direct sum decomposition $\mathcal{F}_{n\times k} =\sum\limits_{\left( \mu\right) }{\!{\oplus}\,}\mathcal{I}_{n\times k}^{\left( \mu\right) }$ holds. Therefore the reciprocity theorem \textup{\ref{ThmFD.2}} holds for these pairs $\left( G^{\prime},G\right) $ and $\left( H^{\prime},H\right) $ as well.
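For orientation, note \textup{(}this observation is ours; it is consistent with \textup{(\ref{eqFD.18})} and \textup{(\ref{eqFD.19}))} that for $n=1$ the present construction reduces to that of part \textup{1):} setting $X^{+}=-\frac{1}{2}P_{11}$, $X^{-}=\frac{1}{2}D_{11}$, and $E=E_{11}$, the relations \textup{(\ref{eqFD.40})} give \begin{equation*} \left[ X^{-},X^{+}\right] =-\tfrac{1}{4}\left[ D_{11},P_{11}\right] =\tfrac{1}{4}\left[ P_{11},D_{11}\right] =E_{11}=E, \end{equation*} recovering the $\mathrm{sl}_{2}\left( \mathbb{R}\right) $ triple of \textup{(\ref{eqFD.19}).}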
\noindent2) Let $k=2l$ and consider again the dual pair $\left( G^{\prime }=\mathrm{U}\left( n\right) ,G=\mathrm{U}\left( k\right) \right) $. Let $H=$\textrm{Sp}$\left( k\right) $; then $H_{\mathbb{C}}=$\textrm{Sp} $_{k}\left( \mathbb{C}\right) $. If $l\geq n\geq2$ then the theory of symplectic harmonic polynomials in \cite{Ton77} implies that the dual representation to the representation $R_{H}$ on $\mathcal{F}_{n\times k}$ is a representation of the group $\mathrm{SO}^{\ast}\left( 2n\right) =H^{\prime}$ whose infinitesimal action is given by Eq.\ \textup{(4.2)} of \cite{KLT95}. Using Theorem \textup{2.1} of \cite{Ton77} and the ``separation of variables theorem'' for this case we can show similarly that $\mathcal{F}_{n\times k}=\sum\limits_{\left( \mu\right) }{\!{\oplus}\,}\mathcal{I}_{n\times k}^{\left( \mu\right) }$ for this dual pair $\left( \mathrm{SO}^{\ast }\left( 2n\right) ,\mathrm{Sp}\left( k\right) \right) $. Thus the reciprocity theorem \textup{\ref{ThmFD.2}} holds again for these pairs $\left( G^{\prime},G\right) $ and $\left( H^{\prime},H\right) $.
\noindent3) The case of the dual pairs $\left( G^{\prime}=\mathrm{U}\left( p\right) \times\mathrm{U}\left( q\right) ,G=\mathrm{U}\left( k\right) \times\mathrm{U}\left( k\right) \right) $ and \linebreak $\left( H^{\prime}=\mathrm{U}\left( p,q\right) ,H=\mathrm{U}\left( k\right) \right) $ can be treated in a similar fashion using the results of \cite{Ton76b} and the infinitesimal action of $H^{\prime}$ on $\mathcal{F} _{n\times k}$ is given by Eq.\ \textup{(6.4)} of \cite{Ton95}. However, its generalization to the case $H=\mathrm{U}\left( \infty\right) $ in Section \textup{\ref{FID}} is quite delicate and requires a quite different embedding that we shall describe in detail below.
Let $p$ and $q$ be positive integers such that $p+q=n$. Let $k$ be an integer such that $k\geq2\max\left( p,q\right) $. Let $\left( \lambda\right) $ be a $q$-tuple of integers such that $\lambda_{1}\geq\lambda_{2}\geq\dots \geq\lambda_{q}\geq0$. Let $R_{\lambda}$ denote the representation of $\mathrm{GL}_{k}\left( \mathbb{C}\right) $ \textup{(}or $\mathrm{U}\left( k\right) $\textup{)} defined on $\mathcal{P}_{q\times k}^{\left( \lambda\right) }$ given by Eq.\ \textup{(\ref{eqFD.29})} and \textup{(\ref{eqFD.30})} with $n$ replaced by $q$. We define the \emph{contragredient} \textup{(}or \emph{dual\/}\textup{)} \emph{representation} of $R_{\lambda}$ as follows.
Let $s_{r}$ denote the $r\times r$ matrix with ones along the reverse diagonal and zero elsewhere: \[ \begin{pmatrix} \raisebox{-7pt}[0pt][0pt]{\rlap{\kern3pt\huge{$0$}}} & & 1\\ & \begin{picture}(0.5,1)(0.25,0.167) \multiput(1.25,1.25)(-0.25,-0.25){7} {\makebox(0,0){$\cdot$}} \end{picture} & \\ 1 & & \raisebox{1pt}[0pt][0pt]{\llap{\huge{$0$}\kern3pt}} \end{pmatrix} . \] If $W\in\mathbb{C}^{q\times k}$ let $\tilde{W}=s_{q}Ws_{k}$. Thus $\tilde{W}$ is of the form \begin{equation} \tilde{W}= \begin{bmatrix} W_{q,k} & \cdots & W_{q,1}\\ \vdots & & \vdots\\ W_{1,k} & \cdots & W_{1,1} \end{bmatrix} . \label{eqFD.41} \end{equation} Let $\mathcal{P}_{q\times k}^{\left( \lambda^{\checkmark}\right) }$ denote the subspace of all polynomial functions in $\tilde{W}$ which also satisfy the covariant condition \begin{equation} f\left( \tilde{b}^{\prime}\tilde{W}\right) =\lambda\left( b^{\prime }\right) f\left( \tilde{W}\right) \label{eqFD.42} \end{equation} for all $b^{\prime}\in B_{q}^{\prime}$, where $B_{q}^{\prime}$ is the lower triangular Borel subgroup of $\mathrm{GL}_{q}\left( \mathbb{C}\right) $, and $\tilde{b}^{\prime}=s_{q}b^{\prime}s_{q}$.
Define the representation $R_{\lambda^{\checkmark}}$ of $\mathrm{GL}_{k}\left( \mathbb{C}\right) $ \textup{(}or $\mathrm{U}\left( k\right) $\textup{)} on $\mathcal{P}_{q\times k}^{\left( \lambda^{\checkmark}\right) }$ by \begin{equation} \left[ R_{\lambda^{\checkmark}}\left( g\right) f\right] \left( \tilde{W}\right) =f\left( \tilde{W}s_{k}gs_{k}\right) ,\qquad g\in\mathrm{GL}_{k}\left( \mathbb{C}\right) . \label{eqFD.43} \end{equation} Then $R_{\lambda^{\checkmark}}$ is irreducible with signature $\left( \smash[b]{\underset{k}{\underbrace{0,\dots,0,-\lambda_{q},-\lambda_{q-1},\dots,-\lambda_{1}}}}\right) $ and \emph{lowest weight vector} \begin{equation} cf_{\lambda^{\checkmark}}\left( \tilde{W}\right) =c\Delta_{1}^{\lambda_{1}-\lambda_{2}}\left( \tilde{W}\right) \Delta_{2}^{\lambda_{2}-\lambda_{3}}\left( \tilde{W}\right) \cdots\Delta_{q}^{\lambda_{q}}\left( \tilde{W}\right) ,\qquad c\in\mathbb{C}^{\ast}, \label{eqFD.44} \end{equation} of weight $\left( -\lambda_{1},-\lambda_{2},\dots,-\lambda_{q},0,\dots,0\right) $.
Let $\mathcal{P}_{q\times k}^{\left( \lambda^{\checkmark}\right) ^{\prime}}$ denote the subspace of all polynomial functions in $\tilde{W}$ which also satisfy the covariant condition \begin{equation} f\left( \tilde{W}\tilde{b}\right) =\lambda\left( b\right) f\left( \tilde{W}\right) , \label{eqFD.45} \end{equation} where $\tilde{b}=s_{k}bs_{k}$, $b\in B_{k}^{t}$ \textup{(}it follows that $\tilde{b}$ is a lower triangular matrix of the form $\tilde{b}=\left( \begin{smallmatrix} b_{kk} & & \raisebox{-6pt}[0pt][0pt]{\llap{\Large{$0$}\kern2pt}}\\ & \raisebox{0pt}[10pt]{$\ddots$} & \\ \raisebox{2pt}[0pt][0pt]{\rlap{\kern2pt\Large{$\ast$}}} & & b_{11} \end{smallmatrix} \right) $\textup{).} Let $R_{\left( \lambda^{\checkmark}\right) ^{\prime} }^{\prime}$ denote the representation of $\mathrm{GL}_{q}\left( \mathbb{C}\right) $ \textup{(}or of $G^{\prime}=\mathrm{U}\left( q\right) $\textup{)} on $\mathcal{P}_{q\times k}^{\left( \lambda^{\checkmark}\right) ^{\prime}}$ defined by \begin{equation} \left[ R_{\left( \lambda^{\checkmark}\right) ^{\prime}}^{\prime}\left( g^{\prime}\right) f\right] \left( \tilde{W}\right) =f\left( s_{q}\left( g^{\prime}\right) ^{-1}s_{q}\tilde{W}\right) ,\qquad g^{\prime} \in\mathrm{GL}_{q}\left( \mathbb{C}\right) . \label{eqFD.46} \end{equation} Then $R_{\left( \lambda^{\checkmark}\right) ^{\prime}}^{\prime}$ is irreducible with highest weight $\left( \lambda^{\checkmark}\right) ^{\prime}$ and with \emph{lowest weight vector} given by $cf_{\lambda ^{\checkmark}}$, $c\in\mathbb{C}^{\ast}$, of weight $\left( -\lambda _{1},-\lambda_{2},\dots,-\lambda_{q}\right) $.
As in the case $\left( \lambda^{\prime}\right) \otimes\left( \lambda \right) $ it can be shown that $\mathcal{P}_{q\times k}^{\left( \lambda^{\checkmark}\right) ^{\prime}}\mathop{\hat{\otimes}}\mathcal{P} _{q\times k}^{\left( \lambda^{\checkmark}\right) }$ is isomorphic to $\mathcal{I}_{q\times k}^{\left( \lambda^{\checkmark}\right) }$ and we have the Hilbert sum decomposition $\mathcal{F}_{q\times k}=\smash[b]{ \sum \limits_{ \left( \lambda^{\checkmark}\right) }{\!{\oplus}\,} \mathcal{I}_{q\times k}^{ \left( \lambda^{\checkmark}\right) } }$ for the pair $\left( \mathrm{U}\left( q\right) ,\mathrm{U}\left( k\right) \right) $.
Now let $G=\mathrm{U}\left( k\right) \times\mathrm{U}\left( k\right) $ act on $\mathcal{F}_{n\times k}$ via the outer tensor product \begin{equation} \left[ R_{\mathrm{U}\left( k\right) }\mathop{\hat{\otimes}}R_{\mathrm{U}\left( k\right) ^{\checkmark}}\right] \left( g_{1},g_{2}\right) f\left( \begin{bmatrix} Z\\ \tilde{W} \end{bmatrix} \right) =f\left( \begin{bmatrix} Zg_{1}\\ \tilde{W}s_{k}g_{2}^{\checkmark}s_{k} \end{bmatrix} \right) , \label{eqFD.47} \end{equation} where $Z\in\mathbb{C}^{p\times k}$, $\tilde{W}\in\mathbb{C}^{q\times k}$, $p+q=n$, $g_{1},g_{2}\in\mathrm{U}\left( k\right) $. Then $G^{\prime}=\mathrm{U}\left( p\right) \times\mathrm{U}\left( q\right) $ acts on $\mathcal{F}_{n\times k}$ via the outer tensor product \begin{equation} \left[ R_{\mathrm{U}\left( p\right) }^{\prime}\mathop{\hat{\otimes}}R_{\mathrm{U}\left( q\right) ^{\checkmark}}^{\prime}\right] \left( g_{1}^{\prime},g_{2}^{\prime}\right) f\left( \begin{bmatrix} Z\\ \tilde{W} \end{bmatrix} \right) =f\left( \begin{bmatrix} \left( g_{1}^{\prime}\right) ^{t}Z\\ s_{q}\left( g_{2}^{\prime}\right) ^{-1}s_{q}\tilde{W} \end{bmatrix} \right) , \label{eqFD.48} \end{equation} where $\left( g_{1}^{\prime},g_{2}^{\prime}\right) \in\mathrm{U}\left( p\right) \times\mathrm{U}\left( q\right) $.
It follows that we have the isotypic decomposition for the dual pairs $\left( G^{\prime},G\right) $ \begin{equation} \mathcal{F}_{n\times k}=\sum\limits_{\left( \nu\right) \otimes\left( \lambda^{\checkmark}\right) }{\!{\oplus}\,}\mathcal{I}_{n\times k}^{\left( \nu\right) \otimes\left( \lambda^{\checkmark}\right) }, \label{eqFD.49} \end{equation} where $\mathcal{I}_{n\times k}^{\left( \nu\right) \otimes\left( \lambda^{\checkmark}\right) }$ is isomorphic to $\mathcal{I}_{p\times k}^{\left( \nu\right) }\otimes\mathcal{I}_{q\times k}^{\left( \lambda^{\checkmark}\right) }$.
Let $H=\left\{ \left( g,g\right) :g\in\mathrm{U}\left( k\right) \right\} $; then $H$ is isomorphic to $\mathrm{U}\left( k\right) $ and $H$ acts on $\mathcal{F}_{n\times k}$ via the inner \textup{(}or Kronecker\textup{)} tensor product $R_{H}=R_{\mathrm{U}\left( k\right) }\otimes R_{\mathrm{U}\left( k\right) ^{\checkmark}}$. Let $J_{n\times k}$ denote the ring of all $H$ \textup{(}or $H_{\mathbb{C}}\approx\mathrm{GL}_{k}\left( \mathbb{C}\right) $\textup{)}-invariant polynomials in $\mathcal{P}_{n\times k}$. Then from \cite{Ton76b} and \cite{Ton95} $J_{n\times k}$ is generated by the constants and the $p\times q$ algebraically independent polynomials \begin{equation} p_{\alpha\beta}\left( \begin{bmatrix} Z\\ \tilde{W} \end{bmatrix} \right) =\left( Zs_{k}\tilde{W}^{t}\right) _{\alpha\beta}=\sum_{i=1}^{k}Z_{\alpha i}W_{\beta i},\qquad1\leq\alpha\leq p,\;1\leq\beta\leq q. \label{eqFD.50} \end{equation} It follows that the ring of all $H$ \textup{(}or $H_{\mathbb{C}}$\textup{)}-invariant differential operators with constant coefficients is generated by the constants and the Laplacians \begin{equation} \bigtriangleup_{\alpha\beta}=p_{\alpha\beta}\left( D\right) =\sum_{i=1}^{k}\frac{\partial^{2}\;}{\partial Z_{\alpha i}\partial W_{\beta i}},\qquad1\leq\alpha\leq p,\;1\leq\beta\leq q. \label{eqFD.51} \end{equation} Together with the infinitesimal action of $\mathrm{GL}_{n}\left( \mathbb{C}\right) $ on $\mathcal{F}_{n\times k}$ the $p_{\alpha\beta}$'s and $\bigtriangleup_{\alpha\beta}$'s generate a Lie algebra isomorphic to $\mathrm{su}\left( p,q\right) $ with commutation relations given by Eq.\ \textup{(6.4)} in \cite{Ton95}. The global action of this infinitesimal action defines a representation $R_{H^{\prime}}^{\prime}$ of $H^{\prime}=\mathrm{SU}\left( p,q\right) $ on $\mathcal{F}_{n\times k}$ which is dual to the representation $R_{H}$.
An element $p$ of $\mathcal{P}_{n\times k}$ is called $H$\emph{-harmonic} if $\bigtriangleup_{\alpha\beta}p=0$ for all $\alpha=1,\dots,p$, and $\beta=1,\dots,q$. Let $\mathcal{H}_{n\times k}$ denote the subspace of all $H$-harmonic polynomial functions of $\mathcal{P}_{n\times k}$ and let $\mathcal{H}_{n\times k}\left( \mu\right) $ denote the subspace of $\mathcal{H}_{n\times k}$ generated by the elements $f\in\mathcal{P}_{p\times k}^{\left( \nu\right) }\otimes\mathcal{P}_{q\times k}^{\left( \lambda^{\checkmark}\right) }$ which also satisfy the condition $\bigtriangleup_{\alpha\beta}f=0$, $1\leq\alpha\leq p$, $1\leq\beta\leq q$. Let $R_{H}^{\left( \mu\right) }$, $\mu=\left( \nu\right) \otimes\left( \lambda^{\checkmark}\right) $, denote the representation of $H$ on $\mathcal{H}_{n\times k}\left( \mu\right) $ defined by \begin{equation} \left[ R_{H}^{\left( \mu\right) }\left( g\right) f\right] \left( \begin{bmatrix} Z\\ \tilde{W} \end{bmatrix} \right) =f\left( \begin{bmatrix} Zg\\ \tilde{W}s_{k}g^{\checkmark}s_{k} \end{bmatrix} \right) , \label{eqFD.52} \end{equation} for all $g\in H$. Then Theorem \textup{5.2} of \cite{Ton95} implies that:
\emph{The representation }$R_{H}^{\left( \mu\right) }$\emph{ of }$H\approx\mathrm{U}\left( k\right) $\emph{ on }$\mathcal{H}_{n\times k}\left( \mu\right) $\emph{ is an irreducible unitary representation of class }$\left( \mu\right) $\emph{ which has signature} \begin{equation} \left( \mu\right) =(\underset{k}{\underbrace{\nu_{1},\dots,\nu_{p},0,\dots,0,-\lambda_{q},\dots,-\lambda_{1}}}), \label{eqFD.53} \end{equation} where in Eq.\ \textup{(\ref{eqFD.53})} $\nu_{\alpha}$, $1\leq\alpha\leq p$, and $\lambda_{\beta}$, $1\leq\beta\leq q$, are integers such that $\nu_{1}\geq\dots\geq\nu_{p}\geq0$ and $\lambda_{1}\geq\dots\geq\lambda_{q}\geq0$. Let $f_{\mu}\left( \begin{bmatrix} Z\\ \tilde{W} \end{bmatrix} \right) =f_{\nu}\left( Z\right) f_{\lambda^{\checkmark}}\left( \tilde{W}\right) $, where $f_{\nu}$ is given by Eq.\ \textup{(\ref{eqFD.30})} with $\nu$ replacing $\lambda$ and $f_{\lambda^{\checkmark}}$ is given by Eq.\ \textup{(\ref{eqFD.44}).} Let $\mathcal{I}_{n\times k}^{\left( \mu\right) }$ be the $H^{\prime}\times H$-cyclic module generated by $f_{\mu}$; then a proof similar to the previous cases shows that $\mathcal{H}_{n\times k}^{\prime}\left( \mu\right) \mathop{\hat{\otimes}}\mathcal{H}_{n\times k}^{{}}\left( \mu\right) $ is isomorphic to $\mathcal{I}_{n\times k}^{\left( \mu\right) }$. By the ``separation of variables theorem'' \textup{1.5} of \cite{Ton76b} and Theorem \textup{5.1} of \cite{Ton95} it follows that the orthogonal direct sum decomposition $\mathcal{F}_{n\times k}=\sum\limits_{\left( \mu\right) }{\!{\oplus}\,}\mathcal{I}_{n\times k}^{\left( \mu\right) }$ holds. Therefore the reciprocity theorem \textup{\ref{ThmFD.2}} also holds for these pairs $\left( G^{\prime},G\right) $ and $\left( H^{\prime},H\right) $.
\noindent4) This example is a generalization of the previous example. Consider $r$ copies of one of the following groups: $\mathrm{U}\left( k\right) $, $\mathrm{SO}\left( k\right) $, or \textrm{Sp}$\left( k\right) $, with $k$ even for the last, and let each of them act on a Bargmann--Segal--Fock space $\mathcal{F}_{p_{i}\times k}$, $1\leq i\leq r$, by right translations. Let $p_{1}+p_{2}+\dots+p_{r}=n$, and let $G$ denote the direct product of $r$ copies of each type of group. In the case of $\mathrm{U}\left( k\right) $ we allow the $r^{\text{th}}$ copy to act on $\mathcal{F}_{p_{r}\times k}$ either directly or contragrediently; for the other cases it is not necessary to consider the contragredient representations since they are identical to the direct representations.
On each $\mathcal{F}_{p_{i}\times k}$ for the $\mathrm{U}\left( k\right) $ action we have the dual action of $\mathrm{U}\left( p_{i}\right) $ by left translations, possibly with the dual \textup{(}left\textup{)} contragredient representation in the case $i=r$. For $\mathrm{SO}\left( k\right) $ we have the metaplectic representation of $\widetilde {\mathrm{Sp}_{2p_{i}}}\left( \mathbb{R}\right) $, and for \textrm{Sp} $\left( k\right) $ we have the corresponding representation of $\mathrm{SO}^{\ast}\left( 2p_{i}\right) $. Let $G^{\prime}$ denote the dual group of $G$ thus obtained. Let $H$ denote the diagonal subgroup of $G$; then in the case of $\mathrm{U}\left( k\right) $ an element of $H$ is of the form $(\underset{r}{\underbrace{u,u,\dots,u}})$ or $(\underset{r-1}{\underbrace {u,\dots,u},\bar{u}})$, $u\in\mathrm{U}\left( k\right) $, and in the other cases an element of $H$ is of the form $(\underset{r}{\underbrace{u,u,\dots,u}})$, $u\in\mathrm{SO}\left( k\right) $ or $u\in$\textrm{Sp}$\left( k\right) $. Let $H^{\prime}$ denote the dual group of $H$ thus obtained. Then $H^{\prime}$ is isomorphic in each case to $\mathrm{U}\left( n\right) $, $\widetilde{\mathrm{Sp}_{2n}}\left( \mathbb{R}\right) $, or $\mathrm{SO} ^{\ast}\left( 2n\right) $. As in the previous examples it is straightforward to verify that the reciprocity theorem \textup{\ref{ThmFD.2}} holds for these pairs $\left( G^{\prime},G\right) $ and $\left( H^{\prime},H\right) $.
\end{examples}
\section{\label{FID}Reciprocity Theorems for Finite-Infinite Dimensional Dual Pairs of Groups}
Let $\mathcal{H}$ be an infinite-dimensional separable complex Hilbert space with a fixed basis $\left\{ e_{1},e_{2},\dots,e_{k},\dots\right\} $. Let $\mathrm{GL}_{k}\left( \mathbb{C}\right) $ denote the group of all invertible bounded linear operators on $\mathcal{H}$ which leave the vectors $e_{n}$, $n>k$, fixed. We define $\mathrm{GL}_{\infty}\left( \mathbb{C}\right) $ as the \emph{inductive limit} of the ascending chain of subgroups \[ \mathrm{GL}_{1}\left( \mathbb{C}\right) \subset\dots\subset\mathrm{GL} _{k}\left( \mathbb{C}\right) \subset\cdots. \] Thus \begin{multline*} \mathrm{GL}_{\infty}\left( \mathbb{C}\right) =\{A=\left( a_{ij}\right) ,\;i,j\in\mathbb{N}\mid A\text{ is invertible }\\ \text{and all but a finite number of }a_{ij}-\delta_{ij}\text{ are }0\}. \end{multline*} If for each $k$ we have a Lie subgroup $G_{k}$ of $\mathrm{GL}_{k}\left( \mathbb{C}\right) $ such that $G_{k}$ is naturally embedded in $G_{k+1}$, $k=1,2,\dots$, then we can define the \emph{inductive limit} $G_{\infty }=\varinjlim G_{k}=\bigcup_{k=1}^{\infty}G_{k}$. For example, $\mathrm{U} \left( \infty\right) =\left\{ u\in\mathrm{GL}_{\infty}\left( \mathbb{C}\right) :u^{\ast}=u^{-1}\right\} $, and thus $\mathrm{U}\left( \infty\right) $ is the inductive limit of the groups $\mathrm{U}_{k}$ of all unitary operators of $\mathcal{H}$ which leave the vectors $e_{n}$, $n>k$, fixed.
Following Ol'shanskii we call a unitary representation of $G_{\infty}$ \emph{tame} if it is continuous in the group topology in which the ascending chain of subgroups of type $\left\{ \begin{pmatrix} 1_{k} & 0\\ 0 & \ast \end{pmatrix} \right\} $, $k=1,2,3,\dots$, constitutes a fundamental system of neighborhoods of the identity $1_{\infty}$. Assume that for each $k$ a continuous unitary representation $\left( R_{k},\mathcal{H}_{k}\right) $ is given and an isomorphic embedding $i_{k+1}^{k}\colon\mathcal{H}_{k} \rightarrow\mathcal{H}_{k+1}$ commuting with the action of $G_{k}$ (i.e., $i_{k+1}^{k}\circ R_{k}\left( g\right) =R_{k+1}\left( g\right) \circ i_{k+1}^{k}$) is given. For $j\leq k$ define the \emph{connecting map} $\varphi_{jk}\colon G_{j}\times\mathcal{H}_{j}\rightarrow G_{k}\times \mathcal{H}_{k}$ by \begin{equation} \varphi_{jk}\left( g_{j},x_{j}\right) =\left( g_{k},x_{k}\right) ,\qquad\left( g_{j},x_{j}\right) \in G_{j}\times\mathcal{H}_{j}, \label{eqFID.1} \end{equation} where in Eq.\ (\ref{eqFID.1}) $g_{k}$ (resp.\ $x_{k}$) denotes the natural embedding of $g_{j}$ (resp.\ $x_{j}$) in $G_{k}$ (resp.\ $\mathcal{H}_{k}$). Then obviously the diagram \begin{equation} \begin{array}[c]{ccc} G_{j}\times\mathcal{H}_{j} & \overset{R_{j}}{\longrightarrow} & \mathcal{H}_{j}\\ {\scriptstyle\varphi_{jk}}\big\downarrow & & \big\downarrow{\scriptstyle i_{k}^{j}=i_{k}^{k-1}\circ\dots\circ i_{j+2}^{j+1}\circ i_{j+1}^{j}}\\ G_{k}\times\mathcal{H}_{k} & \overset{R_{k}}{\longrightarrow} & \mathcal{H}_{k} \end{array} \label{eqFID.2} \end{equation} is commutative. Let $\mathcal{H}_{\infty}$ denote the Hilbert-space completion of $\bigcup_{k=1}^{\infty}\mathcal{H}_{k}$ and define a representation $R_{\infty}$ of $G_{\infty}$ on $\bigcup_{k=1}^{\infty}\mathcal{H}_{k}$ by \begin{equation} R_{\infty}\left( g\right) x=R_{k}\left( g\right) x\text{\qquad if }g\in G_{k}\text{ and }x\in\mathcal{H}_{k}. \label{eqFID.3} \end{equation} Then $R_{\infty}$ is the unique continuous unitary representation of $G_{\infty}$ on $\bigcup_{k=1}^{\infty}\mathcal{H}_{k}$ with this property, and it extends to a unique continuous unitary representation of $G_{\infty}$ on $\mathcal{H}_{\infty}$. Let $\varphi_{k}$ denote the canonical map of $\left( G_{k},\mathcal{H}_{k}\right) $ into $\left( G_{\infty},\mathcal{H}_{\infty }\right) $ and $i_{k}$ denote the canonical map of $\mathcal{H}_{k}$ into $\mathcal{H}_{\infty}$; then obviously the diagram \begin{equation} \begin{array}[c]{ccc} G_{k}\times\mathcal{H}_{k} & \overset{R_{k}}{\longrightarrow} & \mathcal{H}_{k}\\ {\scriptstyle\varphi_{k}}\big\downarrow & & \big\downarrow{\scriptstyle i_{k}}\\ G_{\infty}\times\mathcal{H}_{\infty} & \overset{R_{\infty}}{\longrightarrow} & \mathcal{H}_{\infty} \end{array} \label{eqFID.4} \end{equation} is commutative.
The following theorem, which is well-known when $i_{k+1}^{k}$ is an isometric embedding (see, e.g., \cite{Ol'90}), is crucial for what follows.
\begin{theorem} \label{ThmFID.1}If the representations $\left( R_{k},\mathcal{H}_{k}\right) $ are all irreducible then the inductive limit representation $\left( R_{\infty},\mathcal{H}_{\infty}\right) $ is also irreducible. \end{theorem}
\begin{proof} Let $A$ be a bounded operator on $\mathcal{H}_{\infty}$ which belongs to the commutant of the algebra of operators generated by the set $\left\{ R_{\infty}\left( g\right) ,g\in G_{\infty}\right\} $. Since $\bigcup _{k=1}^{\infty}\mathcal{H}_{k}$ is dense in $\mathcal{H}_{\infty}$ and all the linear operators involved are continuous, we can without loss of generality consider them as operating on $\bigcup_{k=1}^{\infty}\mathcal{H}_{k}$ and satisfying $A\left( i_{l}^{k}\left( x\right) \right) =Ax$ for $k\leq l$ and for all $x\in\mathcal{H}_{k}$. Let $P_{k}$ denote the projection of $\bigcup_{n=1}^{\infty}\mathcal{H}_{n}$ onto $\mathcal{H}_{k}$. Let $A_{k}$ denote the restriction of $A$ to $\mathcal{H}_{k}$; then $A_{k}$ is a bounded linear operator of $\mathcal{H}_{k}$ into $\bigcup_{n=1}^{\infty} \mathcal{H}_{n}$. It follows immediately that $P_{k}A_{k}\colon\mathcal{H} _{k}\rightarrow\mathcal{H}_{k}$ is a bounded linear operator on $\mathcal{H} _{k}$. Let $x\in\mathcal{H}_{k}$ and suppose $A_{k}x=Ax$ belongs to $\mathcal{H}_{l}$. If $l\leq k$ we may use the isomorphic embedding $i_{k} ^{l}=i_{k}^{k-1}\circ\dots\circ i_{l+1}^{l}\colon\mathcal{H}_{l} \rightarrow\mathcal{H}_{k}$ to identify $Ax$ with an element of $\mathcal{H} _{k}$ so that $P_{k}A_{k}x=A_{k}x=Ax$, and thus \[ R_{k}\left( g_{k}\right) P_{k}A_{k}x=R_{\infty}\left( g_{k}\right) Ax=AR_{\infty}\left( g_{k}\right) x=P_{k}A_{k}R_{k}\left( g_{k}\right) x,\qquad\forall\,g_{k}\in G_{k}. \] If $l>k$ then use $i_{l}^{k}$ to identify $\mathcal{H}_{k}$ with a subspace of $\mathcal{H}_{l}$. Write $Ax=y+z$ where $y$ belongs to the identified subspace of $\mathcal{H}_{k}$ and $z$ belongs to its orthogonal complement in $\mathcal{H}_{l}$. Since all representations are unitary and for $g_{k}\in G_{k}$ we have $i_{l}^{k}\circ R_{k}\left( g_{k}\right) =R_{l}\left( g_{k}\right) \circ i_{l}^{k}$, it follows that \[ P_{k}R_{\infty}\left( g_{k}\right) A_{k}x=P_{k}R_{k}\left( g_{k}\right) y=R_{\infty}\left( g_{k}\right) P_{k}Ax. \] By assumption $R_{\infty}\left( g_{k}\right) Ax=AR_{\infty}\left( g_{k}\right) x$, therefore \begin{multline*} R_{k}\left( g_{k}\right) P_{k}A_{k}x=R_{\infty}\left( g_{k}\right) P_{k}Ax\\ =P_{k}R_{\infty}\left( g_{k}\right) A_{k}x=P_{k}A_{k}R_{\infty}\left( g_{k}\right) x=P_{k}A_{k}R_{k}\left( g_{k}\right) x. \end{multline*} Since this relation holds for all $x\in\mathcal{H}_{k}$ and $g_{k}\in G_{k}$ it follows that $P_{k}A_{k}$ belongs to the commutant of the algebra of operators on $\mathcal{H}_{k}$ generated by the set $\left\{ R_{k}\left( g_{k}\right) ,g_{k}\in G_{k}\right\} $. Schur's lemma for operator algebras (see, e.g., \cite[Proposition 2.3.1, p.~39]{Dix69}) implies that $P_{k} A_{k}=\lambda_{k}I_{k}$, where $\lambda_{k}$ is a scalar depending on $k$ and $I_{k}$ is the identity operator on $\mathcal{H}_{k}$. Now $A$ is a map of inductive limit sets such that $P_{k}A_{k}\colon\mathcal{H}_{k}\rightarrow \mathcal{H}_{k}$, and it follows from the definition of an inductive limit map that $\lambda_{k}=\lambda_{l}$ for sufficiently large $k$, $l$ with $k<l$. Indeed, if $x\in\mathcal{H}_{k}$ and $A_{k}x=Ax\in\mathcal{H}_{j}$ with $j\leq k$ then $P_{k}A_{k}x=i_{k}^{j}\left( Ax\right) =\lambda_{k}x$. 
For $l>k$ we then have \begin{multline*} \lambda_{l}^{{}}i_{l}^{k}\left( x\right) =P_{l}A_{l}\left( i_{l}^{k}\left( x\right) \right) =P_{l}A\left( i_{l}^{k}\left( x\right) \right) =P_{l}Ax\\ =i_{l}^{j}\left( Ax\right) =i_{l}^{k}\left( i_{k}^{j}\left( Ax\right) \right) =i_{l}^{k}\left( P_{k}A_{k}\left( x\right) \right) =\lambda _{k}^{{}}i_{l}^{k}\left( x\right) . \end{multline*} On the other hand, if $Ax\in\mathcal{H}_{j}$ with $j>k$ then for all $l\geq j$ we have \begin{multline*} P_{l}A_{l}\left( i_{l}^{k}\left( x\right) \right) =P_{l}A\left( i_{l} ^{k}\left( x\right) \right) =P_{l}Ax=P_{l}A_{j}\left( i_{j}^{k}\left( x\right) \right) \\ =P_{l}P_{j}A_{j}\left( i_{j}^{k}\left( x\right) \right) =P_{l}\left( \lambda_{j}^{{}}i_{j}^{k}\left( x\right) \right) =i_{l}^{j}\left( \lambda_{j}^{{}}i_{j}^{k}\left( x\right) \right) =\lambda_{j}^{{}}i_{l} ^{j}\left( i_{j}^{k}\left( x\right) \right) =\lambda_{j}^{{}}i_{l} ^{k}\left( x\right) . \end{multline*} Since $P_{l}A_{l}\left( i_{l}^{k}\left( x\right) \right) =\lambda_{l}^{{} }i_{l}^{k}\left( x\right) $, we must have $\lambda_{j}=\lambda_{l}$ for all $l\geq j$. This implies that $A=\lambda I_{\infty}$ where $\lambda \in\mathbb{C}$ is a constant and $I_{\infty}$ is the identity on $\mathcal{H}_{\infty}$. By the same Schur's lemma quoted above the representation $R_{\infty}$ on $\mathcal{H}_{\infty}$ must be irreducible. \end{proof}
Now fix $n$ and consider the chain of Hilbert spaces $\mathcal{F}_{n\times k}$ from Section \ref{FD} with $k>2n$. Let $\left( G_{n}^{\prime},G_{k}^{{} }\right) $ denote a dual pair of groups with dual representations $\left( R_{n}^{\prime},R_{k}^{{}}\right) $ acting on $\mathcal{F}_{n\times k}$ as in Theorem \ref{ThmFD.2}. Then we have the chain of embedded subgroups $G_{k}\subset G_{k+1}\subset\cdots$; for example, $\mathrm{U}\left( k\right) $ is naturally embedded in $\mathrm{U}\left( k+1\right) $ via the embedding $u\mapsto\left( \begin{smallmatrix} u & 0\\ 0 & 1 \end{smallmatrix} \right) $, $u\in\mathrm{U}\left( k\right) $. Therefore we can define the inductive limit $G_{\infty}=\varinjlim G_{k}=\bigcup_{k>2n}G_{k}$. We also have an isometric embedding $i_{k+1}^{k}\colon\mathcal{F}_{n\times k}\rightarrow\mathcal{F}_{n\times\left( k+1\right) }$ such that \[ i_{k+1}^{k}\circ R_{k}\left( g\right) =R_{k+1}\left( g\right) \circ i_{k+1}^{k}. \] To see this we take the case $n=1$: then an element $f$ of $\mathcal{F} _{n\times k}$ is a function of $Z=\left( Z_{1},\dots,Z_{k}\right) $ of the form given by Eq.\ (\ref{eqFD.7}), and the verification of the equation above is straightforward. Let $\mathcal{F}_{n\times\infty}$ denote the Hilbert-space completion of $\bigcup_{k>2n}\mathcal{F}_{n\times k}$. Then it is clear that the inductive limit representation $R_{\infty}$ of $G_{\infty}$ on $\mathcal{F}_{n\times\infty}$ is tame and satisfies the relations (\ref{eqFID.2}), (\ref{eqFID.3}), and (\ref{eqFID.4}).
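To spell out the verification for $n=1$ (an elementary check which we include for completeness), let $g\in\mathrm{U}\left( k\right) $ be embedded in $\mathrm{U}\left( k+1\right) $ as above; since the embedded $g$ fixes $Z_{k+1}$ and $i_{k+1}^{k}f$ does not depend on $Z_{k+1}$,
\[
\left[ i_{k+1}^{k}\circ R_{k}\left( g\right) f\right] \left( Z_{1},\dots,Z_{k+1}\right) =f\left( \left( Z_{1},\dots,Z_{k}\right) g\right) =\left[ R_{k+1}\left( g\right) \circ i_{k+1}^{k}f\right] \left( Z_{1},\dots,Z_{k+1}\right) .
\]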
If $G_{k}$ is a compact group then every irreducible unitary representation of $G_{k}$ is of the form $\left( \rho_{\lambda_{k}},V_{\lambda_{k}}\right) $ with highest weight $\left( \lambda_{k}\right) =\left( m_{1},m_{2} ,\dots,m_{i},\dots\right) $, where $m_{1},m_{2},\dots$ are nonnegative integers satisfying $m_{1}\geq m_{2}\geq\cdots$ and the numbers $m_{i}$ are equal to $0$ for sufficiently large $i$. Consider the decomposition (\ref{eqFD.5}) of Definition \ref{DefFD.1} of the dual module $\mathcal{F} _{n\times k}$ into isotypic components \[ \mathcal{F}_{n\times k}=\sum\limits_{\makebox[0pt]{\hss$\scriptstyle \left( \lambda_{k}\right) $\hss}}{\!{\oplus}\,}\mathcal{I}_{n\times k}^{\left( \lambda_{k}\right) } \] where the signatures $\left( \lambda_{k}\right) $ actually depend essentially on $n$; but since $n$ is fixed, to alleviate the notation we leave this dependence tacit. Also, for $k$ sufficiently large, if $\left( \lambda_{k}\right) =\left( m_{1},\dots,m_{i},\dots\right) $ then $\left( \lambda_{k+1}\right) $ is obtained from $\left( \lambda_{k}\right) $ by appending a zero, and we write succinctly $\left( \lambda_{k}\right) \subset\left( \lambda _{k+1}\right) $.
For sufficiently large $k$ we can exhibit an isomorphic embedding $i_{k+1} ^{k}\colon\mathcal{I}_{n\times k}^{\left( \lambda_{k}\right) } \rightarrow\mathcal{I}_{n\times\left( k+1\right) }^{\left( \lambda _{k+1}\right) }$. If $H_{k}$ is a subgroup of $G_{k}$ such that $H_{n}^{\prime}$ contains $G_{n}^{\prime}$ and $\left( H_{n}^{\prime} ,H_{k}^{{}}\right) $ forms a dual pair then the same process can be repeated for the chain $\left( H_{n}^{\prime},H_{k}^{{}}\right) \subset\left( H_{n}^{\prime},H_{k+1}^{{}}\right) \subset\cdots$. If $G_{k}$ (or $H_{k}$) is of the type $\underset{r}{\underbrace{\mathrm{U}\left( k\right) \times \dots\times\mathrm{U}\left( k\right) }}$ then each $i_{k+1}^{k}$ is an isometric embedding; for other types of $G_{k}$ (or $H_{k}$) the definition of $i_{k+1}^{k}$ is more subtle. This can be examined case by case although the process is very tedious. To illustrate this we consider the case $\mathcal{F}_{1\times k}$ with $H_{k}=\mathrm{SO}\left( k\right) $ and $H_{1}^{\prime}=\widetilde{\mathrm{Sp}_{2}\left( \mathbb{R}\right) }=\widetilde {\mathrm{SL}_{2}\left( \mathbb{R}\right) }$. Then Eq.\ (\ref{eqFD.22}) and Eq.\ (\ref{eqFD.23}) imply that \[ \mathcal{F}_{1\times k}=\sum\limits_{\makebox[0pt]{\hss$\scriptstyle r=0$\hss}}^{\infty}{\!{\oplus}\,}\mathcal{I}_{1\times k}^{\left( r\right) _{k}}\text{\qquad with\qquad}\mathcal{I}_{1\times k}^{\left( r\right) _{k}}= \sum\limits_{\makebox[0pt]{\hss$\scriptstyle j=0$\hss}}^{\infty}{\!{\oplus}\,}p_{0,k}^{j}\mathcal{H}_{1\times k}^{\left( r\right) _{k}} \] where $p_{0,k}\left( Z\right) =Z_{1}^{2}+\dots+Z_{k}^{2}$, $\left( r\right) _{k}=(\underset{k}{\underbrace{r,0,\dots,0}})$, and $\mathcal{H} _{1\times k}^{\left( r\right) _{k}}$ is the subspace of all harmonic homogeneous polynomials of degree $r$. Obviously a harmonic homogeneous polynomial $h$ of degree $r$ in $k$ variables can be considered as a harmonic homogeneous polynomial of degree $r$ in $k+1$ variables. So we can define an isomorphic embedding $i_{k+1}^{k}\colon\mathcal{I}_{1\times k}^{\left( r\right) _{k}}\rightarrow\mathcal{I}_{1\times\left( k+1\right) }^{\left( r\right) _{k+1}}$ by sending $p_{0,k}^{j}h$ into $p_{0,\left( k+1\right) }^{j}h$, and clearly \begin{multline*} R_{H}\left( u_{k}\right) \left( p_{0,\left( k+1\right) }^{j}h\right) =p_{0,\left( k+1\right) }^{j}R_{H}\left( u_{k}\right) h=i_{k+1}^{k} p_{0,k}^{j}R_{H}\left( u_{k}\right) h\\ =i_{k+1}^{k}\left( \left( R_{H}\left( u_{k}\right) p_{0,k}^{j}\right) \left( R_{H}\left( u_{k}\right) h\right) \right) =i_{k+1}^{k}\left( R_{H}\left( u_{k}\right) \left( p_{0,k}^{j}h\right) \right) \end{multline*} for all $u_{k}\in H_{k}$. Thus, $R_{H}\left( u_{k}\right) \circ i_{k+1} ^{k}=i_{k+1}^{k}\circ R_{H}\left( u_{k}\right) $ for all $u_{k}\in H_{k}$. It follows that $i_{k+1}^{k}$ can be extended to the whole space $\mathcal{F}_{1\times k}$ and that $i_{k+1}^{k}\left( \mathcal{F}_{1\times k}\right) =\sum\limits_{\makebox[0pt]{\hss$\scriptstyle r=0$\hss}}^{\infty}{\!{\oplus}\,}i_{k+1}^{k}\left( \mathcal{I}_{1\times k}^{\left( r\right) _{k}}\right) $ is an isomorphic embedding of $\mathcal{F}_{1\times k}$ into $\mathcal{F}_{1\times\left( k+1\right) }$. Also note that in this very special case $\left( r\right) _{k}\subset\left( r\right) _{k+1}$ for all $k>2$ and that no signature $\left( r\right) _{k+1}$ occurs in $\mathcal{F}_{1\times\left( k+1\right) }$ without $\left( r\right) _{k}$ occurring in $\mathcal{F}_{1\times k}$; this is exceptional and almost never happens in the general case (e.g., for $n\geq2$). 
By Theorem \ref{ThmFID.1} the tensor product representations $R_{G_{n}^{\prime} }^{\left( \lambda^{\prime}\right) }\otimes R_{G_{\infty}}^{\left( \lambda\right) }$ and $R_{H_{n}^{\prime}}^{\left( \mu^{\prime}\right) }\otimes R_{H_{\infty}}^{\left( \mu\right) }$ of $G_{n}^{\prime}\times G_{\infty}^{{}}$ and $H_{n}^{\prime}\times H_{\infty}^{{}}$ on $\mathcal{I} _{n\times\infty}^{\left( \lambda\right) }$ and $\mathcal{I}_{n\times\infty }^{\left( \mu\right) }$, respectively, are irreducible with signatures $\left( \lambda\right) _{\infty}$ and $\left( \mu\right) _{\infty}$, respectively, where if $\left( \lambda_{k}\right) =\left( m_{1},m_{2} ,\dots,m_{i},\dots\right) $ then $\left( \lambda\right) _{\infty }=\left( m_{1},m_{2},\dots,m_{i},\dots,0,0,\dots\right) $ and similarly for $\left( \mu\right) _{\infty}$. Note that as $n$ is fixed, the group $G_{n}^{\prime}$ remains fixed; however, its representation $R_{G_{n}^{\prime}}^{\prime}$ on $\mathcal{F}_{n\times k}$ does depend on $k$, and should be written as $\left( R_{G_{n}^{\prime}}^{\prime}\right) _{k}$, and as $k\rightarrow\infty$, $\left( R_{G_{n}^{\prime}}^{\prime}\right) _{\infty}$ has to be considered as an inductive limit of representations, although for $k$ sufficiently large all the representations $\left( R_{G_{n}^{\prime}}^{\left( \lambda^{\prime}\right) }\right) _{k}$ are equivalent. The same observations apply to $\left( R_{H_{n}^{\prime}} ^{\prime}\right) _{k}$ and $\left( R_{H_{n}^{\prime}}^{\left( \mu^{\prime }\right) }\right) _{k}$. To illustrate this let us consider again the case $\mathrm{U}\left( 1\right) \times\mathrm{U}\left( k\right) $ and $\widetilde{\mathrm{SL}_{2}\left( \mathbb{R}\right) }\times\mathrm{SO} \left( k\right) $ acting on $\mathcal{F}_{1\times k}$. Indeed, the infinitesimal action of $R_{G_{\mathbb{C}}^{\prime}}^{\prime}$ is given by Eq.\ (\ref{eqFD.11}) as $L_{k}=\sum_{i=1}^{k}Z_{i}\partial/\partial Z_{i}$ and $L_{k+1}=\sum_{i=1}^{k+1}Z_{i}\partial/\partial Z_{i}$, and for $p\in \mathcal{P}_{1\times k}^{\left( m\right) }\subset\mathcal{P}_{1\times\left( k+1\right) }^{\left( m\right) }$ Eq.\ (\ref{eqFD.16}) implies that \[ L_{k}p=L_{k+1}p=mp. \] By Eq.\ (\ref{eqFD.18}) the infinitesimal actions of $R_{H_{1}^{\prime} }^{\prime}$ on $\mathcal{F}_{1\times k}$ and $\mathcal{F}_{1\times\left( k+1\right) }$ are given, respectively, by \begin{equation} \left\{
\begin{aligned} E_{k} &= \frac{k}{2}+L_{k}, &\quad X_{k}^{+} &= \frac{1}{2}\sum_{i=1}^{k}Z_{i}^{2}, &\quad X_{k}^{-} &= \frac{1}{2}\sum_{i=1}^{k}\frac{\partial^{2}\;}{\partial Z_{i}^{2}}, &\quad&\text{and} \\ E_{k+1} &= \frac{k+1}{2}+L_{k+1}, &\quad X_{k+1}^{+} &= \frac{1}{2}\sum_{i=1}^{k+1}Z_{i}^{2}, &\quad X_{k+1}^{-} &= \frac{1}{2}\sum_{i=1}^{k+1}\frac{\partial^{2} \;}{\partial Z_{i}^{2}}. & & \end{aligned}
\right. \label{eqFID.5} \end{equation} If $h_{k}\in\mathcal{H}_{1\times k}^{\left( r\right) }$ then Eqs.\ (\ref{eqFD.24}), (\ref{eqFD.25}) applied to $\left\{ E_{k}^{{}} ,X_{k}^{+},X_{k}^{-}\right\} $ show that $J_{k}h_{k}$ is an irreducible representation of $\mathrm{sl}_{2}\left( \mathbb{R}\right) $ with signature $\left( r\right) $. Similarly if $h_{k+1}\in\mathcal{H}_{1\times\left( k+1\right) }^{\left( r\right) }$ then $J_{k+1}h_{k+1}$ is also an irreducible representation of $\mathrm{sl}_{2}\left( \mathbb{R}\right) $ with signature $\left( r\right) $.
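For the reader's convenience we record the commutation relations behind these statements (a direct computation, added here): since $\left[ \partial^{2}/\partial Z_{i}^{2},Z_{i}^{2}\right] =4Z_{i}\,\partial/\partial Z_{i}+2$, one finds
\[
\left[ X_{k}^{-},X_{k}^{+}\right] =\frac{1}{4}\sum_{i=1}^{k}\left( 4Z_{i}\frac{\partial\;}{\partial Z_{i}}+2\right) =L_{k}+\frac{k}{2}=E_{k},\qquad\left[ E_{k}^{{}},X_{k}^{\pm}\right] =\pm2X_{k}^{\pm},
\]
so that $\left\{ E_{k}^{{}},X_{k}^{+},X_{k}^{-}\right\} $ spans a copy of $\mathrm{sl}_{2}\left( \mathbb{R}\right) $, and a harmonic $h_{k}\in\mathcal{H}_{1\times k}^{\left( r\right) }$ is a lowest-weight vector with $X_{k}^{-}h_{k}=0$ and $E_{k}h_{k}=\left( r+\frac{k}{2}\right) h_{k}$.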
Let $\mathcal{F}_{n\times\infty}$ denote the Hilbert-space completion of $\bigcup_{k}\mathcal{F}_{n\times k}$; then $\mathcal{F}_{n\times\infty }=\varinjlim\mathcal{F}_{n\times k}$ is the inductive limit of the chain $\left\{ \mathcal{F}_{n\times k}\right\} $.
After this necessary preparatory work we can now state and prove the main theorem of this paper.
\begin{theorem} \label{ThmFID.2}Let $G_{\infty}$ denote the inductive limit of a chain $G_{k}\subset G_{k+1}\subset\cdots$ of compact groups. Let $R_{G_{\infty}}$ and $R_{\left( G_{n}^{\prime}\right) _{\infty}}^{\prime}$ be given dual representations on $\mathcal{F}_{n\times\infty}$. Let $H_{\infty}$ denote the inductive limit of a chain of compact subgroups $H_{k}\subset H_{k+1} \subset\cdots$ such that $H_{k}\subset G_{k}$ for all $k$. Let $R_{H_{\infty} }$ be the representation of $H_{\infty}$ on $\mathcal{F}_{n\times\infty}$ obtained by restricting $R_{G_{\infty}}$ to $H_{\infty}$. If there exists a group $H_{n}^{\prime}\supset G_{n}^{\prime}$ and a representation $R_{\left( H_{n}^{\prime}\right) _{\infty}}^{\prime}$ on $\mathcal{F}_{n\times\infty}$ such that $R_{\left( H_{n}^{\prime}\right) _{\infty}}^{\prime}$ is dual to $R_{H_{\infty}}$ and $R_{\left( G_{n}^{\prime}\right) _{\infty}}^{\prime}$ is the restriction of $R_{\left( H_{n}^{\prime}\right) _{\infty}}^{\prime}$ to the subgroup $G_{n}^{\prime}$ of $H_{n}^{\prime}$ then we have the following multiplicity-free decompositions of $\mathcal{F}_{n\times\infty}$ into isotypic components: \begin{equation} \mathcal{F}_{n\times\infty}=\sum\limits_{\left( \lambda\right) }{\!{\oplus }\,}\mathcal{I}_{n\times\infty}^{\left( \lambda\right) }=\sum \limits_{\left( \mu\right) }{\!{\oplus}\,}\mathcal{I}_{n\times\infty }^{\left( \mu\right) } \label{eqFID.6} \end{equation} where $\left( \lambda\right) $ is a common irreducible signature of the pair $\left( G_{n}^{\prime},G_{\infty}^{{}}\right) $ and $\left( \mu\right) $ is a common irreducible signature of the pair $\left( H_{n}^{\prime },H_{\infty}^{{}}\right) $.
If $\lambda_{G_{\infty}}$ \textup{(}resp.\ $\lambda_{\left( G_{n}^{\prime }\right) _{\infty}}^{\prime}$\textup{)} denotes an irreducible unitary representation of class $\left( \lambda\right) $ and $\mu^{}_{H_{\infty}}$ \textup{(}resp.\ $\mu_{\left( H_{n}^{\prime}\right) _{\infty}}^{\prime} $\textup{)} denotes an irreducible unitary representation of class $\left( \mu\right) $ then the multiplicity $\dim\left[ \operatorname*{Hom}
_{H_{\infty}}\left( \mu^{}_{H_{\infty}}:\lambda_{G_{\infty}}|_{H_{\infty} }\right) \right] $ of the irreducible representation $\mu^{}_{H_{\infty}}$ in the restriction to $H_{\infty}$ of the representation $\lambda_{G_{\infty} }$ is equal to the multiplicity $\dim\left[ \operatorname*{Hom} _{G_{n}^{\prime}}\left( \lambda_{\left( G_{n}^{\prime}\right) _{\infty} }^{\prime}:\mu_{\left( H_{n}^{\prime}\right) _{\infty}}^{\prime}\bigg
|_{G_{n}^{\prime}}\right) \right] $ of the irreducible representation $\lambda_{\left( G_{n}^{\prime}\right) _{\infty}}^{\prime}$ in the restriction to $G_{n}^{\prime}$ of the representation $\mu_{\left( H_{n}^{\prime}\right) _{\infty}}^{\prime}$. \end{theorem}
\begin{proof} As remarked above, the dual $\left( G_{n}^{\prime},G_{\infty}^{{}}\right) $-module $\mathcal{I}_{n\times\infty}^{\left( \lambda\right) }$ is irreducible (by Theorem \ref{ThmFID.1}) with signature $\left( \lambda \right) $, and isotypic components of different signatures are mutually orthogonal since their projections $\mathcal{I}_{n\times k}^{\left( \lambda\right) _{k}}$ are mutually orthogonal. Finally if a vector in $\mathcal{F}_{n\times\infty}$, which we may assume to belong to $\mathcal{F} _{n\times k}$ for some $k$, is orthogonal to $\mathcal{I}_{n\times\infty }^{\left( \lambda\right) }$ for all $\left( \lambda\right) $, it must therefore be orthogonal to $\mathcal{I}_{n\times k}^{\left( \lambda\right) _{k}}$ for all $\left( \lambda\right) _{k}$, and hence must be the zero vector in $\mathcal{F}_{n\times k}$, and thus zero in $\mathcal{F} _{n\times\infty}$. A similar argument applies to the isotypic components $\mathcal{I}_{n\times\infty}^{\left( \mu\right) }$, and thus Eq.\ (\ref{eqFID.6}) holds.
Now fix $\left( \lambda\right) $ and $\left( \mu\right) $. Then the restriction of $R_{G_{\infty}}$ to $\mathcal{I}_{n\times\infty}^{\left( \lambda\right) }$ decomposes into a (non-canonical) orthogonal direct sum of equivalent irreducible unitary representations of signature $\left( \lambda\right) _{\infty}$. A representative of this representation may be obtained by applying Theorem \ref{ThmFID.1} to get the inductive limit $\left( G_{\infty},R_{\left( \lambda\right) _{\infty}}\right) $ of the chain $\left( G_{k},R_{\lambda_{k}}\right) $; for example, when $G_{k}=\mathrm{U}\left( k\right) $, the representation $R_{\lambda_{k}}$ is given by Eq.\ (\ref{eqFD.29}) on $\mathcal{P}_{n\times k}^{\left( \lambda\right) _{k}}$. Considered as a $G_{n}^{\prime}$-module, $\mathcal{I} _{n\times\infty}^{\left( \lambda\right) }$ decomposes into a (non-canonical) orthogonal direct sum of equivalent irreducible unitary representations of signature $\left( \lambda^{\prime}\right) _{n}$. A representative of this representation may be obtained by applying Theorem \ref{ThmFID.1} to get the inductive limit $\left( G_{n}^{\prime},R_{\left( \lambda_{n}^{\prime }\right) _{\infty}}^{\prime}\right) $ (note that although $G_{n}^{\prime}$ is a stationary chain at $n$, the representations $R_{\left( \lambda _{n}^{\prime}\right) _{k}}^{\prime}$ depend on $k$ even though they are all equivalent and belong to the class $\left( \lambda^{\prime}\right) _{n}$); for example, when $G_{n}^{\prime}=\mathrm{U}\left( n\right) $ the representation $R_{\lambda_{n}^{\prime}}^{\prime}$ is given by Eq.\ (\ref{eqFD.32}) on $\mathcal{P}_{n\times k}^{\left( \lambda^{\prime}\right) _{n}}$ which is defined by Eq.\ (\ref{eqFD.31}). By an analogous argument we infer that the same conclusions hold for $\left( \mu\right) $, $\mathcal{I}_{n\times\infty }^{\left( \mu\right) }$, $\left( H_{\infty},R_{\left( \mu\right) _{\infty}}\right) $, $\left( H_{n}^{\prime},R_{\left( \mu_{n}^{\prime }\right) _{\infty}}^{\prime}\right) $.
Now consider the decomposition of the restriction to $H_{k}$ of the representation $R_{\lambda_{k}}$ of $G_{k}$. The multiplicity of $\left(
\mu\right) _{k}$ in $\left( \lambda\right) _{k}|_{H_{k}}$ is the dimension of \linebreak $\operatorname*{Hom}_{H_{k}}\left( R_{\mu^{}_{k}}
:R_{\lambda_{k}}|_{H_{k}}\right) $, where $\operatorname*{Hom}_{H_{k}}\left(
R_{\mu^{}_{k}}:R_{\lambda_{k}}|_{H_{k}}\right) $ is the vector space of linear homomorphisms intertwining $R_{\mu^{}_{k}}$ and $R_{\lambda_{k}
}|_{H_{k}}$. Since $G_{k}$ and $H_{k}$ are, by assumption, compact, this dimension is finite. If $T_{k}\colon\mathcal{H}_{\mu^{}_{k}}\rightarrow \mathcal{H}_{\lambda_{k}}$ is an element of $\operatorname*{Hom}_{H_{k}
}\left( R_{\mu^{}_{k}}:R_{\lambda_{k}}|_{H_{k}}\right) $, where $\mathcal{H}_{\mu^{}_{k}}$ (resp.\ $\mathcal{H}_{\lambda_{k}}$) denotes the representation space of $R_{\mu^{}_{k}}$ (resp.\ $R_{\lambda_{k}}$), then since $\mathcal{H}_{\mu^{}_{k}}\subset\mathcal{I}_{n\times k}^{\left( \mu\right) _{k}}$ and $\mathcal{H}_{\lambda_{k}}\subset\mathcal{I}_{n\times k}^{\left( \lambda\right) _{k}}$ it follows that we have an inductive chain of homomorphisms $\left\{ T_{k}\colon\mathcal{H}_{\mu^{}_{k}}\rightarrow \mathcal{H}_{\lambda_{k}}\right\} $. Let $\mathcal{H}_{\mu^{}_{\infty}}$ (resp.\ $\mathcal{H}_{\lambda_{\infty}}$) denote the inductive limit of $\mathcal{H}_{\mu^{}_{k}}$ (resp.\ $\mathcal{H}_{\lambda_{k}}$); then there exists a unique homomorphism $T_{\infty}\colon\mathcal{H}_{\mu^{}_{\infty} }\rightarrow\mathcal{H}_{\lambda_{\infty}}$ (see, for example, \cite[Theorem 2.5, p.~430]{Dug78}, or \cite[p.~44]{Rot79}). Again by Theorem \ref{ThmFID.1}, $R_{\lambda_{\infty}}=\varinjlim R_{\lambda_{k}}$ (resp.\ $R_{\mu^{}_{\infty} }=\varinjlim R_{\mu^{}_{k}}$) is irreducible with signature $\left( \lambda\right) _{\infty}$ (resp.\ $\left( \mu\right) _{\infty}$), and it is easy to show that $T_{\infty}$ is an intertwining homomorphism. Conversely, all homomorphisms of inductive limits arise that way. Consequently, the chain
$\operatorname*{Hom}_{H_{k}}\left( R_{\mu^{}_{k}}:R_{\lambda_{k}}|_{H_{k} }\right) $ induces the inductive limit $\operatorname*{Hom}_{H_{\infty}
}\left( R_{\mu^{}_{\infty}}:R_{\lambda_{\infty}}|_{H_{\infty}}\right) $. Obviously for sufficiently large $k$, $\dim\left[ \operatorname*{Hom}_{H_{k}
}\left( R_{\mu^{}_{k}}:R_{\lambda_{k}}|_{H_{k}}\right) \right] =\dim\left[
\operatorname*{Hom}_{H_{\infty}}\left( R_{\mu^{}_{\infty}}:R_{\lambda _{\infty}}|_{H_{\infty}}\right) \right] $. By duality, we obtain in the same way the inductive limit $\left[ \operatorname*{Hom}_{G_{n}^{\prime}}\left(
R_{\left( \lambda_{n}^{\prime}\right) _{\infty}}^{\prime}:R_{\left( \mu _{n}^{\prime}\right) _{\infty}}^{\prime}\bigg|_{G_{n}^{\prime}}\right) \right] $; actually this chain stabilizes for $k$ sufficiently large. It follows from Theorem \ref{ThmFD.2} (see also the proof of Theorem 4.1 in \cite{Ton95}) that $\dim\left[ \operatorname*{Hom}_{H_{\infty}}\left( \mu
^{}_{H_{\infty}}:\lambda_{G_{\infty}}|_{H_{\infty}}\right) \right] =\dim\left[ \operatorname*{Hom}_{G_{n}^{\prime}}\left( \lambda_{\left( G_{n}^{\prime}\right) _{\infty}}^{\prime}:\mu_{\left( H_{n}^{\prime}\right)
_{\infty}}^{\prime}\bigg|_{G_{n}^{\prime}}\right) \right] $. \end{proof}
As an example we again consider the case $\mathcal{F}_{1\times\infty}$ with $G_{\infty}=\mathrm{U}\left( \infty\right) $, $G_{1}^{\prime}=\mathrm{U} \left( 1\right) $, $H_{\infty}=\mathrm{SO}\left( \infty\right) $, and $H_{1}^{\prime}=\widetilde{\mathrm{SL}_{2}\left( \mathbb{R}\right) }$. Then from Eq.\ (\ref{eqFD.17}), $\left( \lambda\right) _{k}=(\underset {k}{\underbrace{m,0,\dots,0}})$, $\lambda_{1}^{\prime}=\left( m\right) $, and $\mathcal{I}_{1\times k}^{\left( \lambda\right) _{k}}=\mathcal{P} _{1\times k}^{\left( m\right) }$. It follows that $\left( \lambda\right) _{\infty}=\left( m,0,0,\vec{0}\right) $ and $\mathcal{I}_{1\times\infty }^{\left( \lambda\right) _{\infty}}=\mathcal{P}_{1\times\infty}^{\left( m\right) _{\infty}}$, the vector space of all homogeneous polynomials of degree $m$ in infinitely many variables $Z_{1}$, $Z_{2}$, etc. The infinitesimal action of $R_{\left( \mathrm{U}\left( 1\right) \right) _{k} }^{\prime}$ is given by Eq.\ (\ref{eqFD.10}), $L_{k}=\sum_{i=1}^{k} Z_{i}\partial/\partial Z_{i}$, so the infinitesimal action $L_{\left( m\right) _{\infty}}$ is given by the formal series $\sum_{i=1}^{\infty} Z_{i}\partial/\partial Z_{i}$. For $H_{\infty}=\mathrm{SO}\left( \infty\right) $ and $H_{1}^{\prime}=\widetilde{\mathrm{SL}_{2}\left( \mathbb{R}\right) }$ the actions are more delicate to describe. From Eq.\ (\ref{eqFD.22}), $\left( \mu\right) _{k}=(\underset{\left[ k/2\right] }{\underbrace{r,0,\dots,0}})$, where $r$ is an integer $\geq0$, and therefore $\left( \mu\right) _{\infty}=\left( r,0,0,\vec{0}\right) $. Let $\mathcal{H}_{1\times k}^{\left( r\right) _{k}}$ denote the space of all harmonic homogeneous polynomials of degree $r$ in $k$ variables $Z_{1} ,\dots,Z_{k}$; then from Eq.\ (\ref{eqFD.22}) $\mathcal{I}_{1\times k}^{\left( r\right) _{k}}=\sum\limits_{j=0}^{\infty}{\!{\oplus}\,}p_{0,k}^{j} \mathcal{H}_{1\times k}^{\left( r\right) _{k}}$, where $p_{0,k}\left( Z\right) =\sum_{i=1}^{k}Z_{i}^{2}$. We define the actions $R_{\mathrm{SO} \left( \infty\right) }$ and $R_{\left( \widetilde{\mathrm{SL}_{2}\left( \mathbb{R}\right) }\right) _{\infty}}^{\prime}$ as follows:
Consider the algebras $\left( \mathrm{sl}_{2}\left( \mathbb{R}\right) \right) _{k}$ with the bases $\left\{ E_{k}^{{}},X_{k}^{+},X_{k} ^{-}\right\} $ given by Eq.\ (\ref{eqFID.5}); define the \emph{projective} or \emph{inverse limit} of the family $\left\{ \left( \mathrm{sl}_{2}\left( \mathbb{R}\right) \right) _{k},\mathcal{I}_{1\times k}^{\left( r\right) _{k}}\right\} $ as follows: For each pair of indices $l,k$ with $l\leq k$ define a continuous homomorphism $\phi_{l}^{k}\colon\left( \mathrm{sl}_{2}\left( \mathbb{R}\right) \right) _{k}\rightarrow\left( \mathrm{sl}_{2}\left( \mathbb{R}\right) \right) _{l}$ by sending $E_{k}$ to $E_{l}$, $X_{k}^{+}$ to $X_{l}^{+}$, and $X_{k}^{-}$ to $X_{l}^{-}$, extended by linearity. Clearly $\phi _{l}^{k}$ satisfies the following:\renewcommand{\theenumi}{\alph{enumi}}
\begin{enumerate} \item $\phi_{k}^{k}$ is the identity map for all $k$,
\item if $i\leq l\leq k$ then $\phi_{i}^{k}=\phi_{i}^{l}\circ\phi_{l}^{k}$. \end{enumerate}
\noindent The inverse limit of the system $\left\{ \mathrm{sl}_{2}\left( \mathbb{R}\right) _{k}\right\} $ is denoted by \begin{multline} \mathrm{sl}_{2}\left( \mathbb{R}\right) _{\infty}=\varprojlim\mathrm{sl} _{2}\left( \mathbb{R}\right) _{k}=\left\langle E_{\infty}^{{}},X_{\infty }^{+},X_{\infty}^{-}\right\rangle ,\\ \text{where }E_{\infty}=\frac{1}{2}1_{\infty}+L_{\infty},\;X_{\infty} ^{+}=\frac{1}{2}\sum_{i=1}^{\infty}Z_{i}^{2},\;X_{\infty}^{-}=\frac{1}{2} \sum_{i=1}^{\infty}\frac{\partial^{2}\;}{\partial Z_{i}^{2}}. \label{eqFID.7} \end{multline} Then $\left\{ E_{\infty}^{{}},X_{\infty}^{+},X_{\infty}^{-}\right\} $ acts on $\mathcal{F}_{1\times\infty}$ as follows: If $f\in\mathcal{F} _{1\times\infty}$ then we may assume that $f\in\mathcal{F}_{1\times k}$ for some $k$ and \begin{equation} E_{\infty}f=E_{k}f,\qquad X_{\infty}^{+}f=X_{k}^{+}f\text{\qquad and\qquad }X_{\infty}^{-}f=X_{k}^{-}f. \label{eqFID.8} \end{equation} If $\mathcal{H}_{1\times\infty}^{\left( r\right) _{\infty}}$ denotes the subspace (of $\mathcal{P}_{1\times\infty}^{\left( r\right) _{\infty}}$) of all harmonic homogeneous polynomials in infinitely many variables $Z_{1}$, $Z_{2}$, etc.\ (i.e., $h\in\mathcal{H}_{1\times\infty}^{\left( r\right) _{\infty}}$ if and only if $h\in\mathcal{P}_{1\times\infty}^{\left( r\right) _{\infty}}$ and $X_{\infty}^{-}h=0$) then \begin{equation} \mathcal{I}_{1\times\infty}^{\left( r\right) _{\infty}}=\sum\limits_{j=0} ^{\infty}{\!{\oplus}\,}\left( 2X_{\infty}^{+}\right) ^{j}\mathcal{H} _{1\times\infty}^{\left( r\right) _{\infty}}, \label{eqFID.9} \end{equation} where in Eq.\ (\ref{eqFID.9}) $2X_{\infty}^{+}=\left( p_{0}\right) _{\infty }=\sum_{i=1}^{\infty}Z_{i}^{2}$. Note that $\mathcal{H}_{1\times\infty }^{\left( r\right) _{\infty}}$ corresponds to the inductive limit of the chain $\left\{ \mathcal{H}_{1\times k}^{\left( r\right) _{k}}\right\} $. Let $R_{\mathrm{SO}\left( \infty\right) }^{\left( r\right) _{\infty}}$ denote the inductive limit representation of the chain $R_{\mathrm{SO}\left( k\right) }^{\left( r\right) _{k}}$; then $R_{\mathrm{SO}\left( \infty\right) }^{\left( r\right) _{\infty}}$ together with Eq.\ (\ref{eqFID.8}) describes completely the action of the dual pair $\left( \widetilde{\mathrm{SL}_{2}\left( \mathbb{R}\right) },\mathrm{SO}\left( \infty\right) \right) $ on the isotypic component $\mathcal{I} _{1\times\infty}^{\left( r\right) _{\infty}}$ and thus we have the isotypic decompositions for the dual pairs $\left( \mathrm{U}\left( 1\right) ,\mathrm{U}\left( \infty\right) \right) $ and $\left( \widetilde {\mathrm{SL}_{2}\left( \mathbb{R}\right) },\mathrm{SO}\left( \infty\right) \right) $, \[ \mathcal{F}_{1\times\infty}=\sum\limits_{\makebox[0pt]{\hss$\scriptstyle m=0$\hss}}^{\infty}{\!{\oplus}\,}\mathcal{I}_{1\times\infty}^{\left( m\right) _{\infty}}= \sum\limits_{\makebox[0pt]{\hss$\scriptstyle r=0$\hss}}^{\infty}{\!{\oplus}\,}\mathcal{I}_{1\times\infty}^{\left( r\right) _{\infty}}, \] and thus Theorem \ref{ThmFID.2} is verified for this example.
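We remark (an observation added for clarity) that the formal series in Eq.\ (\ref{eqFID.7}), applied according to Eq.\ (\ref{eqFID.8}), cause no convergence difficulties: every polynomial in $\mathcal{P}_{1\times\infty}$ involves only finitely many of the variables $Z_{1},Z_{2},\dots$. For example, for a monomial of total degree $m$,
\[
L_{\infty}\left( Z_{1}^{a_{1}}\cdots Z_{j}^{a_{j}}\right) =\left( a_{1}+\dots+a_{j}\right) Z_{1}^{a_{1}}\cdots Z_{j}^{a_{j}}=m\,Z_{1}^{a_{1}}\cdots Z_{j}^{a_{j}},
\]
and a harmonic element $h\in\mathcal{H}_{1\times\infty}^{\left( r\right) _{\infty}}$ satisfies $X_{\infty}^{-}h=0$ and $E_{\infty}h=\left( r+\frac{1}{2}\right) h$, so that $h$ is a lowest-weight vector for $\mathrm{sl}_{2}\left( \mathbb{R}\right) _{\infty}$.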
Since the next two examples are very important for their applications to Physics, we shall state them as corollaries to Theorem \ref{ThmFID.2}.
\begin{corollary} \label{CorFID.3}Let $G_{\infty}$ denote the direct product of $r$ copies of $H_{\infty}$ where $H_{\infty}=\mathrm{U}\left( \infty\right) $, $\mathrm{SO}\left( \infty\right) $, or $\mathrm{Sp}\left( \infty\right) $. If $G_{\infty}$ acts as the exterior tensor product representation $V^{\left( \lambda_{1}\right) _{\infty}}\otimes\dots\otimes V^{\left( \lambda _{r}\right) _{\infty}}$, where each $V^{\left( \lambda_{i}\right) _{\infty }}$, $1\leq i\leq r$, is an irreducible unitary $H_{\infty}$-module, then $H_{\infty}$ acts as the inner \textup{(}or Kronecker\textup{)} tensor product representation on $V^{\left( \lambda_{1}\right) _{\infty}} \mathop{\hat{\otimes}}\dotsb\mathop{\hat{\otimes}}V^{\left( \lambda _{r}\right) _{\infty}}$. If $\lambda_{G_{\infty}}$ denotes an irreducible unitary representation of class $\left( \lambda_{1}\right) _{G_{\infty} }\otimes\dots\otimes\left( \lambda_{r}\right) _{G_{\infty}}$ and $\mu ^{}_{H_{\infty}}$ denotes an irreducible unitary representation of class $\left( \mu\right) _{H_{\infty}}$ then the multiplicity $\dim\left[
\operatorname*{Hom}_{H_{\infty}}\left( \mu^{}_{H_{\infty}}:\lambda _{G_{\infty}}|_{H_{\infty}}\right) \right] $ of the representation $\left( \mu\right) _{H_{\infty}}$ in the inner tensor product $\left( \lambda _{1}\right) _{\infty}\mathop{\hat{\otimes}}\dotsb\mathop{\hat{\otimes} }\left( \lambda_{r}\right) _{\infty}$ is equal to the multiplicity of $\left( \mu\right) _{H_{k}}$ in the inner tensor product $\left( \lambda_{1}\right) _{k}\mathop{\hat{\otimes}}\dotsb\mathop{\hat{\otimes} }\left( \lambda_{r}\right) _{k}$ for sufficiently large $k$. \end{corollary}
\begin{proof} If $\left( \lambda_{i}\right) _{\infty}=\left( \lambda_{i}^{1},\lambda _{i}^{2},\dots,\lambda_{i}^{j},\dots\right) $ where $\lambda_{i}^{j}$ are integers such that $\lambda_{i}^{1}\geq\lambda_{i}^{2}\geq\cdots$ and $\lambda_{i}^{j}=0$ for all but a finite number of $j$, let $n$ denote the total number of all nonzero entries $\lambda_{i}^{j}$, $1\leq i\leq r$; then $V^{\left( \lambda_{1}\right) _{\infty}}\otimes\dots\otimes V^{\left( \lambda_{r}\right) _{\infty}}$ can be realized as a subspace of the Bargmann--Segal--Fock space $\mathcal{F}_{n\times\infty}$. From Theorem \ref{ThmFID.2} it follows that $V^{\left( \lambda_{1}\right) _{\infty} }\otimes\dots\otimes V^{\left( \lambda_{r}\right) _{\infty}}$ belongs to the isotypic component $\mathcal{I}_{n\times\infty}^{\left( \lambda\right) _{G_{\infty}}}$ of $\mathcal{F}_{n\times\infty}$, thus $V^{\left( \lambda _{1}\right) _{\infty}}\otimes\dots\otimes V^{\left( \lambda_{r}\right) _{\infty}}$ is the inductive limit of the chain $\left\{ V^{\left( \lambda_{1}\right) _{k}}\otimes\dots\otimes V^{\left( \lambda_{r}\right) _{k}}\right\} $. If $\mu^{}_{H_{\infty}}$ is an irreducible unitary representation of class $\left( \mu\right) _{H_{\infty}}$ then by Theorem \ref{ThmFID.2} \[ \dim\left[ \operatorname*{Hom}\nolimits_{H_{\infty}}\left( \mu^{}
_{H_{\infty}}:\lambda_{G_{\infty}}|_{H_{\infty}}\right) \right] =\dim\left[ \operatorname*{Hom}\nolimits_{G_{n}^{\prime}}\left( \lambda_{\left( G_{n}^{\prime}\right) _{\infty}}^{\prime}:\mu_{\left( H_{n}^{\prime}\right)
_{\infty}}^{\prime}\bigg|_{G_{n}^{\prime}}\right) \right] , \] where $\lambda_{\left( G_{n}^{\prime}\right) _{\infty}}^{\prime}$ (resp.\ $\mu_{\left( H_{n}^{\prime}\right) _{\infty}}^{\prime}$) is the representation of $G_{n}^{\prime}$ (resp.\ $H_{n}^{\prime}$) dual to $\lambda_{G_{\infty}}$ (resp.\ $\mu^{}_{H_{\infty}}$). For sufficiently large $k$ every $\mu^{}_{H_{\infty}}$ is the inductive limit of a chain $\mu ^{}_{H_{k}}$ and for such a $k$ Theorem \ref{ThmFD.2} implies that \begin{align*} \dim\left[ \operatorname*{Hom}\nolimits_{H_{k}}\left( \mu^{}_{H_{k}}
:\lambda_{G_{k}}|_{H_{k}}\right) \right] & =\dim\left[ \operatorname*{Hom}\nolimits_{G_{n}^{\prime}}\left( \lambda_{\left( G_{n}^{\prime}\right) _{k}}^{\prime}:\mu_{\left( H_{n}^{\prime}\right)
_{k}}^{\prime}\bigg|_{G_{n}^{\prime}}\right) \right] \\ & =\dim\left[ \operatorname*{Hom}\nolimits_{G_{n}^{\prime}}\left( \lambda_{\left( G_{n}^{\prime}\right) _{\infty}}^{\prime}:\mu_{\left(
H_{n}^{\prime}\right) _{\infty}}^{\prime}\bigg|_{G_{n}^{\prime}}\right) \right] , \end{align*} which completes the proof of Corollary \ref{CorFID.3}. \end{proof}
\begin{remark} The reason that this corollary only holds for sufficiently large $k$ can be seen in the following example. Let $G_{k}=\underset{4\text{ times} }{\underbrace{\mathrm{U}\left( k\right) \times\dots\times\mathrm{U}\left( k\right) }}$ and $H_{k}=\mathrm{U}\left( k\right) $ and consider the tensor product $(\underset{k}{\underbrace{1,0,\dots,0}})\otimes(\underset {k}{\underbrace{2,0,\dots,0}})\otimes(\underset{k}{\underbrace{2,0,\dots,0} })\otimes(\underset{k}{\underbrace{3,0,\dots,0}})$; then for $k=2$ we have the spectral decomposition \[ \left( 1,0\right) \otimes\left( 2,0\right) \otimes\left( 2,0\right) \otimes\left( 3,0\right) =\left( 8,0\right) +3\left( 7,1\right) +5\left( 6,2\right) +5\left( 5,3\right) +2\left( 4,4\right) , \] for $k=3$ we have \begin{multline*} \left( 1,0,0\right) \otimes\left( 2,0,0\right) \otimes\left( 2,0,0\right) \otimes\left( 3,0,0\right) \\ =\left( 8,0,0\right) +3\left( 7,1,0\right) +5\left( 6,2,0\right) +5\left( 5,3,0\right) +2\left( 4,4,0\right) \\ +3\left( 6,1,1\right) +6\left( 5,2,1\right) +5\left( 4,3,1\right) +3\left( 4,2,2\right) +2\left( 3,3,2\right) , \end{multline*} for $k\geq4$ we have \begin{multline*} (\underset{k}{\underbrace{1,0,\dots,0}})\otimes(\underset{k}{\underbrace {2,0,\dots,0}})\otimes(\underset{k}{\underbrace{2,0,\dots,0}})\otimes (\underset{k}{\underbrace{3,0,\dots,0}})\\ =\left( 8,0,\dots,0\right) +3\left( 7,1,0,\dots,0\right) +5\left( 6,2,0,\dots,0\right) +5\left( 5,3,0,\dots,0\right) \\ +2\left( 4,4,0,\dots,0\right) +3\left( 6,1,1,0,\dots,0\right) +6\left( 5,2,1,0,\dots,0\right) +5\left( 4,3,1,0,\dots,0\right) \\ +3\left( 4,2,2,0,\dots,0\right) +2\left( 3,3,2,0,\dots,0\right) +\left( 5,1,1,1,0,\dots,0\right) \\ +2\left( 4,2,1,1,0,\dots,0\right) +\left( 3,3,1,1,0,\dots,0\right) +\left( 3,2,2,1,0,\dots,0\right) . \end{multline*} Thus we can see that the spectral decomposition of $\left( 1,\vec{0}\right) _{\infty}\otimes\left( 2,\vec{0}\right) _{\infty}\otimes\left( 2,\vec {0}\right) _{\infty}\otimes\left( 3,\vec{0}\right) _{\infty}$ is the same as that of order $k$ for $k\geq4$, with infinitely many zeroes at the end of each signature.
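These decompositions may be checked by a dimension count (a verification we add here). For $\mathrm{U}\left( 2\right) $ the irreducible module with signature $\left( m_{1},m_{2}\right) $ has dimension $m_{1}-m_{2}+1$, so for $k=2$ both sides have dimension
\[
2\cdot3\cdot3\cdot4=9+3\cdot7+5\cdot5+5\cdot3+2\cdot1=72,
\]
and an analogous count with the Weyl dimension formula confirms that both sides have dimension $1080$ for $k=3$.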
Note also that this corollary applied to the tensor product $\underset{r\text{ times}}{\underbrace{\left( 1,\vec{0}\right) _{\infty}\otimes\dots \otimes\left( 1,\vec{0}\right) _{\infty}}}$ together with the Schur--Weyl Duality Theorem for $\mathrm{U}\left( r\right) $ implies the generalized Schur--Weyl Duality Theorem proved by Kirillov for $\mathrm{U}\left( \infty\right) $ in \cite{Kir73}. \end{remark}
\begin{corollary} \label{CorFID.4}Let $V^{\left( \lambda_{1}\right) _{\infty}},\dots ,V^{\left( \lambda_{r}\right) _{\infty}}$ and $V^{\left( \mu\right) _{\infty}}$ be irreducible unitary representations of $H_{\infty}$. Let $V^{\left( \mu^{\checkmark}\right) _{\infty}}$ be the representation \textup{(}of $H_{\infty}$\textup{)} contragredient to $V^{\left( \mu\right) _{\infty}}$. Let $I^{\infty}$ denote the equivalence class of the identity representation of $H_{\infty}$. Then the multiplicity of $\left( \mu\right) _{\infty}$ in the tensor product $\left( \lambda_{1}\right) _{\infty} \mathop{\hat{\otimes}}\dotsb\mathop{\hat{\otimes}}\left( \lambda_{r}\right) _{\infty}$ is equal to the multiplicity of $I^{\infty}$ in the tensor product $\left( \lambda_{1}\right) _{\infty}\mathop{\hat{\otimes}}\dotsb\mathop {\hat{\otimes}}\left( \lambda_{r}\right) _{\infty}\mathop{\hat{\otimes} }\left( \mu^{\checkmark}\right) _{\infty}$. \end{corollary}
\begin{proof} To prove this corollary we apply Corollary \ref{CorFID.3} to $G_{\infty }=\underset{r}{\underbrace{H_{\infty}\times\cdots\times H_{\infty}}}$ and $G_{k}=\underset{r}{\underbrace{H_{k}\times\cdots\times H_{k}}}$, then apply Theorem \ref{ThmFID.2} to $G_{\infty}=\underset{r}{\underbrace{H_{\infty }\times\cdots\times H_{\infty}}}\times H_{\infty}^{\checkmark}$ and $G_{k}=\underset{r}{\underbrace{H_{k}\times\cdots\times H_{k}}}\times H_{k}^{\checkmark}$, and finally apply Theorem 2.1 of \cite{KlTo96} to obtain the desired result at order $k$. The main difficulty resides in the definition of the identity representation on $V^{\left( \lambda_{1}\right) _{\infty}}\mathop{\hat{\otimes}}\dotsb\mathop{\hat{\otimes}}V^{\left( \lambda_{r}\right) _{\infty}}\mathop{\hat{\otimes}}V^{\left( \mu ^{\checkmark}\right) _{\infty}}$, which we construct below.
For each $k$ let $I^{k}$ denote the identity representation of $H_{k}$ on \linebreak $V^{\left( \lambda_{1}\right) _{k}}\mathop{\hat{\otimes}} \dotsb\mathop {\hat{\otimes}}V^{\left( \lambda_{r}\right) _{k}}\mathop{\hat{\otimes} }V^{\left( \mu^{\checkmark}\right) _{k}}$. This means that if $I^{k}$ occurs with multiplicity $d$ in $V^{\left( \lambda_{1}\right) _{k}}\mathop {\hat{\otimes}}\dotsb\mathop{\hat{\otimes}}V^{\left( \lambda_{r}\right) _{k}}\mathop{\hat{\otimes}}V^{\left( \mu^{\checkmark}\right) _{k}}$ then there exist $d$ nonzero vectors $f_{i,k}$, $i=1,\dots,d$, such that $R_{H_{k} }\left( u\right) f_{i,k}=f_{i,k}$ for all $u\in H_{k}$. By construction each $f_{i,k}$ is a polynomial function in $\mathcal{F}_{n\times k}$ for some $n$. Thus $f_{i,k}$ is an $H_{k}$-invariant polynomial in $\mathcal{F}_{n\times k} $. If $J_{i,k}$ denotes the one-dimensional subspace spanned by $f_{i,k}$, then for sufficiently large $k$ and for each fixed $i=1,\dots,d$ we have a chain of irreducible unitary representations $\left\{ H_{k},I^{k} ,J_{i,k}\right\} _{k}$. We can define the isomorphism $\psi_{k+1}^{k}\colon J_{i,k}\rightarrow J_{i,k+1}$ by $\psi_{k+1}^{k}\left( cf_{i,k}\right) =cf_{i,k+1}$, $c\in\mathbb{C}$; then obviously \[ \psi_{k+1}^{k}\left( R_{H_{k}}\left( u\right) f_{i,k}\right) =R_{H_{k+1} }\left( u\right) f_{i,k+1}=R_{H_{k+1}}\left( u\right) \psi_{k+1} ^{k}\left( f_{i,k}\right) , \] for all $u\in H_{k}$. Also for all $k$, $l$, $m$ with $k\leq l\leq m$ we have $\psi_{m}^{k}=\psi_{m}^{l}\circ\psi_{l}^{k}$. Thus we can define the inductive limit representation $\left\{ H_{\infty},I^{\infty},J_{i,\infty}\right\} $, where the action of $H_{\infty}$ on $J_{i,\infty}$ is defined as follows:
Let $u\in H_{\infty}$; then $u\in H_{k}$ for some $k$. If $f\in J_{i,l}$ for some $l$ then \[ R_{H_{\infty}}\left( u\right) f=R_{H_{k}}\left( u\right) \psi_{k} ^{l}f\text{\qquad for }l<k, \] and \[ R_{H_{\infty}}\left( u\right) f=R_{H_{l}}\left( u\right) f\text{\qquad for }k\leq l. \] Then it follows from Theorem \ref{ThmFID.1} that $\left\{ H_{\infty },I^{\infty},J_{i,\infty}\right\} $ is irreducible with signature $\left( \vec{0}\right) _{\infty}$. The only problem with this approach is that the isomorphic embedding $\psi_{k+1}^{k}$ is not the isomorphic embedding $i_{k+1}^{k}\colon\mathcal{F}_{n\times k}\rightarrow\mathcal{F}_{n\times \left( k+1\right) }$. To circumvent this difficulty we define the \emph{inverse} or \emph{projective limit} of the family $\left\{ H_{k} ,I^{k},J_{k}\right\} $, where $J_{k}$ denotes the subspace of all $H_{k} $-invariants in $V^{\left( \lambda_{1}\right) _{k}}\mathop{\hat{\otimes} }\dotsb\mathop{\hat{\otimes}}V^{\left( \lambda_{r}\right) _{k}}\mathop {\hat{\otimes}}V^{\left( \mu^{\checkmark}\right) _{k}}$, as follows: For each pair of indices $l$, $k$ with $l\leq k$ define a continuous homomorphism $\phi_{l}^{k}\colon J_{k}\rightarrow J_{l}$ such that\renewcommand{\theenumi}{\roman{enumi}}
\begin{enumerate} \item \label{CorFID.4proof(1)}$\phi_{k}^{k}$ is the identity map on $J_{k}$,
\item \label{CorFID.4proof(2)}if $i\leq l\leq k$ then $\phi_{i}^{k}=\phi _{i}^{l}\circ\phi_{l}^{k}$. \end{enumerate}
\noindent Here we can take $\phi_{l}^{k}$ as the \emph{truncation} homomorphism, i.e., $\phi_{l}^{k}$ is defined on the generators $f_{i,k}$ by \[ \phi_{l}^{k}\left( f_{i,k}\right) =f_{i,l}. \] The \emph{projective limit} of the system $\left\{ H_{k},J_{k},\phi_{l} ^{k}\right\} $ is then formally defined by \[ J_{\infty_{\leftarrow}}:=\varprojlim J_{k}=\left\{ \left( f_{k}\right) \in\prod_{k}J_{k}:f_{l}=\phi_{l}^{k}\left( f_{k}\right) ,\;\forall\,l\leq k\right\} . \] Let $\pi_{k}\colon J_{\infty_{\leftarrow}}\rightarrow J_{k}$ denote the projection of $J_{\infty_{\leftarrow}}$ onto $J_{k}$. Let $I^{\infty _{\leftarrow}}$ denote the representation of $H_{\infty}$ on $J_{\infty _{\leftarrow}}$; then $\pi_{k}\left( I^{\infty_{\leftarrow}}f\right) =\pi_{k}\left( f\right) $. Recall that if $\mathcal{P}_{n\times k}$ denotes the subspace of all polynomial functions on $\mathbb{C}^{n\times k}$ then $\mathcal{P}_{n\times k}$ is dense in $\mathcal{F}_{n\times k}$. Let $\mathcal{P}_{n\times\infty}=\bigcup_{k=1}^{\infty}\mathcal{P}_{n\times k}$ denote the inductive limit of $\mathcal{P}_{n\times k}$; then clearly $\mathcal{P}_{n\times\infty}$ is dense in $\mathcal{F}_{n\times\infty}$. Let $\mathcal{P}_{n\times\infty}^{\ast}$ (resp.\ $\mathcal{F}_{n\times\infty }^{\ast}$) denote the \emph{dual} or \emph{adjoint} space of $\mathcal{P} _{n\times\infty}$ (resp.\ $\mathcal{F}_{n\times\infty}$). Then since $\mathcal{P}_{n\times\infty}$ is dense in $\mathcal{F}_{n\times\infty}$, $\mathcal{F}_{n\times\infty}^{\ast}$ is dense in $\mathcal{P}_{n\times\infty }^{\ast}$. By the Riesz representation theorem for Hilbert spaces, every element $f^{\ast}\in\mathcal{F}_{n\times\infty}^{\ast}$ is of the form $\ip{\,\cdot\,}{f}$ for some $f\in\mathcal{F}_{n\times\infty}$, and the map $f^{\ast}\rightarrow f$ is an anti-linear (or conjugate-linear) isomorphism. Thus we can identify $\mathcal{F}_{n\times\infty}^{\ast}$ with $\mathcal{F}_{n\times\infty }^{{}}$ and obtain the \emph{rigged Hilbert space} as the triple $\mathcal{P}_{n\times\infty}^{{}}\subset\mathcal{F}_{n\times\infty}^{{} }\subset\mathcal{P}_{n\times\infty}^{\ast}$ (see \cite{GeVi64} for the definition of rigged Hilbert spaces). However, generally an element of $J_{\infty_{\leftarrow}}$ does not belong to $\mathcal{P}_{n\times\infty }^{\ast}$, but can still be considered as a linear functional (not necessarily continuous) on $\mathcal{P}_{n\times\infty}$, and furthermore, in this context the identity representation $I^{\infty_{\leftarrow}}$ will respect the isomorphic embedding $i_{k+1}^{k}\colon\mathcal{F}_{n\times k}\rightarrow \mathcal{F}_{n\times\left( k+1\right) }$. \end{proof}
\section{\label{Con}Conclusion}
We have studied thoroughly several reciprocity theorems for some dual pairs of groups $\left( G_{n}^{\prime},G_{\infty}^{{}}\right) $ and $\left( H_{n}^{\prime},H_{\infty}^{{}}\right) $, where $G_{\infty}$ is the inductive limit of a chain $\left\{ G_{k}\right\} $ of compact groups, $H_{\infty}$ is the inductive limit of a chain $\left\{ H_{k}\right\} $ such that for each $k$, $H_{k}$ is a compact subgroup of $G_{k}$, and $G_{n}^{\prime}\subset H_{n}^{\prime}$ are finite-dimensional Lie groups. These theorems show, in particular, that the multiplicity of an irreducible unitary representation of $H_{\infty}$ with signature $\left( \mu\right) _{H_{\infty}}$ in the restriction to $H_{\infty}$ of an irreducible unitary representation of $G_{\infty}$ with signature $\left( \lambda\right) _{G_{\infty}}$ is always finite. This is extremely important in the problem of spectral decompositions of tensor products of irreducible unitary representations of inductive limits of compact classical groups. This type of problem arises naturally in Physics (cf.\ \cite{KaRa87}), and in \cite{HoTo98} tensor product decompositions of tame representations of $\mathrm{U}\left( \infty\right) $ are investigated. In \cite{Ol'90} Ol'shanskii generalized Howe's theory of dual pairs to some infinite-dimensional dual pairs of groups. This is the right context in which to generalize the reciprocity theorem \ref{ThmFID.2} to the infinite-dimensional dual pairs $\left( G_{\infty}^{\prime},G_{\infty}^{{}}\right) $ and $\left( H_{\infty}^{\prime},H_{\infty}^{{}}\right) $, which will be the subject of a forthcoming publication.
\end{document} | arXiv |
\begin{document}
\title{An Approximate Method for the Optimization of Long-Horizon Tank Blending and Scheduling Operations}
\begin{abstract}
We address a challenging tank blending and scheduling problem arising in the operations of a chemical plant. We model the problem as a nonconvex MIQCP, then approximate this model as a MILP using a discretization-based approach. We combine a rolling-horizon approach with the discretization of individual chemical property specifications to deal with long scheduling horizons, time-varying quality specifications, and multiple suppliers with discrete arrival times. This approach has been evaluated using industry-representative data sets from the specialty chemical industry.
We demonstrate that the proposed approach supports fast planning cycles. \end{abstract}
\thispagestyle{firstpage}
\section{Introduction} \label{sec:introduction}
In specialty chemical manufacturing, companies purchase and blend supply streams from a variety of suppliers. Because the supply streams are derivatives of refinery operations, the chemical composition (specifications) of the streams may vary significantly by supplier as well as by season. Supply streams may have both desirable properties and less desirable properties that prevent their exclusive use during the production process. Furthermore, some streams might have limited available quantities or intermittent availability. Thus, supply streams of differing composition are often combined into intermediate storage tanks. The resulting compounds are then blended to meet demand for production feed (see Figure~\ref{fig_probdesc}).
For our motivating application, supply streams typically arrive in barges from suppliers and may remain at the inbound docks for several days before unloading into one or multiple storage tanks. The time windows for the arrival and departure of barges are committed several months in advance. As the supply barges are unloaded into the tanks, the volume and chemical composition of the compounds in the storage tanks are changed. The compounds from one or multiple storage tanks are then blended to create the desired quantity and specification of production feed. The production feed is planned in product campaigns (runs), typically spanning 3--14 days per product. Approximately 3 to 12 months of product campaigns are planned in advance due to the wide variety of spec requirements over time for very different non-concurrent production runs, the non-periodic nature of the supply acquisition process, and the limited capacity of the storage tanks.
Given the volume and characteristics of the compounds on the inbound barges, the time windows for the availability of barges, and the desired production feed (both quantity and specifications), the company must determine an assignment of supply barges to tanks, an unloading plan from barges to tanks, and a blending plan from tanks to production feed. Although similar problems are often inherently multi-objective, the most important objective by far is to satisfy production requirements and to unload supply. Hence, the objective is to determine an operational schedule for the given barge arrivals and production plan. Rapid solutions are required for long scheduling horizons to support effective planning decisions. Thus, we propose an efficient optimization-based approach to provide solutions for this assignment, mixing, and blending problem within ten minutes for a planning horizon of up to a year.
\begin{figure}
\caption{Overview of the blending problem: supply barges are unloaded into intermediate storage tanks, and the tank contents are blended to meet the quantity and specification requirements of the production feed.}
\label{fig_probdesc}
\end{figure}
Within the context of supply chain management, this problem encompasses short-term decisions as highlighted in the Supply Chain Planning Matrix (Figure~\ref{fig_planningmatrix}, adapted from \cite{Rohde2000}).
\begin{figure}
\caption{Context of this work in terms of supply chain planning (adapted from \cite{Rohde2000}).}
\label{fig_planningmatrix}
\end{figure}
In this work, we employ a discrete-time mixed-integer linear program (MILP) approximate formulation of the core nonconvex mixed-integer quadratically-constrained program (MIQCP), formulated via direct tracking of specs, using single-day scheduling periods with scheduling horizons of 90 to 368 days. The `binary' version of this discretization combines aspects of multi-parametric disaggregation (MDT) \cite{Teles2011} with normalized MDT (NMDT, \cite{Castro2015c}), while avoiding the explicit tracking of spec qualities. The demand spec and spec ratio requirements are tightened to help ensure that the approximate representation of the specs does not lead to violations of the demand composition requirements. To satisfy the long-horizon scheduling goals within a reasonable computation time, we use a rolling-horizon solution approach.
After optimization of the model approximation, the scheduling solution is simulated to obtain the correct tank inventory and demand feed specs.
In the following sections, we provide a formal problem description, highlight the related research, formulate a mixed integer non-linear representation of the problem, propose approaches for solving the model, and compare the approaches with computational studies based on industry-representative data sets.
\section{Problem Description}
Given a set $S$ of supply nodes (barges), assume each supply node $s$ contains a given volume $\underbar{v}_s$ of raw material, with a level of $\underbar{\SPEC}_{s,q}$ for each chemical property $q \in Q$. The levels of these chemical properties are often specified as a percentage of the volume for a material component of interest. The percentages may not necessarily sum to 100\% of the material, because some components may overlap and others may not be of interest. Each supply node $s$ is available during a set $T_s$ of consecutive time periods, from $t^{\min}_s$ to $t^{\max}_s$. With the availability of all supply nodes known, the set of supply nodes available in each time period $t$ is denoted by $S_t$. The number of barges that are unloaded each day is limited to $\underbar{ul}^{\max}$. If a barge is unloaded on a given day, the minimum percentage of inventory that can be unloaded is $\underbar{ulp}^{\min}_s$. If a supply barge has less than $\underbar{ulp}^{\min}_s$ of its inventory remaining, then the inventory is not unloaded and the barge departs.
A set $K$ of intermediate storage tanks is available to hold raw material or a blend of raw materials. Each tank $k$ has an initial inventory volume $\underbar{v}_{k,0}$, minimum inventory level $\underbar{v}_k^{\min}$, and maximum inventory level $\underbar{v}_k^{\max}$. The initial level of chemical property $q$ is denoted by $\underbar{\SPEC}_{k,q,0}$. Each tank $k$ can be supplied by the supply nodes in $S_k$. If the mixture in a tank $k$ is used to fulfill production feed, the minimum percentage of feed that can originate from the tank is $\underbar{tkp}^{\min}_k$. The demand for production feed is known for each time period $t \in T$. This demand is characterized by a volume $\underbar d_t$, bounds $\underbar{\SPEC}_{t,q}^{\min}$ and $\underbar{\SPEC}_{t,q}^{\max}$ on the quality level of the chemical properties, and bounds $\underbar{r}_{t,q_1,q_2}^{\min}$ and $\underbar{r}_{t,q_1,q_2}^{\max}$ on certain ratios of chemical properties. Production feed is planned in campaigns or runs of consecutive days. During a run, $\underbar{\SPEC}_{t,q}^{\min}$ and $\underbar{\SPEC}_{t,q}^{\max}$ remain constant, and we require the daily flow plan for this feed to remain constant.
In order to meet the production feed demand $\underbar d_t$ in each time period, the following transfer volumes need to be determined: \begin{itemize} \item the volume $y^{\text{in}}_{s,k,t}$ to unload from supply node $s$ to tank $k$ during time $t$, and \item the volume $y^{\text{out}}_{k,t}$ to transfer from tank $k$ to meet production feed in time $t$. \end{itemize}
In addition, the following assignment variables on these flows are needed to address various operational constraints: \begin{itemize}
\item If supply barge $s$ is unloaded during time $t$, then $\gamma_{s,t} = 1$. \item If tank $k$ provides mixture to meet production feed in time $t$, then $\sigma_{k,t} = 1$. \end{itemize}
To model our objective of meeting demand and unloading requirements, we maximize a weighted sum of the amount of demand met and product unloaded.
Additional operational constraints include the following. The number of times a barge can be unloaded is limited to two, and all unload operations for the barge must be performed within a range of $\underbar{ult}^{\max}$ time periods (e.g., seven days). Flow from tanks to meet production feeds must be continuous for each campaign (or demand run).
Conservation of flow must be maintained, such that the sum of the initial tank inventory level and the volume of supply that flows into the tank, less the volume that flows out for production feed, equals the final tank inventory level. The resulting levels of chemical properties from this flow must meet the specified composition requirements. Due to the relatively small sizes of the storage tanks in this work, we allow storage tanks to both receive supply and provide material for production feed simultaneously. If this occurs in a given time period, for the sake of simplicity, we model that complete mixing of inflows occurs before any outflows within that time period. The problem and constraints are formally specified in Section~\ref{sec:Mathematical-Model}.
\section{Literature Review \label{sec_litReview}}
The blending and scheduling problems, also known as multi-period pooling problems, studied in the literature vary widely in network size, scope, objective, and detail. Such problems, like the standard pooling problem, are typically characterized as bilinear nonconvex MIQCPs due to the balance of specs (or properties) in the mixing and splitting of product. This is referred to as `linear blending', and is often an approximation of what occurs in practice (as, e.g., the mixing process may not finish completely before material is split into multiple streams). However, similar problems can also involve a linear \cite{Li2009} representation for spec balance, or a more highly nonlinear \cite{Zhang2002,Singh2000,Misener2009,Cuiwen2012} representation for the balance of some specs (`nonlinear blending', as in the extended pooling problem), depending on the nature of the operational network or the specifications to track.
The underlying resource networks can be small and separated into discrete layers, and can represent front-end acquisition, as in e.g. \cite{Mouret2010,Oddsdottir2013,Chen2014,Cerda2015,Castro2016} and the current work; back-end operations and sales, as in e.g. \cite{Cuiwen2012}; or the full process from acquisition to sales, as in \cite{Mouret2010,Xu2017}. The networks can also span larger, more complex supply chains, as in \cite{Pinto2000,Castillo2017}. The scope of the problem can include scheduling only, as in \cite{Moro2004,Mendez2006,Kolodziej2013b,Castro2016} and the current work, but can also extend to include higher-level acquisition \cite{Oddsdottir2013}, production planning decisions \cite{LUO2009,Lu2015,Torkaman2017}, or more detailed equipment control decisions \cite{Dias2016}. Some works, such as \cite{GutirrezLimn2014,GutirrezLimn2016}, integrate all three levels of planning, scheduling, and control in a single model.
Common examples of objective functions used in similar problems include maximization of profits \cite{Reddy2004a,Reddy2004b,Li2007,Kolodziej2012,Cerda2015,Castro2015b,Lotero2016,Castillo2017,Xu2017} or gross margins \cite{Mouret2010,Oddsdottir2013,Castro2014a,Gao2015,Castro2016}, or minimization of operating costs \cite{Mouret2010,Li2009,Li2010,Li2011,Castro2014a,Castro2016}. The objective function used in \cite{deAssis2017} in particular is similar to that in this work, but includes additional penalties for performing mixing operations, missing the demand composition requirements, and for not respecting the maintenance requirements. \cite{Cuiwen2012} maximizes the yields of final products.
Many problems have only a single type of supply source, which can be characterized either by discrete arrivals \cite{Mouret2010,Oddsdottir2013,Chen2014,Castro2014a,Castro2016} or continuous supply streams e.g. \cite{Li2009,Li2011,Cuiwen2012,Castro2015b,Gao2015,Lotero2016}. Some works require multiple types of supply sources. For example, in \cite{Reddy2004a,Reddy2004b,Li2007,Cerda2015,Xu2017}, there are both small barges with a single type of inventory and larger barges, referred to as very large crude carriers (VLCCs), that carry multiple packages with varied compositions. Some works explicitly include vessel scheduling decisions to be made at unloading points, such as docks \cite{Reddy2004a,Li2007}. Most works assume known supply and demand quantities or bounds, but some, such as \cite{Neiro2006,LUO2009,Cuiwen2012,Gupta2014,Chu2015,Dias2016,Ning2017}, explicitly take into account stochastic supply and demand.
Multi-period pooling problems classically come in the form of P, Q, or PQ formulations. The Q formulation is source-based, the P formulation is spec-based, and the PQ is a hybrid of the two. That is, the Q formulation tracks compositions based on origin fractions or volumes, while the P formulation tracks compositions explicitly based on the concentrations of the physical qualities to track. See~\cite{Gupta2014} for a discussion of these models.
Some examples of spec-based formulations can be found in \cite{Cuiwen2012,Chen2014,Castro2015b}, while examples of source-based formulations can be found in \cite{Reddy2004a,Mouret2010,Gao2015,Cerda2015}. Compared to the traditional pooling problem, a spec-based formulation mirrors the P-formulation, while a source-based formulation mirrors the Q- or PQ-formulations. This choice of representation affects both the tightness of the MILP relaxation and the required number of bilinear terms: the number of bilinear terms for a source-based formulation scales with the number of mixing or splitting nodes in the network and the number of distinct source compositions, while the number for a spec-based formulation scales instead with the number of specs that require tracking. While source-based formulations can be tighter \cite{Lotero2016}, they are only viable in situations where the number of distinct sources is relatively small. In this work, as many as 35 distinct source compositions can be represented within the scheduling horizon, rendering source-based formulations computationally expensive.
\subsection{Comparison to Similar Research} We highlight and compare three research streams that are similar to this current work. Note that many differences between this work and others stem from the specialty chemical nature of our problem, combined with the properties of the production site for the application of interest.
Castro~\cite{Castro2016} studies a similar problem motivated by crude oil operations in refineries. This problem is also addressed in~\cite{Lee1996,Mouret2009a,Mouret2010,Castro2014a}, among others. The problem involves barge-based supply where the barges carry a single type of crude, arrive within given time intervals, and unload according to the arrival sequence. The barges unload to storage tanks. The material in the storage tanks is mixed into charging tanks that supply crude oil distillation units (CDUs).
The similarities of the core problem include a similar network structure, a barge-based supply stream, and comparable storage tank capacities ($\sim$1--2 barges' worth). Both works solve front-end operational scheduling problems and determine feed plans for downstream operations. In \cite{Castro2016}, such feeds are routed through crude oil distillation units (CDUs). In this current work, the chemicals feed a single production feed.
However, the differences between the problems affect the modeling and solution approaches. In \cite{Castro2016}, the barges arrive at regular intervals and are unloaded in the order of arrival, with only one barge unloaded at a time. In the current work, multiple barges (up to a limit) may be unloaded simultaneously and the order of unloading needs to be determined.
The shorter scheduling horizon in \cite{Castro2016} ($\le$15 days, $\le$3 barges) allows for solution over the entire scheduling horizon at once, while the long scheduling horizon (up to 365 days, $\sim$30 barges) in the current work necessitates a heuristic such as a rolling horizon scheme.
At the same time, the larger network in \cite{Castro2016} reduces the length of the scheduling horizon over which the problem can be solved within a reasonable time. A significant observation of \cite{Castro2016} is that, for the problem considered, the choice of objective function significantly impacted the relative efficacy of the discrete-time and continuous-time formulations. Another key difference is that the production schedule for each CDU is not fixed in \cite{Castro2016}, but is chosen based on gross margins per crude or on operating costs. On the other hand, a CDU can be sourced from only a single charging tank at a time.
In Castro's work, the source-based choice is convenient because there are few composition types to track. In the current work, the source-based formulation is prohibitive due to the large amount of different possible composition types, and a spec-based formulation is used instead. As mentioned in \cite{Castro2016}, when viable, source-based formulations yield tighter MILP relaxations \cite{Lotero2016}, and can yield better computational performance \cite{Castro2015b}.
Castro \cite{Castro2015b} addresses a similar multi-period pooling problem with composition constraints on the flows to the demand stream (tested on both per-flow and mixture bases). Similar to the solution approach used in \cite{Castro2016}, Castro employs an iterated MILP-NLP approach, where the MILP stage is represented using base-10 NMDT. However, the objective functions and formulations differ. In \cite{Castro2015b}, a discrete-time formulation is used, and both spec-based and source-based formulations are presented and compared with commercial global solvers. The objective function is profit maximization as opposed to gross margin maximization or cost minimization. The problem instance tested has two source streams, up to four blending/storage tanks for which tank-to-tank transfer operations are allowed, and two demand streams, and is solved with a time horizon of up to four periods.
Oddsdottir \cite{Oddsdottir2013} addresses a similar network with the same structure (barges to storage tanks to demand (CDUs)). The main differences in this respect involve barge availability (less restrictive time windows), the number of storage tanks (6), and the number of demand streams (2). Other common features are the requirement of a much longer scheduling horizon (3 months) and the flexibility in barge arrivals. However, the goal in \cite{Oddsdottir2013} is related to procurement planning instead of operational scheduling. As such, the choice of which barges to order and when is a problem decision, and a coarser time grid is used (three days per period), resulting in a total of 30 time periods to model. Additionally, in both works, multiple barges might be unloaded within a single time period (up to a limit), possibly contributing to the choice of a discrete-time formulation. Further, a given barge may unload to multiple storage tanks in a given time period. As with \cite{Castro2016}, however, the limited number of crude types in the system in \cite{Oddsdottir2013} allows for a reasonably compact source-based formulation, which would be unwieldy in the current work.
\subsection{Related Solution Methodologies}
Rolling planning aside, the solution approach selected in \cite{Oddsdottir2013} is comparable to the approach in \cite{Castro2016}, but with some key differences. As in \cite{Castro2016}, a source-based formulation is used and composition constraints are formulated with respect to crude volumes instead of crude volume fractions. However, instead of discretizing, \cite{Oddsdottir2013} applies a MILP approximation to the problem in which the constraints enforcing correct proportions for the outflow composition are reformulated in a manner that allows composition discrepancy in the outflows. A nonconvex MIQCP stage is then applied to resolve the composition discrepancy, yielding near-optimal feasible solutions in a much shorter time frame than the full nonconvex MIQCP: Running in GAMS with BARON, over 20 datasets, the average solution gap for this PRONODIS method (compared to the full nonconvex MIQCP) was 1.1\%, with an average computation time of 20s (compared to over 13000s for the full nonconvex MIQCP).
A wide variety of formulations and approaches have been developed to tackle these difficult scheduling problems, with varying time-scheduling formulations, spec representations, and algorithms to deal with the problem nonlinearity, which usually incorporate some form of MILP approximation to the basic nonconvex MIQCP.
A number of continuous-time and discrete-time scheduling formulations have been presented. The choice of time-formulation can have a significant impact on problem performance, as showcased in works such as \cite{Reddy2004b} and \cite{Mouret2009a,Mouret2010}. The preferred scheduling formulation can depend on the type of objective function used, as observed in \cite{Castro2016}, as well as on the types of operational constraints to be considered.
Compared to continuous-time formulations, discrete-time formulations frequently yield simpler representations, with a trade-off of more integer variables. Even so, problems with certain types of operational constraints, such as variable flow rates and constraints on the number of simultaneous operations (such as barge unloads), can be significantly easier to model and solve in discrete-time (as seen in e.g. \cite{Castro2016}). A discrete-time formulation is also used in our work.
A number of solution methodologies have emerged to handle the nonlinear nature of these scheduling problems, particularly over long scheduling horizons. Such methods include single-step \cite{Wenkai2002,Mouret2010,Oddsdottir2013,Castro2014a,Castro2015b,Castro2016} and iterated MILP-NLP \cite{Kelly2003,Karuppiah2008,Cerda2015,deAssis2017,Xu2017} approaches, which have emerged as some of the most efficient methods to ensure small optimality gaps for relatively short scheduling horizons. Other methods include MILP-(smaller MINLP) iterations \cite{Oddsdottir2013,Lotero2016} and iterative MILP-refinement methods \cite{Reddy2004a,Mendez2006,Li2007,Kolodziej2013b}.
For longer scheduling horizons, rolling planning methods are often used to limit the number of integer variables. Such models typically yield good feasible solutions far faster than the full model, with far better scaling of computation times with the scheduling horizon. In general, a rolling horizon model operates on a discrete time grid by representing in detail only the `present' segment $\{t^h, \dots, t^h+H^{\mathrm{roll}}-1\}$ of the scheduling horizon $\{1,\dots,H\}$ at each iteration, then taking a step forward in time of size $t^{\mathrm{step}} \le H^{\mathrm{roll}}$ for the next iteration, freezing the decisions made in the interval $\{t^h,\dots,t^h+t^{\mathrm{step}}-1\}$ and proceeding until the full horizon is reached.
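To make these mechanics concrete, the following minimal sketch (in Python; the routine \texttt{solve\_window} is a hypothetical stand-in for the windowed MILP solve) illustrates the generic loop described above.
\begin{verbatim}
# Minimal sketch of a generic rolling-horizon loop. solve_window is a
# hypothetical routine that solves the MILP over the given window with
# the decisions frozen so far held fixed.
def rolling_horizon(H, H_roll, t_step, solve_window):
    assert 1 <= t_step <= H_roll
    frozen = {}                       # period -> decisions fixed so far
    t_h = 1                           # start of the 'present' segment
    while t_h <= H:
        window = range(t_h, min(t_h + H_roll, H + 1))
        decisions = solve_window(window, frozen)
        for t in range(t_h, min(t_h + t_step, H + 1)):
            frozen[t] = decisions[t]  # freeze the first t_step periods
        t_h += t_step
    return frozen
\end{verbatim}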
In some rolling planning models, such as those in \cite{Silvente2015,Cuiwen2012,Oddsdottir2013,Stolletz2014,Lu2015} and heuristic H2 in \cite{Torkaman2017}, all aspects of the `past' decisions in $\{1,\dots,t^h-1\}$ are frozen; in others, such as \cite{Cerda2015} and heuristics H1, H3, and H4 in \cite{Torkaman2017}, only certain variables are frozen (most commonly the binary/integer decisions). The treatment of the `future' portion of the scheduling horizon $\{t^h+H^{\mathrm{roll}},\dots,H\}$ varies considerably between models: Some models exclude the `future' from the current model entirely, as in \cite{Silvente2015,Cuiwen2012,Stolletz2014,Marquant2015,Cerda2015}, whereas some relax complicating constraints \cite{LUO2009,Torkaman2017} and/or treat integer variables as continuous \cite{Lu2015,Torkaman2017}. Some works, such as \cite{Cuiwen2012,Oddsdottir2013,Champion2017,Silvente2018}, deal with uncertain events by updating uncertain data or events at each (or certain) rolling planning step(s). For bi-level integrated planning and scheduling models, it is common to solve the scheduling subproblem only for the `present' segment, while including (a subset of) the future segments in the planning subproblem, as is done in \cite{Li2010,Zondervan2014,Silvente2015,Silvente2018}. Some works use an alternate framework based on supply arrivals to determine the time steps for each rolling iteration, as in \cite{Reddy2004a,Reddy2004b,Li2007,Cerda2015}; \cite{Li2007} in particular allows for some backtracking to correct spec quality discrepancies in the current plan.
Finally, many ideas for MILP approximations to deal with bilinear terms have emerged in recent years, especially for the pooling problem. Some methods are either binary-free, or add a number of binary variables that scales linearly with the desired level of precision. These include various linearization techniques \cite{Joly2003,Moro2004}, piecewise-linear approximation \cite{Pham2009,Gao2015}, outer-approximation methods \cite{Xu2017}, and McCormick or piecewise McCormick envelopes \cite{McCormick1976,Bergamini2005,Mouret2009b,Castro2015a,Castro2015b,deAssis2017}. \cite{Gounaris2009} in particular gives a numerical comparison for fifteen different piecewise McCormick-related schemes, split into three classes of schema. For other methods, often utilizing digit-based representations for a single variable, the number of binaries scales logarithmically with the desired precision. Such methods include multi-parametric disaggregation \cite{Teles2011,Kolodziej2013a,Castro2014a,Castro2014b,Castro2015b,Castro2015c,Castro2016} and other techniques \cite{Vielma2009,Misener2011,Gupte2013,Kolodziej2013b,Gupte2016}. Approximation via generalized disjunctive programming (GDP) is another, far more expressive, approach that has been used to tackle pooling-type problems. Castro, Grossmann, et al. have derived a MILP-based formulation for variants of multi-parametric disaggregation as a disjunctive program \cite{Grossmann2011,Kolodziej2013a,Castro2014a,Castro2015b,Castro2015c}, and have also applied GDP separately \cite{Ruiz2010,Castro2016,Lotero2016}. In this work, we choose to directly discretize the inventory specs of each tank using a formulation based on normalized multi-parametric disaggregation (NMDT), as set forth in \cite{Castro2015c}, yielding a relatively simple and highly efficient model in conjunction with a rolling planning approach.
We were unable to find any other work in the literature that combines a version of the multi-parametric disaggregation technique with a version of rolling planning; this combination is a key novelty of this work. The primary contributions of this work are the speed of the method, which yields near-optimal solutions without compromising the spec qualities of the demand feeds, and the incorporation of constraints on the ratios of specs in the demand feeds.
\section{Mathematical Model} \label{sec:Mathematical-Model}
In this section, we describe a nonconvex MIQCP model of the problem, introducing the variables needed to express the constraints. Data in the problem is written with an underscore, continuous variables are written with lower-case letters, binary variables are written with Greek letters, and sets are written with capital letters. \subsection{Terminology} \begin{tabular}{ll}
Run & Production run, consisting of a number of consecutive days for which the production feed\\
& \hspace{.2cm} must be constant in terms of volume.\\
Feed & Flow from a storage tank to the downstream production processes. \\
Mixing & The mixing of material from a number of different sources to a single destination.\\
Splitting & The splitting of material from a single source to a number of different destinations.\\
Spec & Concentration of a certain type of chemical property. \end{tabular} \subsection{Sets and Parameters}
\underline{Sets}\\ \begin{tabular}{ll}
$S$ &: Set of supply nodes (barges with raw material), indexed by $s$.\\
$S_k$ &: Set of supply barges allowed to unload to tank $k \in K$.\\
$S_t$ &: Set of supply barges available during time period $t \in T$.\\
$K$ &: Set of tanks, indexed by $k$.\\
$K_s$ &: Set of tanks to which supply barge $s \in S$ is allowed to unload. \\
$Q$ &: Set of chemical properties, indexed by $q$.\\
$Q_{\text{rat}}$ &: Pairs of specifications for which ratio bounds are to be enforced.\\
$R$ &: Set of non-overlapping demand runs (consecutive days for which the production feed is constant). \\
$T$ &: Set of time periods (days), such that $T$ $\vcentcolon=\{1, 2, \dots, H\}$.\\
$T_r$ &: Time periods corresponding to run $r$ where runs do not overlap, such that $T_r = \{t^{i}_r, t^{i}_r+1, \dots, t^f_r\} \subseteq T$. \\
$T_s$ &: Time periods during which supply $s \in S$ can be unloaded, such that $T_s = \{t^{\min}_s,t^{\min}_s+1,\dots,t^{\max}_s\}$. \\
$T_d$ &: Time periods where there is some demand, such that $T_d= \bigsqcup_{r \in R} T_r $.\\ \end{tabular}
\underline{Parameters}\\ \begin{longtable}{ll}
$\underbar{c}^{\mathrm{inb}}_s$ &: Penalty cost per volume of not unloading supply from supply node $s$.\\
$\underbar{c}^{\mathrm{dmd}}_t$ &: Penalty cost per volume of not meeting production feed volume for time period $t$.\\
$\underbar d_t$ &: Demand to meet in time step $t \in T$. Must be constant throughout each run.\\
& \hspace{.2cm} Thus, $\underbar d_t = \underbar d_{t'}$ for all $t,t' \in T_r$ for $r \in R$. Also, $\underbar d_t = 0$ for $t \notin T_d$.\\
$\underbar{\SPEC}_{s,q}$ &: Value of chemical characteristic $q \in Q$ in supply source $s \in S$.\\
$\underbar{\SPEC}_{k,q,0}$ &: Initial value of chemical characteristic $q \in Q$ in tank $k \in K$.\\
$\underbar{\SPEC}^{\min}_{q,t}, \underbar{\SPEC}^{\max}_{q,t}$ &: Bounds on the value of chemical characteristic $q \in Q$ for the demand to be filled \\
& \hspace{.2cm} in time step $t \in T$.\\
$H$ &: Scheduling time horizon (implicit in the definition of $T$).\\
$\underbar{r}^{\min}_{q_1,q_2,t}, \underbar{r}^{\max}_{q_1,q_2,t}$ &: Bounds on the ratio of chemical characteristics $q_1$ and $q_2$, $(q_1,q_2) \in Q_{\text{rat}}$, for the demand \\
& \hspace{.2cm} to be filled in time step $t \in T$.\\
$\underbar{tkp}^{\min}_k$ &: Minimum percentage of feed each day that can originate from \\
& \hspace{.2cm} tank $k$ (if used).\\
$\underbar{bul}^{\max}$ &: Maximum number of times each supply barge can be unloaded.\\
$\underbar{ul}^{\max}$ &: Maximum number of supply barges that can be unloaded each day.\\
$\underbar{ulp}^{\min}_s$ &: Minimum percentage of inventory from supply barge $s$\\
& \hspace{.2cm} that can be unloaded each day that unloading occurs for $s$.\\
$\underbar{ult}^{\max}$ &: Maximum permissible gap between the first and last unloads \\
& \hspace{.2cm} for each supply barge.\\
$\underbar{v}_s$ &: Volume of supply available for unloading from supply node $s \in S$. \\
$\underbar{v}_{k,0}$ &: Initial inventory volume of tank $k \in K$.\\
$[\underbar{v}^{\min}_k,\underbar{v}^{\max}_k]$ &: Bounds on permissible inventory for each tank $k \in K.$ \end{longtable}
\underline{Variables} (All variables are non-negative)\\ \begin{longtable}{lll}
Assignment: & $\gamma_{s,t}$ &: $ = \begin{cases}
1: & \text{supply barge } s \text{ is unloaded in time step } t\\
0: & \text{otherwise}
\end{cases}$ : $s \in S, t \in T$\\
~ & $\sigma_{k,t}$ &: $ = \begin{cases}
1: & \text{tank } k \text{ provides material for production feed in time step }t\\
0: & \text{otherwise}
\end{cases}$ : $k \in K, t \in T$ \\
Transfer: & $y^{\text{in}}_{s,k,t}$ &: Volume of raw material to transfer from supply $s \in S$ to\\
& & \hspace{0.2cm} tank $k \in K$ in time step $t \in T$\\
& $y^{\text{out}}_{k,t}$ &: Volume of material to feed to demand from node $k \in K$ \\
& & \hspace{0.2cm} in time step $t \in T$\\
Resource: & $v^{\text{end}}_{k,t}$ &: Volume of inventory in tank $k \in K$ at the end of time step $t \in T$\\
& $v^{\text{mid}}_{k,t}$ &: Volume of inventory in tank $k \in K$ after material is supplied from supply barges,\\
& & \hspace{0.2cm} but before feed operations, in time step $t \in T$\\
Demand: & $v^{\text{prod}}_t$ &: Total volume fed to production in time step $t \in T$\\
Tank specs: & $f_{k,q,t}$ &: Value of chemical characteristic $q \in Q$ in tank $k \in K$ at the end of time step $t \in T$\\
Final specs: & $\SPEC^{\text{prod}}_{q,t}$ &: Value of chemical characteristic $q \in Q$ at production at the end of time step $t \in T$\\
Unload time: & $t^{\text{first}}_{s}$ &: Earliest unload date for supply barge $s$\\
~ & $t^{\text{last}}_{s}$ &: Latest unload date for supply barge $s$\\
Slack: & $v^{\text{unused}}_s$ &: Volume of material that is not unloaded from supply barge $s \in S$\\
~ & $d^{\text{unmet}}_t$ &: Volume of production demand missed in time step $t \in T$\\ \end{longtable}
\underline{Objective:} Our objective is to minimize the amount of material that is not unloaded from barges ($v^{\text{unused}}_s$) and the demand that is unmet ($d^{\text{unmet}}_t$). To accomplish this, we maximize the sum of the value of the raw material unloaded from the barges and the value of the production feed demand that is met. The amount of material unloaded for a barge $s$ is exactly $\underbar{v}_s - v^{\text{unused}}_s$, and the amount of demand met at time $t$ is $\underbar d_t - d^{\text{unmet}}_t$. Hence, we have
\begin{equation}
\begin{array}{ll}
\max & \ds\sum_{s \in S} \underbar{c}^{\mathrm{inb}}_s \cdot (\underbar{v}_s - v^{\text{unused}}_s) + \sum_{t \in T} \underbar{c}^{\mathrm{dmd}}_t \cdot (\underbar d_t - d^{\text{unmet}}_t)
\end{array}
\end{equation}
\subsection{Constraints} The following constraints limit the solution space of the assignment, blending, and mixing problem. In the following discussion, note that the variables $y^{\text{in}},y^{\text{out}}$ correspond to flows, while the variables $v$ correspond to stationary volumes at certain times, including volumes on barges, volumes in tanks, and volumes required for production feed. \\
\textit{\underline{Initial Conditions}}: The volume of material in the tanks and the chemical characteristics of the material are initialized. Specifically, \eqref{eq_invInit-tnk} enforces that the initial volumes in each storage tank are correctly represented, and \eqref{eq_specInv} enforces that the initial chemical characteristics in each storage tank are specified. \begin{subequations} \begin{align}
v^{\text{end}}_{k,0} &= \underbar{v}_{k,0} & \,& k \in K \label{eq_invInit-tnk}\\
f_{k,q,0} &= \underbar{\SPEC}_{k,q,0} & \,& k \in K, q \in Q \label{eq_specInv} \end{align} \end{subequations}
\textit{\underline{Flow}}: For these flow constraints, we assume that inbound supply material is unloaded and blended in the tanks before any material is drawn from the tanks to meet production feed on any given day. The model assumes that in the first half of a time period, barges unload to tanks \eqref{eq_invinb}, adding to the volume previously in the tank and creating an intermediate volume $v^{\text{mid}}_{k,t}$. In the second half of the time period, material is drawn from the tanks \eqref{eq_outflow} to support production \eqref{eq_invProduction}, while respecting the tank inventory bounds \eqref{eq_invlb} and \eqref{eq_invub}.
\begin{subequations} \begin{align}
\ds\sum_{s \in S_k} y^{\text{in}}_{s,k,t} + v^{\text{end}}_{k,t-1} &= v^{\text{mid}}_{k,t}
& \,& k \in K, t \in T \label{eq_invinb}\\
v^{\text{mid}}_{k,t}
- y^{\text{out}}_{k,t} &= v^{\text{end}}_{k,t} & \,& k \in K, t \in T \label{eq_outflow}\\
\ds\sum_{k \in K} y^{\text{out}}_{k,t} &= v^{\text{prod}}_{t} & \,& t \in T \label{eq_invProduction}\\
v^{\text{end}}_{k,t} &\ge \underbar{v}^{\min}_k & \,& k \in K, t \in T \label{eq_invlb}\\
v^{\text{mid}}_{k,t} &\le \underbar{v}^{\max}_k & \,& k \in K, t \in T \label{eq_invub} \end{align} \end{subequations}
\textit{\underline{Chemical Characteristics}}: \framebox{Bi-linear} The following constraints capture the blending and mixing of chemical characteristics. Specifically, \eqref{eq_infspec} ensures the characteristics of the material in a tank at the end of the previous time period are combined with the material from supply barges to create a mix. Expression \eqref{eq_feedspecinv} determines the characteristics of the material provided from tanks to meet production feed.
\begin{subequations} \begin{align}
f_{k,q,t} \cdot v^{\text{mid}}_{k,t}
&= \displaystyle \sum_{s \in S_k} \underbar{\SPEC}_{s,q} \cdot y^{\text{in}}_{s,k,t} + f_{k,q,t-1} \cdot v^{\text{end}}_{k,t-1} & \,& k \in K, q \in Q, t \in T\label{eq_infspec}\\
\SPEC^{\text{prod}}_{q,t} \cdot v^{\text{prod}}_{t} &= \ds\sum_{k \in K} f_{k,q,t} \cdot y^{\text{out}}_{k,t} & \,& q \in Q, t \in T \label{eq_feedspecinv} \end{align} \end{subequations}
Thus, while supply unloads and production feeds can occur in the same time period (day), they are not considered to occur simultaneously.\\
\textit{\underline{Demand}}: The following constraint set captures the production feed volume that is not met each time period, $d^{\text{unmet}}_t$.
\begin{align}
\displaystyle v^{\text{prod}}_{t} &= \underbar d_t - d^{\text{unmet}}_t & t \in T \label{eq_demand} \end{align}
\textit{\underline{Feed Characteristics}}: Both the chemical characteristics \eqref{eq_feedspec} and ratios of characteristics \eqref{eq_feedspecrat} need to be within the required bounds. Note that in practice for \eqref{eq_feedspecrat} the denominator would be multiplied through, as numerical issues can arise when the denominator is small.
\begin{subequations}
\begin{align}
\underbar{\SPEC}^{\min}_{q,t} &\le \SPEC^{\text{prod}}_{q,t} \le \underbar{\SPEC}^{\max}_{q,t} & \,& t \in T,~q \in Q \label{eq_feedspec}\\
\underbar{r}^{\min}_{q_1,q_2,t} &\leq \frac{\SPEC^{\text{prod}}_{q_1,t}}{\SPEC^{\text{prod}}_{q_2,t}} \leq \underbar{r}^{\max}_{q_1,q_2,t} & \, & t \in T,~(q_1,q_2) \in Q_{\text{rat}} \label{eq_feedspecrat} \end{align}
\end{subequations}
\textit{\underline{Total Inflow}}: Ideally, the volume of supply on each barge is fully unloaded. Expression \eqref{eq_tot_inflow} captures the volume of supply that is not unloaded ($v^{\text{unused}}_s$). In addition, \eqref{eq_correcttime_inflow} ensures no material is unloaded from a supply barge when the barge is not available.
\begin{subequations}
\begin{align}
\displaystyle \sum_{k \in K_s} \sum_{t \in T_s} y^{\text{in}}_{s,k,t} &= \underbar{v}_s - v^{\text{unused}}_s & &\quad \, && s \in S \label{eq_tot_inflow}\\
y^{\text{in}}_{s,k,t} &= 0 & &\quad \, && s \in S, k \in K_s, t \in T \backslash T_s \label{eq_correcttime_inflow} \end{align}
\end{subequations}
\textit{\underline{Continuous Feed During Runs}}: The volume of supply provided by each tank to meet production feed needs to remain constant during a production run. \begin{align}
\displaystyle y^{\text{out}}_{k,t} &= y^{\text{out}}_{k,t-1} & r \in R, k \in K,~t \in T_r \backslash \{t^{i}_r\} \label{eq_contfeed} \end{align}
\textit{\underline{Tank Feed Share Constraints}}: If production feed is supplied by tank $k$ during time period $t$ (such that $\sigma_{k,t} = 1$), then the volume of production feed ($y^{\text{out}}_{k,t}$) must be at least $\underbar{tkp}^{\min}_k$ of the demand $\underbar d_t$ at time $t$, as enforced by \eqref{eq_feedshare_lb}. If $\sigma_{k,t} = 0$, then \eqref{eq_feedshare_ub} ensures no feed is provided by tank $k$ for time $t$.
\begin{subequations} \begin{align}
&\displaystyle y^{\text{out}}_{k,t} \ge (\underbar{tkp}^{\min}_k \cdot \underbar d_t) \cdot \sigma_{k,t} & \,& k\in K, t \in T \label{eq_feedshare_lb}\\
&\displaystyle y^{\text{out}}_{k,t} \le \underbar d_t \cdot \sigma_{k,t} & \,& k \in K, t \in T \label{eq_feedshare_ub} \end{align} \end{subequations} \textit{\underline{Supply Limitations}}: Although it is preferred that barges are completely unloaded in one day, due to tank storage capacity, this may need to be done over several days. Hence, we constrain the number of unloads per barge to be bounded by $\underbar{bul}^{\max}$, which is $2$ in all of our instances \eqref{eq_barge_num_unloads}. The number of barges unloaded within a single time period is also limited \eqref{eq_day_num_unloads}. Expression \eqref{eq_tighten_correctperiod_unloads} serves as a tightening constraint to ensure that barges are not unloaded when they are not available. If a supply barge is not unloaded during a time period (such that $\gamma_{s,t}$ is zero), then no volume is unloaded from the barge \eqref{eq_unl_ub}. \begin{subequations} \begin{align}
\displaystyle \sum_{t \in T_s} \gamma_{s,t} &\le \underbar{bul}^{\max} & \,& s \in S \label{eq_barge_num_unloads}\\
\gamma_{s,t} &= 0 & \,& s \in S, t \in T\backslash T_s \label{eq_tighten_correctperiod_unloads}\\
\displaystyle \sum_{s \in S} \gamma_{s,t} &\le \underbar{ul}^{\max} & \,& t \in T \label{eq_day_num_unloads}\\
y^{\text{in}}_{s,k,t} &\le \underbar{v}_{s}\cdot \gamma_{s,t} & \,& s \in S, t \in T,~k \in K_s \label{eq_unl_ub} \end{align} \end{subequations}
\textit{\underline{Unload Inventory Percentage Constraints}}: \eqref{eq_minunl_pct} enforces that the percentage of inventory unloaded from each supply barge remains above the required minimum $\underbar{ulp}^{\min}_s$ on each day that unloading occurs.
\begin{align}
\displaystyle \sum_{k \in K_s} y^{\text{in}}_{s,k,t} &\ge ( \underbar{v}_s \cdot \underbar{ulp}^{\min}_s) \cdot \gamma_{s,t} & s \in S, t \in T \label{eq_minunl_pct} \end{align}
\textit{\underline{Unload Time Gap Constraints}}: In an effort to minimize the cost associated with barges remaining in the area for an extended time, these constraints limit the time duration for a supply barge to remain in the unloading area. To accomplish this, constraints are included to limit the interval $[t^{\text{first}}_s,t^{\text{last}}_s]$. Expression \eqref{eq_minunltime} ensures that $t^{\text{first}}_s$ is no later than the first period that any inventory is unloaded for barge $s$. Expression \eqref{eq_maxunltime} enforces that $t^{\text{last}}_s$ is no earlier than the last period that any inventory is unloaded for barge $s$. Finally, \eqref{eq_unlwindow} enforces that the difference between the first and last unloading times for each barge is no larger than the allotted time gap. Although $t^{\text{first}}_s$ may assume a value earlier than the actual earliest unload time and $t^{\text{last}}_s$ may assume a value later than the actual last unload time, the modeled interval $[t^{\text{first}}_s,t^{\text{last}}_s]$ always contains the actual unload interval; hence, if the modeled interval satisfies the gap limit, so does the actual interval, and this set of constraints is sufficient.
\begin{subequations} \begin{align}
&t^{\text{first}}_s \le t \cdot \gamma_{s,t} + H(1 - \gamma_{s,t})& \,& s \in S, t \in T \label{eq_minunltime}\\
&t^{\text{last}}_s \ge t \cdot \gamma_{s,t} -H(1-\gamma_{s,t})& \,& s \in S, t \in T \label{eq_maxunltime}\\
&t^{\text{last}}_s - t^{\text{first}}_s \le \underbar{ult}^{\max} \label{eq_unlwindow} & \,& s \in S \end{align} \end{subequations}
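As a brief numerical illustration (with hypothetical values), suppose $\underbar{ult}^{\max} = 7$ and barge $s$ is unloaded on days $12$ and $16$, so that $\gamma_{s,12} = \gamma_{s,16} = 1$. Then \eqref{eq_minunltime} forces $t^{\text{first}}_s \le 12$, \eqref{eq_maxunltime} forces $t^{\text{last}}_s \ge 16$, and \eqref{eq_unlwindow} requires $t^{\text{last}}_s - t^{\text{first}}_s \le 7$, which is satisfiable (e.g., $t^{\text{first}}_s = 12$, $t^{\text{last}}_s = 16$); unload days $12$ and $20$, by contrast, would force a gap of at least $8$ and be infeasible.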
All of the constraints described above are linear except for the chemical characteristic constraints described by \eqref{eq_infspec},\eqref{eq_feedspecinv}, which contain bilinear terms. MILP approximations for these constraints are formulated in Section~\ref{ssec_MILP-approx}.
\subsection{Reformulation without spec variables $\SPEC^{\text{prod}}_{q,t}$ \label{ssec_prod}} For the implementation of this model, we adjust the model to eliminate terms consisting of a single spec variable ($f$ or $\SPEC^{\text{prod}}$), restricting the model to the product terms $f \cdot v$ and $\SPEC^{\text{prod}} \cdot v$, referred to as spec-volume terms. In this way, after discretization, we no longer need to track these specs explicitly, relying solely on the spec-volume product terms. In addition, we eliminate the $\SPEC^{\text{prod}}_{q,t}$ and $v^{\text{prod}}_{t}$ variables.
The initial constraint \eqref{eq_specInv} is replaced with
\begin{align}
f_{k,q,0} \cdot v^{\text{end}}_{k,0} &= \underbar{\SPEC}_{k,q,0} \cdot \underbar{v}_{k,0} & k \in K,~ q \in Q \label{eq_specinv_2}\tag{\ref{eq_specInv}-prod}
\end{align}
By multiplying \eqref{eq_outflow} by $f_{k,q,t}$ we obtain \begin{align}
f_{k,q,t} \cdot v^{\text{mid}}_{k,t} - f_{k,q,t} \cdot y^{\text{out}}_{k,t} &= f_{k,q,t} \cdot v^{\text{end}}_{k,t} & k \in K,~ t \in T
\label{eq_specOutflow} \tag{\ref{eq_outflow}-prod}
\end{align}
Note that the $v^{\text{mid}}_{k,t}$ variables could also be eliminated via the constraint \eqref{eq_outflow}. However, as this elimination choice can impact the quality of the MILP approximation, we defer this discussion to a later section.
We reformulate the constraints \eqref{eq_feedspec},\eqref{eq_feedspecrat} by multiplying through by volume (and any denominators) to obtain the following:
\begin{subequations} \begin{align}
\underbar{\SPEC}^{\min}_{q,t} \cdot v^{\text{prod}}_{t} &\le
\SPEC^{\text{prod}}_{q,t} \cdot v^{\text{prod}}_{t} \le
\underbar{\SPEC}^{\max}_{q,t} \cdot v^{\text{prod}}_{t} &
\,& t \in T,~q \in Q
\label{eq_feedspec_2}\tag{\ref{eq_feedspec}-prod}\\
\underbar{r}^{\min}_{q_1,q_2,t} \cdot\SPEC^{\text{prod}}_{q_2,t} \cdot v^{\text{prod}}_{t} &\leq \SPEC^{\text{prod}}_{q_1,t}\cdot v^{\text{prod}}_{t} \leq
\underbar{r}^{\max}_{q_1,q_2,t} \cdot\SPEC^{\text{prod}}_{q_2,t} \cdot v^{\text{prod}}_{t}
& \, & t \in T,~(q_1,q_2) \in Q_{\text{rat}} \label{eq_feedspecrat_2}\tag{\ref{eq_feedspecrat}-prod} \end{align} \end{subequations}
Finally, we substitute the definitions of $v^{\text{prod}}_{t}$ and $\SPEC^{\text{prod}}_{q,t} \cdot v^{\text{prod}}_{t}$ from \eqref{eq_invProduction} and \eqref{eq_feedspecinv} into constraints \eqref{eq_demand}, \eqref{eq_feedspec_2}, and \eqref{eq_feedspecrat_2} to obtain:
\begin{subequations} \begin{align}
\ds\sum_{k \in K} y^{\text{out}}_{k,t} &= \underbar d_t - d^{\text{unmet}}_t & \, & t \in T \label{eq_demand_2}
\tag{\ref{eq_demand}-sub}\\
\underbar{\SPEC}^{\min}_{q,t} \cdot \ds\sum_{k \in K} y^{\text{out}}_{k,t} &\le
\ds\sum_{k \in K} f_{k,q,t} \cdot y^{\text{out}}_{k,t} \le
\underbar{\SPEC}^{\max}_{q,t} \cdot \ds\sum_{k \in K} y^{\text{out}}_{k,t} &
\,& t \in T,~q \in Q \label{eq_feedspec_3}\\%\tag{\ref{eq_feedspec_2}-sub}\\
\underbar{r}^{\min}_{q_1,q_2,t}\cdot \ds\sum_{k \in K} f_{k,q_2,t} \cdot y^{\text{out}}_{k,t} &\leq
\ds\sum_{k \in K} f_{k,q_1,t} \cdot y^{\text{out}}_{k,t} \leq
\underbar{r}^{\max}_{q_1,q_2,t}\cdot \ds\sum_{k \in K} f_{k,q_2,t} \cdot y^{\text{out}}_{k,t}
& \, & t \in T,~(q_1,q_2) \in Q_{\text{rat}} \label{eq_feedspecrat_3} \end{align} \end{subequations}
To summarize, the MINLP-Mixing reformulation is the initial model, but with \eqref{eq_specInv}, \eqref{eq_outflow}, \eqref{eq_invProduction}, \eqref{eq_feedspecinv}, \eqref{eq_demand}, \eqref{eq_feedspec}, and \eqref{eq_feedspecrat} replaced by \eqref{eq_specinv_2}, \eqref{eq_specOutflow}, \eqref{eq_demand_2}, \eqref{eq_feedspec_3},\eqref{eq_feedspecrat_3}.
\subsection{Model MINLP-SPLIT: Volume-based reformulation \label{ssec_split}} The model presented above can be reformulated to keep track of volumes of chemical characteristics instead of percentages $f_{k,q,t}$, removing $f_{k,q,t}$ from the model. This reformulation results in a linear representation for the mixing constraints in \eqref{eq_feedspec_3},\eqref{eq_feedspecrat_3},\eqref{eq_infspec} and a modified flow balance constraint in \eqref{eq_specOutflow}. However, there is now a nonlinear representation for the splitting process, in which some volume is split into multiple parts (e.g., future inventory and demand feed). In the original model, the definition and usage of $f_{k,q,t}$ ensure that the outflow characteristics are correct. However, after reformulating without $f_{k,q,t}$, this is no longer the case, so we must explicitly ensure that the outflow characteristics are correct.
To enact the reformulation, we make the following substitutions:
\begin{subequations} \begin{align}
f_{k,q,t} \cdot v^{\text{end}}_{k,t} & = \vsf^{\fsf,\text{end}}_{k,q,t} & \, & k \in K, q \in Q, t \in T\\
f_{k,q,t} \cdot v^{\text{mid}}_{k,t} & = \vsf^{\fsf,\text{mid}}_{k,q,t} & \, & k \in K, q \in Q, t \in T\\
f_{k,q,t} \cdot y^{\text{out}}_{k,t} & = \ysf^{\fsf,\text{out}}_{k,q,t} & \, & k \in K, q \in Q, t \in T \end{align} \end{subequations}
Thus, \eqref{eq_specOutflow}, \eqref{eq_infspec}, \eqref{eq_feedspec_3} and \eqref{eq_feedspecrat_3} are written as
\begin{subequations} \begin{align}
\vsf^{\fsf,\text{end}}_{k,q,t} &= \vsf^{\fsf,\text{mid}}_{k,q,t} - \ysf^{\fsf,\text{out}}_{k,q,t} & \,& k \in K, q \in Q, t \in T
\label{eq_specOutflow_2}\\% \tag{\ref{eq_specOutflow}-vol}\\
\vsf^{\fsf,\text{mid}}_{k,q,t} &= \displaystyle \sum_{s \in S_k} \underbar{\SPEC}_{s,q} \cdot y^{\text{in}}_{s,k,t} + \vsf^{\fsf,\text{end}}_{k,q,t-1}
& \,& k \in K, q \in Q, t \in T\label{eq_infspec_2}\tag{\ref{eq_infspec}-vol}\\
\underbar{\SPEC}^{\min}_{q,t} \cdot \ds\sum_{k \in K} y^{\text{out}}_{k,t} &\le
\ds\sum_{k \in K} \ysf^{\fsf,\text{out}}_{k,q,t} \le
\underbar{\SPEC}^{\max}_{q,t} \cdot \ds\sum_{k \in K} y^{\text{out}}_{k,t} &
\,& t \in T,~q \in Q \label{eq_feedspec_4}\\%\tag{\ref{eq_feedspec_3}-vol}\\
\underbar{r}^{\min}_{q_1,q_2,t}\cdot \ds\sum_{k \in K} \ysf^{\fsf,\text{out}}_{k,q_2,t} &\leq
\ds\sum_{k \in K} \ysf^{\fsf,\text{out}}_{k,q_1,t} \leq
\underbar{r}^{\max}_{q_1,q_2,t}\cdot \ds\sum_{k \in K} \ysf^{\fsf,\text{out}}_{k,q_2,t}
& \, & t \in T,~(q_1,q_2) \in Q_{\text{rat}} \label{eq_feedspecrat_4} \end{align} \end{subequations}
In addition, the initial condition \eqref{eq_specinv_2} is replaced with
\begin{align}
\vsf^{\fsf,\text{end}}_{k,q,0} &= \underbar{\SPEC}_{k,q,0} \cdot \underbar{v}_{k,0} & k \in K,~ q \in Q \label{eq_specinv_3}
\end{align}
Now, to enforce the requirement that the chemical characteristics sent to production feed match the chemical characteristics in the tank; that is:
\begin{subequations} \begin{align}
f_{k,q,t} = \dfrac{\vsf^{\fsf,\text{mid}}_{k,q,t}}{v^{\text{mid}}_{k,t}} = \dfrac{\ysf^{\fsf,\text{out}}_{k,q,t}}{y^{\text{out}}_{k,t}} \end{align} \end{subequations}
we add the following nonlinear constraint
\begin{subequations} \begin{align}
\vsf^{\fsf,\text{mid}}_{k,q,t} \cdot y^{\text{out}}_{k,t} &= \ysf^{\fsf,\text{out}}_{k,q,t} \cdot v^{\text{mid}}_{k,t} & \, & k \in K,~ q \in Q,~ t \in T
\label{eq_split} \end{align} \end{subequations}
To summarize, the MINLP-Splitting model is the initial model, but with \eqref{eq_specInv},\eqref{eq_outflow},\eqref{eq_invProduction},\eqref{eq_infspec},\eqref{eq_feedspecinv},\eqref{eq_demand},\eqref{eq_feedspec},\eqref{eq_feedspecrat} replaced by \eqref{eq_specinv_3}, \eqref{eq_specOutflow_2}, \eqref{eq_demand_2}, \eqref{eq_feedspec_4}, \eqref{eq_feedspecrat_4}, and \eqref{eq_split}.
\section{MILP Approximation Methods} We consider approaches that replace the nonlinear portions of the model in Section~\ref{ssec_prod} with MILP approximations in order to solve the problem more efficiently and to interact more easily with a rolling planning approach. The core change is that we will replace all products $y^{\text{in}}\cdot f$ and $y^{\text{out}}\cdot f$ with approximations. The new models, given sufficiently accurate approximations, will produce feasible flows $y^{\text{in}}$ and $y^{\text{out}}$ that completely determine the actions of the system.
For a spec variable $f \in [l,u]$, we describe it by a discrete part $\dot{f} \in \{l, l + \varepsilon, l+2\varepsilon, \dots, l+ \zeta \varepsilon\}$ and a continuous part $\Delta f \in [0,\varepsilon]$, where we choose $\varepsilon$ such that $u - \varepsilon < l + \zeta \varepsilon \leq u$. The discrete part $\dot{f}$ will be modeled with binary variables in a way known as the Normalized Multi-Parametric Disaggregation Technique (NMDT). For the products with $\Delta f$, we will consider two options: restricting the value to a midpoint or using the McCormick relaxation. That is, the two main ideas considered are the models:
\begin{align} x \cdot \tilde f &= \underbrace{x \cdot \dot{f}}_{\text{NMDT}} + \underbrace{x \cdot \Delta f}_{\Delta f \leftarrow \frac{\varepsilon}{2}} & \label{approx-model}\tag{Center}\\
x \cdot \tilde f &= \underbrace{x \cdot \dot{f}}_{\text{NMDT}} + \underbrace{x \cdot \Delta f}_{\text{McCormick}} & \label{NMDT-model} \tag{McCormick} \end{align}
When using \eqref{approx-model}, we relax \eqref{eq_infspec} to maintain flexibility on inflow volumes. We do this by replacing the left-hand side with its upper and lower bounds with respect to $\Delta f$. These choices, along with various relaxations of redundant constraints are described in detail in this section.
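To illustrate the decomposition, the following sketch (in Python; the function \texttt{decompose} and the clamping at the upper bound are our own illustrative choices) computes the base-2 digits and continuous remainder of a given spec value.
\begin{verbatim}
# Sketch: base-2 NMDT-style decomposition f = l + eps*k + df, where
# k = sum_i 2^(i-1)*alpha_i and df lies in [0, eps]. Illustrative only.
def decompose(f, l, u, n):
    eps = (u - l) / 2 ** n
    k = min(int((f - l) / eps), 2 ** n - 1)   # index of the active cell
    alpha = [(k >> (i - 1)) & 1 for i in range(1, n + 1)]
    df = f - l - k * eps                      # continuous remainder
    return alpha, df, eps

# The 'Center' model then approximates x*f by x*(l + k*eps + eps/2),
# while the 'McCormick' model keeps x*df and relaxes the product.
\end{verbatim}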
We will first describe the common linearizations that we will consider, then describe our two competing models, and then discuss rolling planning approaches.
\subsection{Linearization}
We define here the two types of linearizations that we will consider. First, the McCormick envelope is a 3-dimensional polytope $\mathcal{M}(x,\beta)$: the smallest convex set containing the graph of $\mathsf{x}^\mathsf{\beta} = x \cdot \beta$ subject to upper and lower bounds on $\beta$ and $x$. We will assume $0 \leq \beta \leq 1$ and $\underbar{x}^{\min} \leq x \leq \underbar{x}^{\max}$. The McCormick envelope is \begin{equation} \mathcal{M}(x, \beta) = \left\{ (x, \beta, \mathsf{x}^\mathsf{\beta}) \in [\underbar{x}^{\min},\underbar{x}^{\max}]\times [0,1] \times \bb R \ : \
\begin{aligned}
\underbar{x}^{\min} \cdot \beta \le & \, \mathsf{x}^\mathsf{\beta} \le \underbar{x}^{\max}\cdot \beta \\
x - \underbar{x}^{\max}\cdot (1-\beta) \le &\, \mathsf{x}^\mathsf{\beta} \le x - \underbar{x}^{\min}\cdot (1-\beta)
\end{aligned}
\right\}.
\label{eq:McCormick} \end{equation}
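As a small sketch (in Python; the tolerance handling is our own illustrative choice), membership of a point in $\mathcal{M}(x,\beta)$ can be checked as follows.
\begin{verbatim}
# Sketch: check whether (x, beta, xb) lies in the McCormick envelope of
# xb = x*beta, with beta in [0, 1] and x in [x_min, x_max].
def in_mccormick(x, beta, xb, x_min, x_max, tol=1e-9):
    return (x_min - tol <= x <= x_max + tol
            and -tol <= beta <= 1 + tol
            and x_min * beta - tol <= xb <= x_max * beta + tol
            and x - x_max * (1 - beta) - tol <= xb
            and xb <= x - x_min * (1 - beta) + tol)
\end{verbatim}
Note that when $\beta \in \{0,1\}$, the four envelope inequalities collapse to $\mathsf{x}^\mathsf{\beta} = x \cdot \beta$, so the envelope is exact for binary $\beta$; this is what allows products of flows with binary digit variables to be linearized without loss below.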
More generally, suppose we have $\beta_0, \dots, \beta_m \in \{0,1\}$ such that $\sum_{j=0}^m \beta_j = 1$, and hence $\sum_{j=0}^m x\cdot \beta_j = x$. Defining $\mathsf{x}^{\mathsf{\beta}_j} = x \cdot \beta_j$, we can then model the convex hull of all such solutions as \begin{equation} \mathcal{M}^m(x, \beta) = \left\{ (x, \beta, \mathsf{x}^\mathsf{\beta}) \in [\underbar{x}^{\min},\underbar{x}^{\max}]\times [0,1]^{m+1} \times \bb R^{m+1} \ : \
\begin{aligned}
\sum_{j=0}^m \beta_j=& \, 1, \ \ \ \,\,
\sum_{j=0}^m \mathsf{x}^{\mathsf{\beta}_j}= \, x\\
\underbar{x}^{\min} \cdot \beta_j \le & \, \mathsf{x}^{\mathsf{\beta}_j} \le \underbar{x}^{\max}\cdot \beta_j \\
x - \underbar{x}^{\max}\cdot (1-\beta_j) \le &\, \mathsf{x}^{\mathsf{\beta}_j} \le x - \underbar{x}^{\min}\cdot (1-\beta_j)
\end{aligned}
\right\}.
\label{eq:McCormick-general} \end{equation} In the special case where $m=1$, the set $\mathcal M^1(x,\beta)$ is a bijective lifting of $\mathcal M(x,\beta)$.
\subsection{Discretization} \label{ssec_MILP-approx} We discuss two methods of linearizing or relaxing bilinear products: the Normalized Multiparametric Disaggregation Technique (NMDT)~\cite{Castro2015c} and McCormick envelopes. The discrete variable $\dot{f}$, contained in the interval $[l,u]$, is converted into binary variables as \begin{equation} \dot{f} = \lambda_0 + \varepsilon \sum_{i=1}^n \sum_{j=0}^{m} (j\, \lambda_i)\cdot \alpha_{ij}, \ \hspace{1cm} \, \sum_{j=0}^m \alpha_{ij} =1 \text{ for all } i=1, \dots, n, \label{eq_disc_idea} \end{equation} where $\alpha_{ij} \in \{0,1\}$ for all $i,j$ and $\lambda_i$ are appropriately chosen scalars. Given a desired level of precision $0 < \hat{\varepsilon} \le u-l$, the choices of $\lambda_i$ (and resulting choices for $\varepsilon$, $m$, and $n$) will affect the number of binary variables $\alpha_{ij}$ that are needed to reach the desired precision.
\begin{enumerate}
\item {Multiparametric Disaggregation Technique} with base $b$ provided $l \geq 0$: \begin{equation}
\lambda_0 = 0, \lambda_i = b^{i-1}, m=b-1; \rightarrow \
\varepsilon = b^{\floor{\log_{b}(\hat{\varepsilon})}},
n = \ceil{\log_{b}(\tfrac{u}{\varepsilon})}. \tag{MDT} \label{eq:MDT} \end{equation}
\item {Normalized Multiparametric Disaggregation Technique, with base $b$}:
\begin{equation}
\lambda_0 = l, \lambda_i = b^{i-1}, m=b-1;
n = \ceil{\log_b \lrp{\tfrac{u-l
}{\hat{\varepsilon}}}}, \varepsilon = (u-l) b^{-n}. \tag{NMDT} \label{eq:NMDT}
\end{equation}
\item Monolithic Disaggregation:
\begin{equation}
\lambda_0 = l, n=1, \lambda_1 = 1;
m = \ceil{\tfrac{u-l}{\hat{\varepsilon}}}, \varepsilon = (u-l)/m. \tag{Mono} \label{eq:mono}
\end{equation} \end{enumerate} \eqref{eq:MDT} discretizes $\dot{f}$ independently of the lower bound $l$, while \eqref{eq:NMDT} and \eqref{eq:mono} use the lower bound and can be interpreted as discretizing $\dot{f}$ after normalizing the bounds $[l,u]$ to $[0,1]$. Incorporating the lower bound is crucial for reducing the number of binary variables needed (and hence the complexity of the model) to obtain a certain precision $\varepsilon$. Furthermore, the exponential approach of \eqref{eq:NMDT} is vastly preferred over the unary approach of \eqref{eq:mono}, requiring only $\mathcal{O}(\log(1/\hat{\varepsilon}))$ binary variables as opposed to $\mathcal{O}(1/\hat{\varepsilon})$.
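The following sketch (in Python; the example bounds and precision are hypothetical) computes $n$, the achieved precision $\varepsilon$, and the resulting binary-variable count for \eqref{eq:NMDT} and \eqref{eq:mono}.
\begin{verbatim}
import math

# Sketch: digits n, achieved precision eps, and binary-variable count
# for the NMDT and monolithic schemes, given bounds [l, u] and desired
# precision eps_hat. Example values below are illustrative.
def nmdt(l, u, b, eps_hat):
    n = math.ceil(math.log((u - l) / eps_hat, b))
    eps = (u - l) * b ** (-n)
    return n, eps, (b - 1) * n

def mono(l, u, eps_hat):
    m = math.ceil((u - l) / eps_hat)
    return 1, (u - l) / m, m

# A spec in [0.10, 0.35] tracked to precision 0.001:
print(nmdt(0.10, 0.35, 2, 1e-3))  # (8, ~0.00098, 8 binaries)
print(mono(0.10, 0.35, 1e-3))     # (1, 0.001, 250 binaries)
\end{verbatim}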
In the originating work for \eqref{eq:NMDT} \cite{Castro2015c}, the same number of digits $n$ (and hence, precision $\varepsilon$) is applied to all discretized variables (to maintain the same precision in the \textit{normalized} space), while in the version presented here, $n$ is computed based on the desired precision $\hat{\varepsilon}$ for each individual discretized variable (to maintain similar precision in the \textit{original} space).
Combining the disaggregation \eqref{eq_disc_idea} of $\dot{f}$ with the product terms $\mathsf{x}^\mathsf{f} = x \cdot f$ and $\mathsf{x}^{\mathsf{\alpha}_{ij}} = x \cdot \alpha_{ij}$ yields the set
\begin{equation} \mathcal{D}(x,f, \alpha) = \left\{
(x,f,\dot{f},\Delta f, \alpha, \mathsf{x}^\mathsf{f}, \mathsf{x}^\mathsf{\alpha})
:\ \begin{aligned}
\, & f = \dot{f} + \Delta f\\
\,&\dot{f} = \lambda_0 + \varepsilon \sum_{i=1}^n \sum_{j=1}^{m} (j \,\lambda_i)\cdot \alpha_{ij}& \\
\,&\sum_{j=0}^m \alpha_{ij} = 1 &\, i=1, \dots, n\\
\,&\mathsf{x}^\mathsf{f} = x \cdot f =\lambda_0 x + x \cdot \Delta f + \varepsilon \sum_{i=1}^n \sum_{j=0}^{m} (j \,\lambda_i)\cdot \mathsf{x}^{\mathsf{\alpha}_{ij}}& \\
\,&\mathsf{x}^{\mathsf{\alpha}_{ij}} = x \cdot \alpha_{ij} & i=1,\dots, n,\, j = 0, \dots, m\\
\, & \underbar{x}^{\min} \leq x \leq \underbar{x}^{\max}\\
\, & \alpha_{ij} \in \{0,1\} & i=1,\dots, n,\, j = 0, \dots, m
\end{aligned}\right\}. \end{equation}
Most of the nonlinear products are now linearized and we drop the variables $\dot{f}$ and $f$. In this way, $f$ is now only recorded implicitly (approximately) through product terms $\mathsf{x}^{\mathsf{\alpha}_{ij}}$, $\mathsf{x}^\mathsf{f}$, and $\mathsf{x}^{\Delta \mathsf{f}}$. This produces the set
\begin{equation} \mathcal{\tilde D}(x,f,\alpha) = \left\{ (x, \alpha, \mathsf{x}^\mathsf{f}, \mathsf{x}^{\Delta \mathsf{f}}, \mathsf{x}^\mathsf{\alpha}) \ :\ \begin{aligned}
\,& \mathsf{x}^\mathsf{f} = \lambda_0 \cdot x + \mathsf{x}^{\Delta\mathsf{f}}+ \varepsilon \sum_{i=1}^n \sum_{j=0}^{m} (j \,\lambda_i)\cdot \mathsf{x}^{\mathsf{\alpha}_{ij}}& \\
\, & (x,\alpha_i, \mathsf{x}^{\alpha_i}) \in \mathcal M^m(x,\alpha_i) & i=1,\ldots, n\\
\, & \alpha_{ij} \in \{0,1\} & i=1,\ldots, n,\, j = 0, \ldots, m
\end{aligned}\right\},
\label{eq:tilde-D} \end{equation} where $\lambda_0$, $n$, and $\varepsilon$ are computed from a specified $\hat \varepsilon$, $\underbar{x}^{\min}$ and $\underbar{x}^{\max}$.
The remaining nonlinear product $x \cdot \Delta f$, approximated by $\mathsf{x}^{\Delta \mathsf{f}}$, will be dealt with in two different ways.
The binarization in \cite{Gupte2016} is similar to \eqref{eq:NMDT} with $b=2$ after projecting out the $\alpha_{i0}$ variables using the equation $\sum_{j=0}^1 \alpha_{ij} = 1$, i.e., $\alpha_{i0} + \alpha_{i1} = 1$.
\subsubsection{Choosing base $b=2$ reduces variables.} We suggest using $b = 2$, since it uses asymptotically fewer binary variables than any other choice $b > 2$ (as $\hat{\varepsilon} \to 0^+$).
\begin{proposition} Let $z_{\hat{\varepsilon}}(b) \vcentcolon= (b-1) \cdot n = (b-1) \ceil{\log_b\lrp{\dfrac{u-l}{\hat{\varepsilon}}}}$ be the number of binary variables used to represent $\dot{f}$, and let $b_1,b_2$ be two different bases. Then $z'(b_1,b_2) \vcentcolon= \displaystyle \lim_{\hat{\varepsilon} \to 0^+} \dfrac{z_{\hat\varepsilon}(b_1)}{z_{\hat\varepsilon}(b_2)} = \dfrac{(b_1-1)\ln(b_2)}{(b_2-1) \ln(b_1)}$. \end{proposition} \begin{proof} Asymptotically, as $\hat{\varepsilon} \to 0^+$, we can remove the ceiling function in the definition of $z_{\hat{\varepsilon}}$. Hence \begin{equation*}
z'(b_1,b_2) = \displaystyle \lim_{\hat{\varepsilon} \to 0^+} \dfrac
{(b_1-1) \log_{b_1} \lrp{\dfrac{u-l}{\hat{\varepsilon}}}}
{(b_2-1) \log_{b_2} \lrp{\dfrac{u-l}{\hat{\varepsilon}}}}
= \dfrac{(b_1-1)}{(b_2-1)}\log_{b_1}(b_2) = \dfrac{(b_1-1)\ln(b_2)}{(b_2-1) \ln(b_1)}. \end{equation*}
\end{proof}
Because $z_{\hat \varepsilon}(b)$ is an increasing function of $b$, we have $z'(b,2)>1$ for all $b>2$. Using the proposition, we compute $z'(b,2)$ for various values of $b$ in Table~\ref{tab:my_label}. In~\cite{Castro2015c}, Castro uses $b=10$. As the table shows, this binarization will use nearly three times as many binary variables. \begin{table}[H]
\centering
\begin{tabular}{c|cccccccc}
$b$ & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\
\hhline{-|--------}
$z'(b,2)$
& 1.26186
& 1.5
& 1.72271
& 1.93426
& 2.13724
& 2.33333
& 2.52372
& 2.70927
\end{tabular}
\caption{Asymptotic number of binary variables required for base $b$, as a multiple of the number required for base $2$.}
\label{tab:my_label} \end{table} In summary, we suggest using NMDT with base $2$, because it requires the fewest binary variables to achieve a desired accuracy.
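The table entries can be reproduced directly from the proposition; a quick standalone check in Python:
\begin{verbatim}
import math

def z_ratio(b, b2=2):
    # Asymptotic ratio z'(b, b2) = (b - 1) ln(b2) / ((b2 - 1) ln(b)).
    return (b - 1) * math.log(b2) / ((b2 - 1) * math.log(b))

print([round(z_ratio(b), 5) for b in range(3, 11)])
# [1.26186, 1.5, 1.72271, 1.93426, 2.13724, 2.33333, 2.52372, 2.70927]
\end{verbatim}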
\subsubsection{Implied and redundant equations}
\label{sssec_MILP_var_elim}
Constraint~\eqref{eq_outflow} (and subsequently~\eqref{eq_specOutflow}) couples the variables $v^{\text{mid}}_{k,t},y^{\text{out}}_{k,t}$, and $v^{\text{end}}_{k,t}$. For convenience, let us write $x_1 = v^{\text{mid}}_{k,t}$, $x_2 = y^{\text{out}}_{k,t}$, $x_3 = v^{\text{end}}_{k,t}$, and $f = f_{k,q,t}$.
That is, \begin{align}
x_1&=x_2+x_3. \label{wcouple} \end{align} As a consequence of \eqref{wcouple}, by multiplying through by $f, \Delta f$, and $\alpha_i$, we obtain three sets of valid equations
\begin{subequations} \begin{align}
\mathsf{x}^\mathsf{f}_1&=\mathsf{x}^\mathsf{f}_2+\mathsf{x}^\mathsf{f}_3, \label{vcouple}\\
\mathsf{x}^{\Delta \mathsf{f}}_1 &= \mathsf{x}^{\Delta \mathsf{f}}_2 + \mathsf{x}^{\Delta \mathsf{f}}_3,\label{ucouple_delta}\\
\mathsf{x}^{\mathsf{\alpha}_i}_1 &= \mathsf{x}^{\mathsf{\alpha}_i}_2 + \mathsf{x}^{\mathsf{\alpha}_i}_3 & i=1,\dots, n. \label{ucouple} \end{align} \end{subequations}
Consider the approximation of $f$ and the resulting product form of the equations
\begin{subequations} \begin{align}
f &= l + \Delta f+ \varepsilon \ds\sum_{i=1}^n 2^{i-1} \cdot \alpha_i\\
\mathsf{x}^\mathsf{f}_{{\kappa}} &=l \cdot x_{{\kappa}} + \mathsf{x}^{\Delta \mathsf{f}}_\kappa + \varepsilon \ds\sum_{i=1}^n 2^{i-1}\cdot \mathsf{x}^{\mathsf{\alpha}_{i}}_{\kappa} && {\kappa} =1,2,3 \label{vreps}\\
\mathsf{x}^{\mathsf{\alpha}_{i}}_{{\kappa}} &= x_{{\kappa}} \cdot \alpha_i && i =1,\dots,n,\ {\kappa} = 1,2,3 \label{udef}
\end{align} \end{subequations}
Of key importance here for model reduction and a stronger formulation are the following observations: (1) bilinear terms can be eliminated using the above equations, and (2) applying McCormick relaxations to these bilinear terms before variable elimination produces a tighter formulation.
First, although other model choices are possible, we project out the variables associated with $\kappa = 1$. Conveniently, equations \eqref{vreps} and \eqref{udef} for $\kappa = 1$ are both implied by the other equations. In fact, the variables $\mathsf{x}^{\mathsf{f}}_1$, $\mathsf{x}^{\Delta \mathsf{f}}_1$, and $\mathsf{x}^{\mathsf{\alpha}_i}_1$ can be projected out of the model by substitution using equations \eqref{wcouple}, \eqref{vcouple}, \eqref{ucouple_delta}, and \eqref{ucouple}; the equations \eqref{vreps} and \eqref{udef} for $\kappa = 1$ then become tautologies. This observation provides convenient reduction choices in the model in terms of both variables and constraints.
Second, applying the projection/linear relaxation in the right order can improve the model. In particular, let $\mathcal F$ represent the feasible set given by the above equations, let $\mathcal M^{123}(x,\alpha_\kappa)$ be the McCormick relaxation in the space of all variables (similarly, $\mathcal M^{23}(x,\alpha_\kappa)$ in the space of variables associated with $\kappa = 2,3$), let $\proj_{23}$ denote projection onto the space of variables with $\kappa = 2,3$, and let $\relax$ denote the removal of the bilinear equations \eqref{udef}.
Then we have the strict inclusion \begin{equation} \mathrm{relax}\left(\proj_{23}\left(\mathcal F \cap \bigcap_{\kappa = 1,2,3}\mathcal{M}^{123}(x, \alpha_\kappa)\right)\right)\, \subsetneq \, \mathrm{relax}\left(\proj_{23}(\mathcal F) \cap \bigcap_{\kappa = 2,3}\mathcal{M}^{23}(x, \alpha_\kappa)\right). \end{equation}
That is, using the inequalities from the McCormick relaxation from the $\kappa=1$ variables improves the formulation.
\subsection{MILP Models} We now outline the two models we consider and then discuss how to integrate them with rolling planning. \subsubsection{Center Approximation} \label{sssec_approx_method} We create a mixed-integer linear approximation to the main problem that only approximately tracks the constraints on specifications. To check feasibility of the resulting flows $y^{\text{in}}$ and $y^{\text{out}}$, these solutions must be plugged into the main model to recover spec values and ensure the constraints are met.
The approximation error will be guided by how fine the discrete approximation of the spec variables is. For the model, a precision $\hat{\varepsilon}_{q}$ is chosen such that, given an accurate discretization, we should have $|f_{k,q,t} - \dot{f}_{k,q,t}| \leq \hat{\varepsilon}_q$. The choice of $\hat{\varepsilon}_{q}$, as discussed earlier, defines the choices of $\varepsilon_q$, $n_q$, and $m_q$, which are implied parameters in the sets $\mathcal{D}(x,f_{k,q,t}, \alpha)$. Thus, for each spec $q$, there are (potentially) unique $\varepsilon$, $n$, and $m$ that are used for the set $\mathcal{D}(x,f_{k,q,t}, \alpha)$. We write all products in terms of single variables and we relax \eqref{eq_infspec} to increase the number of feasible solutions to the approximate model.
We thus model \eqref{eq_specOutflow},\eqref{eq_infspec},\eqref{eq_feedspec_3},\eqref{eq_feedspecrat_3} as
\begin{subequations} \begin{align} \vsf^{\fsf,\text{mid}}_{k,q,t} - \ysf^{\fsf,\text{out}}_{k,q,t} &= \vsf^{\fsf,\text{end}}_{k,q,t} & \,& k \in K, q \in Q, t \in T, \label{eq_specOutflow_disc}\\%\tag{\ref{eq_specOutflow}-appx}\\
\vsf^{\fsf,\text{mid}}_{k,q,t} - \frac{\varepsilon_{q}}{2} \cdot v^{\text{mid}}_{k,q,t}
&\le \displaystyle \sum_{s \in S_k} \underbar{\SPEC}_{s,q} \cdot y^{\text{in}}_{s,k,t} + \vsf^{\fsf,\text{end}}_{k,q,t-1} & \,& k \in K, q \in Q, t \in T,\label{eq_infspec_disc_ub}\\% \tag{\ref{eq_infspec}-relax-ub}\\
\vsf^{\fsf,\text{mid}}_{k,q,t} + \frac{\varepsilon_{q}}{2} \cdot v^{\text{mid}}_{k,q,t}
&\ge \displaystyle \sum_{s \in S_k} \underbar{\SPEC}_{s,q} \cdot y^{\text{in}}_{s,k,t} + \vsf^{\fsf,\text{end}}_{k,q,t-1} & \,& k \in K, q \in Q, t \in T,\label{eq_infspec_disc_lb}\\%\\\tag{\eqref{eq_infspec}-relax-lb}\\
\underbar{\SPEC}^{\min}_{q,t} \cdot \ds\sum_{k \in K} y^{\text{out}}_{k,t} &\le
\ds\sum_{k \in K} \ysf^{\fsf,\text{out}}_{k,q,t} \le
\underbar{\SPEC}^{\max}_{q,t} \cdot \ds\sum_{k \in K} y^{\text{out}}_{k,t} &
\,& t \in T,~q \in Q, \label{eq_feedspec_3_disc}\\%\tag{\ref{eq_feedspec_3}-appx}\\
\underbar{r}^{\min}_{q_1,q_2,t}\cdot \ds\sum_{k \in K} \ysf^{\fsf,\text{out}}_{k,q_2,t} &\leq
\ds\sum_{k \in K} \ysf^{\fsf,\text{out}}_{k,q_1,t} \leq
\underbar{r}^{\max}_{q_1,q_2,t} \cdot\ds\sum_{k \in K} \ysf^{\fsf,\text{out}}_{k,q_2,t}
& \, & t \in T,~(q_1,q_2) \in Q_{\text{rat}}, \label{eq_feedspecrat_3_disc} \end{align} \end{subequations}
The remaining constraints are generated using $\mathcal{\tilde D}$ with $\lambda_0 = \underbar{\SPEC}^{\min}_{k,q}$, $\Delta f_{k,q,t} = \frac{\varepsilon_{q}}{2}$, and $b=2$:
\begin{subequations} \begin{align}
&(v^{\text{mid}}_{k,q,t}, \alpha_{k,q,t}, \vsf^{\fsf,\text{mid}}_{k,q,t},\vsf^{\Delta\fsf,\text{mid}}_{k,q,t}, \vsf^{\alpha,\text{mid}}_{k,q,t} ) \in \mathcal{\tilde D}(v^{\text{mid}}_{k,q,t},f_{k,q,t}, \alpha_{k,q,t} ) & k \in K, q \in Q, t \in T,\label{eq_tildeD_vmid} \\
&(v^{\text{end}}_{k,q,t}, \alpha_{k,q,t}, \vsf^{\fsf,\text{end}}_{k,q,t},\vsf^{\Delta\fsf,\text{end}}_{k,q,t}, \vsf^{\alpha,\text{end}}_{k,q,t} ) \in \mathcal{\tilde D}(v^{\text{end}}_{k,q,t},f_{k,q,t}, \alpha_{k,q,t} ) & k \in K, q \in Q, t \in T,\label{eq_tildeD_vend}\\
&(y^{\text{out}}_{k,q,t}, \alpha_{k,q,t}, \ysf^{\fsf,\text{out}}_{k,q,t}, \ysf^{\Delta\fsf,\text{out}}_{k,q,t}, \ysf^{\alpha,\text{out}}_{k,q,t} ) \in \mathcal{\tilde D}(y^{\text{out}}_{k,q,t},f_{k,q,t}, \alpha_{k,q,t})& k \in K, q \in Q, t \in T. \label{eq_tildeD_yout} \end{align} \end{subequations}
Note that each $\alpha_{k,q,t}$ is a vector of $n_{k,q,t}$ binary variables, which may vary in number depending on the accuracy used to approximate $f_{k,q,t}$.
\subsubsection{Redundant equations} Defining both sets of constraints in \eqref{eq_tildeD_vmid}, \eqref{eq_tildeD_vend} creates some redundant inequalities. In particular, after omitting the $k,q,t$ indices for convenience, the inequalities
\begin{subequations} \begin{align} & \vsf^{\alpha,\text{mid}}_{i,j} \ge \underbar{v}^{\min}_k \cdot \alpha_{i,j} & i \in I_{k,q,t}, j \in \{0,1\} \label{eq_AVOLSPLdef_lb}\\ & \vsf^{\alpha,\text{end}}_{i,j} \le \underbar{v}^{\max}_k \cdot \alpha_{i,j} & i \in I_{k,q,t}, j \in \{0,1\} \label{eq_AVOLINVdef_ub} \end{align} \end{subequations}
are redundant due to \eqref{eq_outflow} and the non-negativity of all volume variables.
\subsubsection{Tightening feasible region to produce feasible solutions\label{sssec_tighten}} Because the proposed model is an approximation of the original problem, it might produce a schedule with flows that result in infeasible specs, violating either the spec bounds or the spec ratio bounds. However, in most cases, we observed the magnitude of these violations to be within $\hat{\varepsilon}/2$ for the proposed discretization methods.
Thus, to mitigate this error, we add a buffer of $\hat{\varepsilon}_q/2$ to each spec requirement, so that the interval $[\underbar{\SPEC}^{\min}_{t,q}, \underbar{\SPEC}^{\max}_{t,q}]$ is replaced with $[\underbar{\SPEC}^{\min}_{t,q}+\hat{\varepsilon}_q/2, \underbar{\SPEC}^{\max}_{t,q}-\hat{\varepsilon}_q/2]$.
As for the spec ratios, we used differentials of the ratio to obtain an upper approximation for the magnitude of the buffer to use: if the spec ratio to control is $\dfrac{f_{q_1}}{f_{q_2}}$, where $f_{q_1}$ and $f_{q_2}$ are the spec qualities involved, the buffer to use is given via the differential: \begin{equation}
\Delta\lrp{\dfrac{f_{q_1}}{f_{q_2}}} = \dfrac{1}{f^{\min}_{q_2}} \Delta f_{q_1} + \dfrac{f^{\max}_{q_1}}{\lrp{f^{\min}_{q_2}}^2} \Delta f_{q_2}, \end{equation} where $\Delta f_{q_1}$ and $\Delta f_{q_2}$ are the computed buffers $\hat{\varepsilon}/2$ for $f_{q_1}$ and $f_{q_2}$, respectively. Thus, the interval $[\underbar{r}^{\min}_{q_1,q_2,t},\underbar{r}^{\max}_{q_1,q_2,t}]$ enforced for $\lrp{\frac{f_{q_1}}{f_{q_2}}}$ is replaced by $[\underbar{r}^{\min}_{q_1,q_2,t} + \Delta\lrp{\frac{f_{q_1}}{f_{q_2}}},\underbar{r}^{\max}_{q_1,q_2,t} - \Delta\lrp{\frac{f_{q_1}}{f_{q_2}}}]$. These buffers proved effective in largely resolving the issue of incorrect feed specs and spec ratios for the problems tested. The buffers could be reduced if they would result in possible ranges for specs or spec ratios that are smaller than the range provided by a single discretization bin (based on initial tank and supply inventories and the parameter $\hat{\varepsilon}_q$ specified by the user). However, for the purposes of this work, we will assume that $\hat{\varepsilon}_q$ is small enough that the resulting ranges for the demand specs are sufficiently large (with width $\ge \hat{\varepsilon}$ after tightening) that feasibility issues do not become problematic for the proposed discretization scheme.
These buffers result in a replacement of the parameters $\underbar{\SPEC}^{\max}_{t,q}$, $\underbar{\SPEC}^{\min}_{t,q}$, $\underbar{r}^{\max}_{t,q}$, and $\underbar{r}^{\min}_{t,q}$ in Section~\ref{sec:Mathematical-Model} with their `buffered' counterparts, tightened further by the possible feed specs based on initial tank inventory and inbound supply specs.
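For illustration, this buffer arithmetic can be written out as follows (a sketch; \texttt{eps1}, \texttt{eps2}, \texttt{f1\_max}, and \texttt{f2\_min} are hypothetical stand-ins for $\hat\varepsilon_{q_1}/2$, $\hat\varepsilon_{q_2}/2$, $f^{\max}_{q_1}$, and $f^{\min}_{q_2}$):
\begin{verbatim}
def ratio_buffer(eps1, eps2, f1_max, f2_min):
    # Differential bound on the drift of f_{q1} / f_{q2}, given the
    # per-spec buffers eps1 and eps2 (each eps_hat / 2 in the text).
    return eps1 / f2_min + f1_max * eps2 / f2_min**2

def buffered_ratio_bounds(r_min, r_max, eps1, eps2, f1_max, f2_min):
    # Tighten the enforced ratio interval [r_min, r_max] by the buffer.
    d = ratio_buffer(eps1, eps2, f1_max, f2_min)
    return r_min + d, r_max - d
\end{verbatim}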
\subsubsection{McCormick Based Approximation} \label{sec:NMDT+McCormick} In a similar way, the final model for the original NMDT method (with McCormick envelopes around the continuous product terms) is given as follows:
\begin{subequations} \begin{align}
\vsf^{\fsf,\text{mid}}_{k,q,t} - \ysf^{\fsf,\text{out}}_{k,q,t} &= \vsf^{\fsf,\text{end}}_{k,q,t} & \,& k \in K, q \in Q, t \in T \label{eq_specOutflow_disc_NMDT}\\
\vsf^{\fsf,\text{mid}}_{k,q,t}
&=\displaystyle \sum_{s \in S_k} \underbar{\SPEC}_{s,q} \cdot y^{\text{in}}_{s,k,t} + \vsf^{\fsf,\text{end}}_{k,q,t-1} & \,& k \in K, q \in Q, t \in T \tag{\eqref{eq_infspec}-relax}\label{eq_infspec_disc_NMDT}\\
\underbar{\SPEC}^{\min}_{q,t} \cdot \ds\sum_{k \in K} y^{\text{out}}_{k,t} &\le
\ds\sum_{k \in K} \ysf^{\fsf,\text{out}}_{k,q,t} \le
\underbar{\SPEC}^{\max}_{q,t} \cdot \ds\sum_{k \in K} y^{\text{out}}_{k,t} &
\,& q \in Q, t \in T \label{eq_feedspec_3_disc_NMDT}\\
\ds\sum_{k \in K} \ysf^{\fsf,\text{out}}_{k,q_2,t} \cdot \underbar{r}^{\min}_{q_1,q_2,t} &\leq
\ds\sum_{k \in K} \ysf^{\fsf,\text{out}}_{k,q_1,t} \leq
\ds\sum_{k \in K} \ysf^{\fsf,\text{out}}_{k,q_2,t} \cdot \underbar{r}^{\max}_{q_1,q_2,t}
& \, & t \in T,~(q_1,q_2) \in Q_{\text{rat}} \label{eq_feedspecrat_3_disc_NMDT} \end{align} \end{subequations}
where, for all $k \in K$, $q \in Q$, $t \in T$,
\begin{subequations} \begin{align}
&(v^{\text{mid}}_{k,q,t},\Delta f_{k,q,t}, \alpha_{k,q,t}, \vsf^{\fsf,\text{mid}}_{k,q,t}, \vsf^{\alpha,\text{mid}}_{k,q,t} ) \in \mathcal{\tilde D}(v^{\text{mid}}_{k,q,t},f_{k,q,t}, \alpha_{k,q,t} ) & \KQT \\
&(v^{\text{end}}_{k,q,t},\Delta f_{k,q,t}, \alpha_{k,q,t}, \vsf^{\fsf,\text{end}}_{k,q,t}, \vsf^{\alpha,\text{end}}_{k,q,t} ) \in \mathcal{\tilde D}(v^{\text{end}}_{k,q,t},f_{k,q,t}, \alpha_{k,q,t} ) & \KQT\\
&(y^{\text{out}}_{k,q,t},\Delta f_{k,q,t}, \alpha_{k,q,t}, \ysf^{\fsf,\text{out}}_{k,q,t}, \ysf^{\alpha,\text{out}}_{k,q,t} ) \in \mathcal{\tilde D}(y^{\text{out}}_{k,q,t},f_{k,q,t}, \alpha_{k,q,t})& \KQT\\
& \Delta \vsf^{\fsf,\text{mid}}_{k,q,t} \in [0, \varepsilon \cdot \underbar{v}^{\max}_{k,q,t}] & \KQT\\
& \Delta \vsf^{\fsf,\text{end}}_{k,q,t} \in [0, \varepsilon \cdot \underbar{v}^{\max}_{k,q,t}] & \KQT\\
& \Delta \ysf^{\fsf,\text{out}}_{k,q,t} \in [0, \varepsilon \cdot \underbar d_{k,q,t}] & \KQT \end{align} \end{subequations}
Finally, the continuous product terms $\Delta \vsf^{\fsf,\text{mid}}_{k,q,t}$, $\Delta \vsf^{\fsf,\text{end}}_{k,q,t}$, and $\Delta \ysf^{\fsf,\text{out}}_{k,q,t}$ are modeled via their McCormick envelopes, for all $k\in K, q \in Q, t \in T$,
\begin{subequations} \begin{align} \Delta f_{k,q,t} &\in [0, \varepsilon] & & \KQT\\
(\Delta f_{k,q,t}/\varepsilon,v^{\text{mid}}_{k,q,t},\Delta \vsf^{\fsf,\text{mid}}_{k,q,t}) &\in \mathcal{M}(\Delta f_{k,q,t}, v^{\text{mid}}_{k,q,t}) & & \KQT\\
(\Delta f_{k,q,t}/\varepsilon,v^{\text{end}}_{k,q,t},\Delta \vsf^{\fsf,\text{end}}_{k,q,t}) &\in \mathcal{M}(\Delta f_{k,q,t}, v^{\text{end}}_{k,q,t}) & & \KQT\\
(\Delta f_{k,q,t}/\varepsilon,y^{\text{out}}_{k,q,t},\Delta \ysf^{\fsf,\text{out}}_{k,q,t}) &\in \mathcal{M}(\Delta f_{k,q,t}, y^{\text{out}}_{k,q,t}) & & \KQT \end{align} \end{subequations}
\subsubsection{Error propagation in time horizon}
From a perspective of numerical error propagation, the McCormick-based approach results in an exactly-represented tank-inflow blending process (constraint \eqref{eq_infspec}) for our problem: since only two `spec $\cdot$ volume' terms contain variable specs, the volumes of the specs are coupled correctly. On the other hand, the splitting processes in \eqref{eq_feedspec} are inexact, since the McCormick envelopes around the $\Delta f$ product terms effectively allow the specs leaving a tank to differ slightly from the specs propagated to the next time period. However, due to the tightness of the McCormick envelopes, once a post-blending value $\tilde{f}$ is chosen for a tank, the represented specs will remain in the interval $[\tilde{f}, \tilde{f}+\varepsilon]$ until the next blending operation in the tank, allowing a maximum possible spec error in the interval $[\varepsilon/2, \varepsilon]$ to accrue between blending operations.
On the other hand, the Center approximation presented here incurs an error of up to $\varepsilon/2$ immediately after blending into the tank, but then accrues no additional numerical error within that tank until the next blending operation. This allows the model to remain closer to feasibility for cases in which each tank does not receive supply from barges very frequently.
\subsection{Rolling Horizon Approach}
As mentioned in Section~\ref{sec:introduction}, we need a blending and scheduling optimizer that can efficiently optimize over a scheduling horizon of at least 3 months and up to 12 months. For the instances of interest, the target time for this optimization is preferably no more than five minutes (with a ten-minute maximum). However, even the linear discrete approximation above is extremely slow for a scheduling horizon of 120 days, requiring several hours of computation time in Gurobi 8.1 even for an extremely coarse spec discretization (with a total of roughly \textit{six} binary variables per tank per day for only three tanks).
As such, we employ a rolling planning approach.
We present three major alternatives for rolling planning. These alternatives are characterized by their treatment of the `past', `present', and `future', coupled with the choice of time periods (the `present') to use for each rolling planning step. The `past' is the portion of the planning horizon before the current rolling horizon window, the `present' is the current horizon window, and the `future' is the portion of the planning horizon after the current rolling horizon window. For the considered schemes, the `future' is split into two partitions: the `near future' and the `far future'. We consider two alternatives for the treatment of the `past' and `future', and two major schemes for the choice of time periods.
We will compare two period schedules for rolling planning: \emph{fixed-length} periods and \emph{run-based} periods. In both schemes, periods are chosen to be disjoint, since previously performed tests with overlapping periods were slower.
Our fixed-length periods are simply equal length periods of some length $\Delta t$, except the last period may be truncated to not exceed the horizon length $H$. See Figure~\ref{fig:planning1}. \begin{figure}
\caption{This figure shows specified demand runs, the barge time windows, and the periods that we use in the rolling horizon calculation for the fixed-length periods scheme (and considering a horizon $H= 30$ days).}
\label{fig:planning1}
\end{figure} Although this period schedule is consistent, it subdivides runs. This makes modeling a consistent volume feed throughout a run, \eqref{eq_contfeed}, complicated in the rolling planning process and can lead to poor choices early in the rolling planning. To alleviate this issue, we propose a `run-based' schedule.
The run-based periods either \begin{itemize} \item continue in a run to the end of the run, or \item encompass all runs and gaps (days between runs) that are completely contained within a time change of $\Delta t$ from the beginning of the period. \end{itemize} This choice makes it so that runs and gaps are never subdivided. This is depicted in Figure~\ref{fig:planning2}, which illustrates a sample partition resulting from this procedure with $\Delta t=7$. Formally, our choice of periods is described by Algorithm~\ref{algorithm-run-based}; a minimal implementation sketch follows the algorithm.
\renewcommand{\textbf{Input: }}{\textbf{Input: }}
\renewcommand{\textbf{Output: }}{\textbf{Output: }} \begin{algorithm} \textbf{Input: }{Desired approximate length $\Delta t$ for a single rolling planning sub-period}\\ \textbf{Output: }{Sub-periods $P_i$ for rolling planning.} \caption{Rolling Planning with Run Based Sub-periods.}\label{alg:fixed-period} \begin{algorithmic}
\State $t_0\leftarrow 0$, $i \leftarrow 0$ \Comment{Initialize beginning of current rolling planning step and index.}
\While{$t_i<H$}
\State $t^{\text{end}} \leftarrow $ first ending time period of a run that is $\geq t_i$
\State $t^{\text{change}} \leftarrow $ latest ending time period of a run or gap between runs that is $\leq t_i + \Delta t$
\State $t_{i+1} \leftarrow \max(t^{\text{end}},t^{\text{change}}) + 1$
\State $P_i \leftarrow [t_i, t_{i+1})$
\State $i \leftarrow i + 1$
\EndWhile \end{algorithmic} \label{algorithm-run-based} \end{algorithm}
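As referenced above, a minimal Python sketch of Algorithm~\ref{algorithm-run-based} under assumed inputs (\texttt{run\_ends} and \texttt{gap\_ends} are hypothetical sorted lists holding the last day of each run and of each gap between runs):
\begin{verbatim}
def run_based_periods(H, dt, run_ends, gap_ends):
    # Build disjoint sub-periods [t_i, t_{i+1}) that never split a run
    # or a gap between runs.
    boundaries = sorted(run_ends + gap_ends)
    periods, t = [], 0
    while t < H:
        t_end = min((e for e in run_ends if e >= t), default=t)
        t_change = max((e for e in boundaries if t <= e <= t + dt),
                       default=t)
        t_next = max(t_end, t_change) + 1
        periods.append((t, t_next))
        t = t_next
    return periods

# Toy data: runs over days 0-9 and 15-24, a gap ending on day 14,
# and a trailing gap ending on day 29.
print(run_based_periods(H=30, dt=7, run_ends=[9, 24],
                        gap_ends=[14, 29]))  # [(0, 10), (10, 25), (25, 30)]
\end{verbatim}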
\begin{figure}
\caption{This figure shows specified demand runs, the barge time windows, and the periods that we use in the rolling horizon calculation for the `run-based periods' scheme (and considering $H = 30$ days).}
\label{fig:planning2}
\end{figure}
Next, we will discuss the major schemes for the treatment of the past, present, near future, and far future segments of the scheduling horizon. These schemes include a full-horizon scheme and a partial-horizon scheme.
\paragraph{Full-Horizon Scheme} \newcommand{\text{NF}}{\text{NF}}
Given the current rolling horizon period $P_i = [t_i, t_{i+1})$ and a near future horizon $H^{\text{NF}}$, let $t_i^{\text{NF}} = \min(H,t_i + H^{\text{NF}} - 1)$. In our examples, we use $H^{\text{NF}} = 90$.
\begin{enumerate}
\item In the past, we freeze the binary decisions made, but allow all continuous variables to continue to vary. This allows the model some additional ability to compensate for poor early decisions, which (as we will see) helps to improve the optimality gap substantially.
\item In the present, the full model is used.
\item In the near future, the binary variables related to demand decisions and discretized tank compositions are relaxed to the interval $[0,1]$, while the binary variables related to supply decisions are enforced to be binary. All other constraints are included.
\item In the far future, all binary variables are relaxed to the interval $[0,1]$. All other constraints are included. \end{enumerate}
The decision not to relax supply decision binaries in the `near future' is due to the inclusion of the `limited unloading window' constraints for barges.
These constraints rendered the inclusion of small objective function terms based on supply unloading decisions computationally prohibitive, due to poor branching performance of the solver. Before these were included, small objective function terms had been added to slightly favor earlier unloading times (all else equal), encouraging barges to be unloaded more quickly (and allowing for a model that more consistently avoided small unnecessary supply misses). After the inclusion of these constraints, the solver performance with these small objective terms became extremely poor, requiring at least an order of magnitude more computational time to solve to the same optimality gap in preliminary tests via Gurobi 8.1. When these terms were removed, the model began to exhibit very large extraneous supply misses, due in part to the poor linear relaxation of the `limited unloading window' constraints, and in part to relaxations of the constraints limiting the number of unloads per barge. These poor relaxations caused the rolling planning model to favor unloading during the relaxed-binary periods, a decision which the model could not recover from. However, by including the supply-based binaries in the `near future' (defined to end 90 days after the start of the `present' period), preliminary testing showed that these unnecessary misses became small enough that their cost was considered to be acceptable, while the inclusion of these extra binary variables had only a moderate impact on computational cost (close to a $50\%$ increase).
For all rolling horizon methods, we use 0.5\% as the MIP gap. For the Full-Horizon scheme, through experimental results we learned that increasing this gap, on average, does not significantly reduce computation time, but may have a large effect on resulting objective values. In particular, we want to optimize as much as we can in each step since, due to the way we relax variables, the objective value in each subsequent step can only decrease; because we are targeting the best objective value possible, we attempt to make optimal decisions at every step.
\paragraph{Partial-Horizon Scheme} \begin{enumerate}
\item In the past, we fix all decisions made, and use them to re-compute accurate values for the spec compositions to use for the next rolling planning period. Possible bounds $[l,u]$ for the specs in each tank are re-computed, and the number of remaining unloads (and remaining volumes) for barges that have been unloaded in the past are reduced accordingly.
\item In the present, the full model is enforced.
\item In the near future, the binary variables related to demand decisions and discretized tank compositions are relaxed to the interval $[0,1]$, while the binary variables related to supply decisions are enforced to be binary. All other constraints are included.
\item In the far future, all variables and constraints are omitted. \end{enumerate}
\begin{table}[H] \begin{center}
\begin{tabular}{|c|c|c|c|c|} \multicolumn{5}{c}{Full Horizon Scheme}\\ \hline Vars & Past & Present & Near & Far\\ &$\left[0, t_i \right)$ & $P_i$ & $\left[t_{i+1}, t_i^{\text{NF}}\right]$ & $\left(t_i^{\text{NF}},H\right]$\\ \hline $y, v$ & active & active & active& active\\ $ \mathsf{y}, \mathsf{v}$ & active & active & active& active\\ $d$ & active & active & active& active\\ $t_s$ & active & active & active& active\\ $\gamma$ & fixed & active & active& relaxed\\ $\sigma$ & fixed & active & relaxed& relaxed\\ $\alpha$ & fixed & active & relaxed& relaxed\\ \hline \end{tabular} \ \
\begin{tabular}{|c|c|c|c|c|} \multicolumn{5}{c}{Partial Horizon Scheme}\\ \hline Vars & Past & Present & Near & Far\\ &$\left[0, t_i \right)$ & $P_i$ & $\left[t_{i+1}, t_i^{\text{NF}}\right]$ & $\left(t_i^{\text{NF}},H\right]$\\ \hline $y, v$ & fixed & active & active & omitted\\ $ \mathsf{y}, \mathsf{v}$ & fixed & active & active & omitted\\ $d$ & fixed & active & active& omitted\\ $t_s$ & fixed & active & active& omitted\\ $\gamma$ & fixed & active & active& omitted\\ $\sigma$ & fixed & active & relaxed& omitted\\ $\alpha$ & fixed & active & relaxed& omitted\\ \hline \end{tabular} \end{center} \caption{Description of how variables are either fixed, fully active, relaxed to be continuous, or omitted in the two versions of the rolling planning horizon schemes.} \end{table}
In the partial-horizon scheme, we note that it is possible that only a small part of the arrival window for a single barge, even just a single period, overlaps with the `present' and `near future'. To compensate for this, we pro-rate the inventory to be unloaded from such barges: if only $10\%$ of the volume on a barge overlaps with the currently-considered period, then we enforce that only $10\%$ of the volume from the barge is to be unloaded in the current rolling planning step. In this way, as the model rolls forward, more of the barge's unloading window enters the visible horizon window, allowing for improved decisions regarding the volume on the barge.
\subsection{Post-processing and Error Mitigation} After solving the approximation model, we check feasibility of the scheduled flows in terms of required spec quality bounds. To do so, using the flow variables $y^{\text{in}}, y^{\text{out}}$, we plug their computed values into the original model from Section~\ref{sec:Mathematical-Model} to uniquely determine the spec values $f_{k,q,t}$.
After the optimization of the discretized model with rolling planning, we simulate the scheduling solution to compute the specs $f_{k,q,t}$ and reveal the true specs throughout the scheduling horizon, and we report the true tank and feed volumes and compositions throughout the horizon in addition to the basic scheduling plan.
During preliminary computational testing, we found that, without tightening the spec requirements, the MILP discretization techniques for the model often resulted in solutions for which the actual specs and spec ratios for the demand feeds were not within the specified ranges. However, this was largely remedied after tightening the feasible region as described in Section~\ref{sssec_tighten}.
\section{Numerical Results} \begin{table}
\centering
\begin{tabular}{c|c|c|c}
& Tank 1 & Tank 2 & Tank 3\\
Total Volume & 1179 metric tons & 2948 metric tons & 1225 metric tons\\
Minimum Volume & 158 metric tons & 272 metric tons & 136 metric tons
\end{tabular}
\caption{The capacities of the three tanks we consider. Minimums are about 10\% of total volumes. Note that barges hold between 1200 and 1400 metric tons. Hence a barge is comparable to our small tanks, and our large tank can hold slightly more than two barge loads.}
\label{tab:tanks-and-barges} \end{table}
\begin{table}[H]
\centering
\begin{tabular}{c|cccc}
Barge Type & Type 1 & Type 2 & Type 3 & Type 4 \\
\hline
Volume & 1240 & 1360 & 1182 & 1360 \\
$\underbar{c}^{\mathrm{inb}}$ & 1000 & 800 & 800 & 1000
\end{tabular}
\caption{Example data for barge types, their capacities, and related objective values.}
\label{table_barge_types} \end{table}
To evaluate the performance of the proposed methods, we constructed a realistic problem with a 368-day scheduling horizon from real operational data with a 119-day scheduling horizon, extended to the desired horizon in a periodic fashion. The resulting problem had a total of 33 supply arrivals over the scheduling horizon, three available storage tanks, two specs ($S_1$ and $S_2$) to track, and one spec ratio to control. Note that the range of possible spec qualities was relatively narrow for some tanks, resulting in relatively few binary variables used to approximate the specs for each tank. See Table~\ref{tab:tanks-and-barges} for example data on the storage tanks. Other relevant parameters include $\underbar{ul}^{\max} = 2$, $\underbar{ult}^{\max} = 7$, $\underbar{tkp}^{\min}_k = 10\%$ for each tank, and $\underbar{ulp}^{\min} = 10\%$ for each supply barge. For the objective coefficients, we globally use $\underbar{c}^{\mathrm{dmd}}_t = 3000$. $\underbar{c}^{\mathrm{inb}}_t$ depends on the barge type, and is either \$800 or \$1000 for each type, as specified in Table~\ref{table_barge_types}. Figure~\ref{fig:barge_windows_150} showcases the unloading and arrival windows for a sample instance with $H=150$, while Figures~\ref{fig:sln_profiles} and~\ref{fig:load_unload} showcase sample solution schedules with $H=40$. Note that the high variability in bounds for the $S_1/S_2$ ratios is due to the substantially different requirements for product produced in different runs.
Most computational experiments are performed on a Dell desktop with an Intel Xeon 3.2GHz CPU with 8 cores and 16 threads, and 32 GB of RAM, using Gurobi 8.1 in Python 3 on a Windows 10 operating system. The computational experiments in Section~\ref{ssec_grbcomp} are performed on a Dell desktop with an Intel Core i7-7820X 3.6GHz CPU with 8 cores and 16 threads, and 32 GB of RAM, using Gurobi 9.0.1 in Python 3 on a Windows 10 operating system.
\afterpage{
\begin{landscape} \begin{figure}
\caption{Example of time windows for barge arrivals and dates of runs on a 150 day time horizon.}
\label{fig:barge_windows_150}
\end{figure}
\begin{figure}
\caption{Example of solution for schedules of loading to tanks and unloading (feed) from tanks to production for a 40 day time horizon.}
\label{fig:load_unload}
\end{figure}
\end{landscape}
}
\begin{figure}
\caption{Volume profiles with some bounds describing allowable ranges of spec and spec ratios.}
\label{fig:sln_profiles}
\end{figure}
\subsection{Comparison with McCormick} To perform our computational tests, we obtained three years of demand data for the same system, and generated randomized supply data for those three years. We generated ten different such randomizations, and performed each computational test with five different starting dates (separated by half a year), obtaining a total of fifty test problems for each scheduling horizon.
In this section, we compare the effectiveness of two different methods for handling the product terms $v\cdot \Delta f$: McCormick and the proposed Center method, each using a base-2 binarization of the specs. The tests were repeated using several different values of $H$, and with several values for $\hat{\varepsilon}$ (chosen to be the same for all specs). For $\hat{\varepsilon} = 1$, we used $H \in \{10, 20, 30, 45, 70, 90\}$, and for $\hat{\varepsilon} = 0.25$, we used $H \in \{10, 20, 30, 45, 60\}$ (since McCormick became computationally prohibitive over longer scheduling horizons). A total time limit of $600$ seconds was enforced for all computations in this section. The average-case results are shown in Figure~\ref{fig_benchmarking-NMDT}. In this section, we investigate performance in terms of computation time and percentage of value lost (\%loss) in the objective, with respect to solution values $v^{\text{unused}}_s$ and $d^{\text{unmet}}_t$, computed via
\begin{subequations}
\begin{align}
val^{\text{target}} &= \ds\sum_{s \in S} \underbar{c}^{\mathrm{inb}}_s \cdot \underbar{v}_s + \sum_{t \in T} \underbar{c}^{\mathrm{dmd}}_t \cdot \underbar d_t \label{eq_pctloss_target}\\
val^{\text{missed}} &= \ds\sum_{s \in S} \underbar{c}^{\mathrm{inb}}_s \cdot v^{\text{unused}}_s + \sum_{t \in T} \underbar{c}^{\mathrm{dmd}}_t \cdot d^{\text{unmet}}_t \label{eq_pctloss_mis}\\
\%\text{loss} &= \dfrac{val^{\text{missed}}}{val^{\text{target}}} \cdot 100\% \label{eq_pctloss}
\end{align} \end{subequations}
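A direct transcription of \eqref{eq_pctloss_target}--\eqref{eq_pctloss} in Python (a sketch; the inputs are plain per-barge and per-period lists of coefficients and volumes):
\begin{verbatim}
def pct_loss(c_inb, v_sup, v_unused, c_dmd, d_dem, d_unmet):
    # %loss = 100 * (value of unused supply and unmet demand) / target.
    target = sum(c * v for c, v in zip(c_inb, v_sup)) \
           + sum(c * d for c, d in zip(c_dmd, d_dem))
    missed = sum(c * v for c, v in zip(c_inb, v_unused)) \
           + sum(c * d for c, d in zip(c_dmd, d_unmet))
    return 100.0 * missed / target
\end{verbatim}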
In all cases, we see that the proposed Center method performs much faster than McCormick, with a difference of more than an order of magnitude for longer scheduling horizons. However, this performance improvement came at a cost: as shown in Figure~\ref{fig_benchmarking-NMDT} (b), the average optimality gap for $\hat{\varepsilon} = 1$ yielded by the Center method is consistently quite a bit worse than that yielded by McCormick. As showcased in Figure~\ref{fig_benchmarking-NMDT-profile}(c), the majority of this difference seems to be caused by significantly worse solutions in roughly a quarter of the test cases. However, this disadvantage disappears at higher fidelity; with $\hat{\varepsilon}=0.25$, the optimality results are very close (well within Gurobi's default optimality gap of $10^{-4}$), with the Center method taking a slight edge over McCormick. At the same time, the Center method proved to yield similar feasibility results as McCormick, with each becoming infeasible for no more than one out of the 50 trials in all test cases.
Figure~\ref{fig_benchmarking-NMDT-profile} gives another perspective on the performance advantage of the proposed Center method, based on the number of instances solved by a given time for the longest scheduling horizons tested. For $\hat{\varepsilon}=1$ and $H=90$ the advantage is overwhelming: almost all of the 50 test instances have converged for the Center method before any have converged for McCormick. The results for $\hat{\varepsilon}=0.25$, $H=60$ also show a strong advantage for the proposed method, though this advantage is somewhat less pronounced due to the shorter scheduling horizon.
These results suggest that, for longer scheduling horizons, the new Center idea fares far better computationally than does the McCormick-style discretization in NMDT for the same level of precision.
\begin{figure}
\caption{Comparison of computational performance of the Center approximation vs. the McCormick relaxation. Averaged over 50 similar instances with different randomized supply parameters. (a) and (c): Computational time using step sizes of $\hat{\varepsilon} = 1$ and $0.25$. (b) and (d): \% Loss computed as fraction of upper bound on objective value not obtained (see \eqref{eq_pctloss}), where model accuracy is set to $\hat{\varepsilon} = 1$ and $0.25$.}
\label{fig_benchmarking-NMDT}
\end{figure}
\begin{figure}
\caption{Time-completion (a,b) and \% value missed (c) profiles for the comparison of computational performance for McCormick vs. the proposed approximate method, using the largest values of $H$ tested for (a,c) $\hat{\varepsilon}=1$, and (b) $\hat{\varepsilon}=0.25$.}
\label{fig_benchmarking-NMDT-profile}
\end{figure}
Without rolling planning, the difference is far less pronounced: using 30-day and 40-day scheduling horizons with a fidelity of $\hat{\varepsilon} = 1$, we find little difference between the performance of the two methods, as showcased in Figure~\ref{fig_benchmarking-NMDT-noRP-profile}. As a significant performance difference between the methods had already manifested by such small scheduling horizons in the cases with rolling planning, we find that, for these problem instances, the primary advantage of the Center method lies in its interaction with rolling planning.
\begin{figure}
\caption{Time-completion profiles for the comparison of computational performance for McCormick vs. the proposed approximate method, using $\hat{\varepsilon} = 1$, tested for (a) $H=30$ and (b) $H=40$.}
\label{fig_benchmarking-NMDT-noRP-profile}
\end{figure}
\subsection{Comparison to Nonconvex MIQCP in Gurobi 9.0 }\label{ssec_grbcomp} In this section, we compare the computational performance of the methods presented, using $\varepsilon=1$ and without rolling planning, to solving the models MINLP-Mix and MINLP-Split defined in Sections~\ref{ssec_prod} and~\ref{ssec_split} with Gurobi 9.0, using scheduling horizons of $H=20$, $H=30$, and $H=40$. The MINLP-Mix model timed out (5-minute limit) for $H=30$ on all instances but one, so we omit it from our comparisons beyond the first plot with $H = 20$.
The results are shown in Figure~\ref{fig_benchmarking-GRB-noRP-profile}. Note that the MINLP-Mix results are omitted from (b), as many instances did not finish even for $H=20$.
The performance of MINLP-Mix is far inferior to that of MINLP-Split: for $H=20$, $\sim$44 instances finished for MINLP-Split within 50s, while only 10 had finished by that point for MINLP-Mix, with only 19 finishing within 5 minutes. For $H=30$, no more than a single instance finished within 5 minutes for MINLP-Mix.
It is interesting to note that, in quite a few instances, Gurobi is able to solve the MINLP-Split model very quickly ($\sim$26 for $H=20$, $\sim$21 for $H=30$, and $\sim$15 for $H=40$). However, the results show that more instances get stuck with long solution times than with the NMDT-based options, with the slowest $40$--$60\%$ of instances being slower than the NMDT-based methods. On the other hand, the solutions it returns are precise in terms of spec feasibility, while the level of solution precision $\varepsilon=1$ for the discrete options is quite coarse. It seems likely that, at $\varepsilon=0.5$ or $\varepsilon=0.25$, the Gurobi-based solver may overtake the NMDT-based variants in both solution time and quality. As a result, one might imagine that a rolling planning scheme based on this MINLP-Split model, e.g. relaxing the bilinear constraint \eqref{eq_split} for the `future' portion of the horizon, could be quite fruitful. We leave this exploration as future work.
The \%loss results show that, even at such coarse fidelity, the approximation scheme without rolling planning consistently yields near-optimal results. Indeed, the relative difference between the solutions is consistently within the MIP gap, 0.5\%, used for the optimization. \begin{figure}
\caption{Time-completion (a,c,e) and percent-loss (b,d,f) profiles for the comparison of computational performance for Gurobi 9.0.1 vs. the proposed NMDT-based methods, using $\varepsilon = 1$ tested for (a,b) $H=20$, (c,d) $H=30$, and (e,f) $H=40$.}
\label{fig_benchmarking-GRB-noRP-profile}
\end{figure}
\subsection{Performance Enhancement} We explore how to enhance the performance of the method via two simple changes to the Center model that preserve the mixed-integer feasible solutions. The first of these is to add the coupling constraints described in Section~\ref{sssec_MILP_var_elim}:
\begin{subequations}
\begin{align}
& \vsf^{\alpha,\text{mid}}_{i,j} = \vsf^{\alpha,\text{end}}_{i,j} + \ysf^{\alpha,\text{out}}_{i,j} & i \in I_{k,q,t}, j \in \{0,1\} \label{eq_AVOL_couple}
\end{align} \end{subequations}
where $I_{k,q,t}$ is defined as in Section~\ref{sssec_approx_method}. We then remove constraints \eqref{eq_AVOLSPLdef_lb} and \eqref{eq_AVOLINVdef_ub}, since they are now made redundant in the associated LP by \eqref{eq_AVOL_couple} combined with the nonnegativity of $\vsf^{\alpha,\text{mid}}_{p,i,1}$, $\vsf^{\alpha,\text{end}}_{p,i,1}$, and $\ysf^{\alpha,\text{out}}_{p,i,1}$.\\
The second of these is to relax the MILP approximation of the $\vsf^{\alpha,\text{mid}}_{p,i,j}$ terms. This corresponds to removing constraints \eqref{eq_AVOLSPLdef_lb}, \eqref{eq_AVOLINVdef_ub}, and the constraints
\begin{subequations} \begin{align} & \vsf^{\alpha,\text{end}}_{i,j} \ge \underbar{v}^{\min}_k \cdot \alpha_{i,j} & i \in I_{k,q,t}, j \in \{0,1\}, \label{eq_AVOLINVdef_lb}\\ & \ysf^{\alpha,\text{out}}_{i,j} \le \underbar d_t \cdot \alpha_{i,j} & i \in I_{k,q,t}, j \in \{0,1\}, \label{eq_AVOLFEEDdef_ub} \end{align} \end{subequations}
which are part of the representation of \eqref{eq_tildeD_vmid} and \eqref{eq_tildeD_yout}. The viability of the removal of \eqref{eq_AVOLFEEDdef_ub} is shown in Section~\ref{sssec_MILP_var_elim}: since any one of the three associated variables $\vsf^{\alpha,\text{mid}}_{p,i,1}$, $\vsf^{\alpha,\text{end}}_{p,i,1}$, and $\ysf^{\alpha,\text{out}}_{p,i,1}$ can be eliminated completely without compromising the MILP, a valid model is still obtained when relaxing any number of constraints related to the definition of $\ysf^{\alpha,\text{out}}_{p,i,1}$. Constraint \eqref{eq_AVOLINVdef_lb} is always safe to remove, as it is implied by the fact that only one $\vsf^{\alpha,\text{end}}_{i,j}$ variable can be nonzero for each $i$, combined with the fact that the $\vsf^{\alpha,\text{end}}_{i,j}$ sum to $v^{\text{end}}_{k,t}$, which is bounded below by $\underbar{v}^{\min}_k$. Note that either \eqref{eq_AVOLINVdef_ub} or \eqref{eq_AVOLFEEDdef_ub} must be included if the coupling constraints \eqref{eq_AVOL_couple} are not included. In this case, we choose to remove \eqref{eq_AVOLFEEDdef_ub}.
The results, displayed in Figure~\ref{fig_benchmarking-relax-profile}, show that while both modeling changes make a large difference in computational cost, the simple relaxation of the MILP yields a larger improvement in computational time. This improvement is increased further when the coupling constraints are added, yielding a method in which the majority of instances finish within the desired 10 minutes when using a scheduling horizon of a full year with $\hat{\varepsilon} = 1$.
\begin{figure}
\caption{Time-completion profiles using different combinations of relaxing (constraints) and adding (coupling constraint), with $H=365$ and a 7-1-1 fixed-step-size rolling planning scheme.}
\label{fig_benchmarking-relax-profile}
\end{figure}
\subsection{Comparison of Rolling Planning Ideas} In this section, we compare the computational performance of different rolling planning regimes. We restrict ourselves here to the full-horizon scheme, and compare the performance obtained using two different ideas. For the first, we use fixed-length periods with $H^{\text{int}} = 7$. For the second, we use run-based periods with $\tilde{H}^{\text{int}} = 4$ to generate the periods, and $n^H=\Delta n = 2$ as our stepping parameters. We compare the methods using a time limit of 1800\,s, using $\hat{\varepsilon} = 1$ and scheduling horizons of $H \in \{60, 75, 90, 129, 180, 365\}$ (in days).
The results, as shown in Figure~\ref{fig_benchmarking-rolling}, indicate that the run-dependent-step regime yields a more consistent, and often lower, optimality gap on average, while incurring a higher computational cost. This outcome is emphasized in Figure~\ref{fig_benchmarking-RP_profiles}: when $H=365$, the 7-1-1 fixed-step scheme results in nearly all test cases finishing within about a 5-10 minute time window, while incurring high optimality gaps in a handful of instances. On the other hand, for the 4-2-2 run-dependent-step scheme, most instances finish within about a 10-20 minute time window, but the optimality gap for the worst instances is far more controlled.
\begin{figure}
\caption{Comparison of fixed-step-length and run-dependent-step rolling planning methods. Averaged over 50 similar instances with different randomized supply parameters, using spec step size $\hat{\varepsilon}=1$. (a): Computational cost, (b) Optimality gap.}
\label{fig_benchmarking-rolling}
\end{figure}
\begin{figure}
\caption{Time-completion profiles of fixed-step-length and run-dependent-step rolling planning methods, using $H = 365$ days and $\hat{\varepsilon}=1$, with both the relaxation and the coupling constraints enabled. (a) Computational cost, (b) Optimality gap.}
\label{fig_benchmarking-RP_profiles}
\end{figure}
\section{Conclusion} In this work, we have developed an approximation method for the tank mixing and scheduling problem that combines rolling horizon planning with two different discretization schemes, yielding reasonably high-quality solutions very quickly over long scheduling horizons, while with high probability ensuring that demand specs remain within the required ranges. Due to this performance, the model could be especially useful for obtaining rough, fast plan feasibility results during the planning of supply acquisitions and product production runs in an industry setting (counting solutions with no (or only small) supply and demand inventory misses as `feasible').\\
Comparing the in-house ``Center'' discretization scheme to the original ``McCormick''-based NMDT scheme \cite{Castro2015c}, we find that the in-house method yields much faster performance when combined with a rolling horizon scheme, while sometimes yielding worse solutions. However, without a rolling horizon scheme, the methodologies yield comparable performance, suggesting that the ``Center'' scheme may yield relaxations more compatible with a rolling horizon scheme. Further, when comparing rolling horizon schemes, we find that a scheme using run-based rolling horizon periods yields better solutions than a fixed-duration scheme, at some cost to performance. To extend this work, we plan to explicitly incorporate acquisition planning of barges, and possibly planning of production runs.
\end{document} | arXiv |
What is the typical temperature of an airliner's hull during flight?
Some high-speed military aircraft like the SR-71 had real heating problems, but airliners also travel almost at the speed of sound and use most of their fuel to make up for frictional losses, so I would presume that their hulls heat up. They are also cooled by the airflow, but at what temperature does equilibrium set in during cruise? I remember that airliners don't seem to be very hot when you touch them after landing, but they have had time to cool down in the slow winds during descent.
aerodynamics airliner temperature
yippy_yay
use most of their fuel to make up for frictional losses - what gave you this idea? Since efficiency is < 50%, most of the fuel is simply burnt for no return. The majority of what is left is burnt to overcome drag which is a consequence of creating lift. I don't have a number, but the overall amount of fuel used to overcome friction is going to be a small fraction. – Simon Feb 27 '16 at 9:38
There are two primary factors that affect the skin temperature of an aircraft in flight: the air temperature, and the speed of the aircraft.
The air temperature where airliners cruise is relatively cold, around -54 °C at 35,000 feet.
As a body like an aircraft moves through air, it compresses the air, which causes the air temperature to rise. The maximum temperature rise will be if the air is completely stopped, such as at a leading edge. This is called the total air temperature, and the amount that the temperature rises is called the ram rise.
Using a simple formula to find the ram rise:
$$RR = \frac{V^2}{87^2}$$
… where $RR$ is in Kelvin, and $V$ is the true airspeed in knots.
Using a typical airliner cruising speed of 500 knots gives a ram rise of about 33 °C. This brings the total air temperature to about -21 °C, which is still quite cold. At places other than the leading edge, the temperature rise will be less. This is why cargo holds will need heaters to be safe for live animals, even being insulated and pressurized. Airliners just don't fly fast enough to produce a significant amount of heating.
On the other hand, the SR-71 could fly at over 1,910 kts, which gives a ram rise of 482 °C. The air doesn't get much colder as you climb to the altitudes where the SR-71 flew, so this gives a total air temperature of over 400 °C. Speed makes a huge difference.
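If you want to play with the numbers yourself, here is the same empirical approximation as a quick Python snippet:

    def total_air_temperature(oat_c, tas_knots):
        # Outside air temperature plus the ram rise RR = V^2 / 87^2.
        return oat_c + (tas_knots / 87.0) ** 2

    print(total_air_temperature(-54, 500))   # airliner cruise: about -21 C
    print(total_air_temperature(-54, 1910))  # SR-71: about 428 C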
fooot♦
Where does the 87 come from in that equation? – Holloway Dec 18 '15 at 11:35
@Holloway If you follow the linked reference you'll see it is an empirical approximation (for the specific system we're discussing) lumping in the heat capacity and recovery factors in the analytical equation. The = should probably be an ≈ in this instance. – J... Dec 18 '15 at 11:55
@J... Thanks, I imagined it was a mix of constants but wasn't sure which. – Holloway Dec 18 '15 at 11:56
alternately, you can use the isentropic flow relations and get $T \approx T_\infty(1+\frac{v^2}{531.6\cdot T_\infty})$, where $T_\infty$ is the air temperature in Kelvins and $v$ is the speed in knots – costrom Dec 18 '15 at 16:15
This answer calculates temperature rise due to compression, however it does not address the friction between the air and the surface of the aircraft. I'm assuming the frictional heating is negligible, but someone might want to elaborate on it. – Ian Dec 18 '15 at 17:24
Local air temperature
In fast aircraft, the maximum heating is at the stagnation point. Here the kinetic energy of the flow is completely converted into pressure, which heats the air and, consequently, the structure. Due to the low local speed and the high pressure at and near the stagnation point, the rate of heat transfer is high, too, adding to the heat load.
The formula for the stagnation point temperature $T_s$ of an ideal gas at temperature $T_{\infty}$ hitting an object at Mach number Ma is $$T_s = T_{\infty} + T_{\infty}\cdot\frac{(\kappa-1)\cdot Ma^2}{2}$$ For air the ratio of specific heats $\kappa$ is 1.4. The tip of the fuselage nose of an airliner flying at Mach 0.85 will see the air temperature rise by 14.45%. If the air at altitude has a temperature of 220 K (-53.15 °C), the air temperature at the stagnation point will be 251.8 K (-21.36 °C).
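As a quick numerical check of these values (Python sketch):

    def stagnation_temperature(t_static_k, mach, kappa=1.4):
        # Ideal-gas stagnation temperature: T_s = T_inf * (1 + (kappa - 1)/2 * Ma^2)
        return t_static_k * (1.0 + 0.5 * (kappa - 1.0) * mach ** 2)

    print(stagnation_temperature(220.0, 0.85))  # about 251.8 K, i.e. -21.4 C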
But past the stagnation point the air will accelerate and become faster than flight speed. Now pressure and, consequently, temperature need to drop sufficiently to encourage the flow to stay attached and follow the curvature of the forward fuselage. This acceleration will cool the air, so the flow right above the windshield will be cooler than the ambient air.
Along the cylindrical portion of the fuselage, we find roughly flight speed again, but now friction will change the temperature close to the wall. Again the kinetic energy is converted, but the heating is caused by friction. See the boundary layer plots below:
Frictional and thermal boundary layer (picture source)
The temperature close to the wall is now called recovery temperature and is different from the stagnation point temperature because there is a small speed component normal to the surface which carries away some of the heat. The air temperature depends on the ratio between viscous diffusion and thermal diffusion, which is expressed by the Prandtl number Pr. If Pr>1, the air temperature at the wall is higher than the stagnation temperature and for Pr<1, it is colder. The Prandtl number of air is 0.72, so the air surrounding the fuselage is slightly colder than the stagnation temperature.
Fuselage temperature
The fuselage temperature is determined by the equilibrium between thermal conductivity, radiation and convection.
Conductivity: Here it is important how much the internal temperature of the fuselage can heat the skin. Cabin temperature is likely around 20 °C, so some heating can be expected. However, since most airliners have insulation mats between the outer skin and the internal wall panels, conductivity from the inside is not dominant and will likely raise the skin temperature by a few degrees or less. The low heat conductivity of air (0.0204 W/(m·K)) means that heating from the inside dominates the conduction budget.
Radiation: Since the top of the fuselage is pointing into space, its far-field radiation budget is negative at night and where it points away from the sun, so radiation will cool it. The lower fuselage, however, is facing either the ground or clouds below, which both are likely hotter than the ambient air. Radiation will not cool it much and is more likely to heat it up. The part of the fuselage in direct sunlight will be significantly hotter again, depending on its color.
Convection: This is the dominant factor due to the high speed of the air around the fuselage. Here the air and the fuselage exchange heat directly at the surface, and since the layer of air is replenished quickly and continuously, the air temperature is impressed on the fuselage.
I did not go to the effort of calculating the end result, but tried to list the main contributors and their magnitude. In general, the fuselage temperature is slightly below the stagnation temperature, and a dark fuselage in bright sunlight or one with little insulation and a hot interior will be several degrees hotter than the stagnation temperature.
Peter Kämpf
You distinguish between the kinetic energy of the flow converted into pressure at the stagnation point and 'friction' caused by the airflow against the hull - isn't the first friction too? In both cases, the body of the aircraft transforms the uniform speed of air molecules into heat. – yippy_yay Dec 19 '15 at 12:11
@yippy_yay: No, in the first case it is the pressure rise which heats the flow reversibly, and friction heating is irreversible and isobaric. – Peter Kämpf Dec 19 '15 at 12:22
Okay, but once the heat inside the stagnation point flows into the hull, the process is irreversible. You would still distinguish between friction and this process? – yippy_yay Dec 19 '15 at 12:29
@yippy_yay: Yes, because there is no friction involved. Compression heat occurs in an ideal gas, too. – Peter Kämpf Dec 19 '15 at 16:34
Comparison of varieties of numerical methods applied to lid-driven cavity flow: coupling algorithms, staggered grid vs. collocated grid, and FUDS vs. SUDS
A. A. Boroujerdi ORCID: orcid.org/0000-0003-4098-01301 &
M. Hamzeh1
The effectiveness of different methods, schemes, algorithms, and approaches is a substantial challenge in the numerical modeling of transport phenomena. In the present paper, a lid-driven cavity problem is modeled via two fundamentally different approaches to spatial discretization: collocated and staggered. The non-dimensionalized governing equations are semi-discretized using a finite volume approach. Then, the full discretization is performed in each of the collocated and staggered grids separately. Upwind and central difference schemes are implemented in order to discretize the convective and diffusion terms of the equations, respectively. After a mesh-independence study, the performance of collocated and staggered grids in comparison with the reference benchmark is presented. Next, the effectiveness of the first- and second-order upwind schemes is presented, as well as that of the different coupling algorithms SIMPLE, SIMPLEC, and SIMPLER. Finally, an overall comparison of the methods is provided and acceptable agreement with the benchmark is attained.
As a classic problem, lid-driven cavity flow is widely used to validate, compare, and investigate numerical methods and schemes.
Staggered grid has been extensively used for the numerical modeling of lid-driven cavity flows. A two-dimensional computational model was developed to study the fluid dynamic behavior in a square cavity driven by an oscillating lid using staggered grid-based finite volume method (Indukuri and Maniyeri, 2018). The numerical simulations were performed for the case of top wall oscillations for various combinations of Reynolds number and frequencies. From these simulations, an optimum frequency was chosen and then the vortex behavior for the cases of parallel wall oscillations was explored. McDonough (2007) investigated lid-driven cavity problem by use of a new form of large-eddy simulation at moderate Reynolds number to demonstrate the ability of the procedure to automatically predict transition to turbulence. They reported parallel speedups observed on a general-purpose symmetric multiprocessor employing MPI for parallelization. Gutt and Groşan (2015) analyzed the motion of an incompressible viscous fluid through a porous medium located in a two-dimensional square lid-driven cavity flow described by a generalized Darcy–Brinkman model. The effect of inertia and rheology parameters on the flow of viscoplastic fluids inside a lid-driven cavity is investigated using a stabilized finite element approximation (dos Santos et al. 2011). Patil, Lakshmisha, and Rogg (2006) presented the results for deep cavities with aspect ratios of 1.5–4, and Reynolds numbers of 50–3200. Several features of the flow, such as the location and strength of the primary vortex, and the corner-eddy dynamics were investigated and compared with previous findings from experiments and theory.
Direct numerical simulations of the transition from laminar to chaotic flow in a two-dimensional incompressible square lid-driven cavity with increasing Reynolds numbers were performed by Peng, Shiau, and Hwang (2003). The spatial discretization consisted of a seventh-order upwind-biased method for the convection term and a sixth-order central method for the diffusive term.
Collocated grid arrangement is also implemented in order to simulate lid-driven cavity flows. Ding (2017) employed the SIMPLE algorithm and its variants to solve the driven-cavity problem at Re < 10,000 by proposing a new segregated solver for determining the solution of incompressible flow on structured collocated grid systems. Case studies of steady incompressible flow in a 2D lid-driven square cavity were investigated for 100 < Re < 5000 by AbdelMigid et al. (Tamer, Khalid, Mohamed, and Ahmed, 2017). A collocated grid arrangement along with a uniform structured Cartesian grid was used. Yapici, Karasozen, and Uludag (2009) presented a finite volume technique for the numerical solution of steady laminar flow of an Oldroyd-B fluid in a lid-driven square cavity over a wide range of Reynolds and Weissenberg numbers. A second-order central difference (CD) scheme was used for the convection part of the momentum equation, while a first-order upwind approximation was employed to handle the viscoelastic stresses. In another work, a numerical collocated finite volume method was presented to study Buongiorno's nanofluid model for MHD mixed convection in a lid-driven cavity filled with nanofluid (Elshehabey and Ahmed, 2015).
Aim and design for the study
The principal aim of the present research is to compare different computational methods for the simulation of a cavity fluid flow driven by a moving lid and to identify appropriate schemes. The model is based on finite volume semi-discretization of the governing equations in two fundamentally different formulations: staggered and collocated grid systems. The full discretization is made by both the first-order and the second-order upwind schemes. Three different coupling algorithms, SIMPLE, SIMPLEC, and SIMPLER, are developed. Afterward, the results of all the methods are compared.
Description of the methodology
Semi-discretization of governing equations
The physical and computational domains of the problem are a square cavity with dimension L whose upper lid moves rightward with velocity u0. The origin of the Cartesian coordinate system is located at the lower left corner of the cavity.
In order to simplify the model, we consider the following assumptions:
Working fluid is incompressible.
The shear stress tensor is proportional to the deformation rate (Newtonian fluid)
There are no external body forces.
The flow is laminar.
The governing equations are continuity and momentum equations, which for a steady laminar flow of incompressible fluid with constant viscosity and no external force are as follows:
$$ \frac{\partial {u}^{\ast }}{\partial {x}^{\ast }}+\frac{\partial {v}^{\ast }}{\partial {y}^{\ast }}=0 $$
$$ \frac{\partial }{\partial {x}^{\ast }}\left({u^{\ast}}^2\right)+\frac{\partial }{\partial {y}^{\ast }}\left({u}^{\ast }{v}^{\ast}\right)=-\frac{1}{\rho^{\ast }}\frac{\partial {P}^{\ast }}{\partial {x}^{\ast }}+\frac{\mu^{\ast }}{\rho^{\ast }}\left(\frac{\partial^2{u}^{\ast }}{\partial {x^{\ast}}^2}+\frac{\partial^2{u}^{\ast }}{\partial {y^{\ast}}^2}\right) $$
$$ \frac{\partial }{\partial {x}^{\ast }}\left({u}^{\ast }{v}^{\ast}\right)+\frac{\partial }{\partial {y}^{\ast }}\left({v^{\ast}}^2\right)=-\frac{1}{\rho^{\ast }}\frac{\partial {P}^{\ast }}{\partial {y}^{\ast }}+\frac{\mu^{\ast }}{\rho^{\ast }}\left(\frac{\partial^2{v}^{\ast }}{\partial {x^{\ast}}^2}+\frac{\partial^2{v}^{\ast }}{\partial {y^{\ast}}^2}\right) $$
Making governing equations non-dimensional enables us to incorporate some fluid and geometric parameters and to generalize the results of the simulation. We scale the x and y coordinates by the dimension of cavity L, the velocities by lid velocity, and pressure by dynamic pressure as described below:
$$ x=\frac{x^{\ast }}{L^{\ast }}\kern0.5em ,\kern0.75em y=\frac{y^{\ast }}{L^{\ast }}\kern0.5em ,\kern0.5em u=\frac{u^{\ast }}{u_0^{\ast }}\kern0.75em ,\kern1em v=\frac{v^{\ast }}{u_0^{\ast }}\kern1em ,\kern0.75em P=\frac{P^{\ast }-{P}_0^{\ast }}{\frac{1}{2}{\rho}^{\ast }{u_0^{\ast}}^2}\kern0.75em $$
Substituting the definitions, one can attain non-dimensionalized equations:
$$ \frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}=0 $$
$$ \frac{\partial }{\partial x}\left({u}^2\right)+\frac{\partial }{\partial y}(uv)=-\frac{\partial P}{\partial x}+\frac{1}{\mathit{\operatorname{Re}}}\left(\frac{\partial^2u}{\partial {x}^2}+\frac{\partial^2u}{\partial {y}^2}\right) $$
$$ \frac{\partial }{\partial x}(uv)+\frac{\partial }{\partial y}\left({v}^2\right)=-\frac{\partial P}{\partial y}+\frac{1}{\mathit{\operatorname{Re}}}\left(\frac{\partial^2v}{\partial {x}^2}+\frac{\partial^2v}{\partial {y}^2}\right) $$
The fluid properties are incorporated into Reynolds number:
$$ \mathit{\operatorname{Re}}=\frac{\rho^{\ast }{u}_0^{\ast }{L}^{\ast }}{\mu^{\ast }} $$
The next step is to discretize the computational domain. Whether the grid system is collocated or staggered, integrating governing Eqs. (5)–(7) over an arbitrary control volume depicted in Fig. 2 gives semi-discretized Eqs. (9)–(11).
$$ \left[{A}_e{\hat{u}}_e-{A}_w{\hat{u}}_w\right]+\left[{A}_n{\hat{v}}_n-{A}_s{\hat{v}}_s\right]=0 $$
$$ \left[{A}_e{\hat{u}}_e{u}_e-{A}_w{\hat{u}}_w{u}_w\right]+\left[{A}_{\mathrm{n}}{\hat{v}}_n{u}_n-{A}_s{\hat{v}}_s{u}_s\right]=-{\left.\frac{\partial P}{\partial x}\right|}_P{A}_P{\Delta x}_P+\frac{1}{\mathit{\operatorname{Re}}}\left[{A}_e{\left.\frac{\partial u}{\partial x}\right|}_e-{A}_w{\left.\frac{\partial u}{\partial x}\right|}_w+{A}_n{\left.\frac{\partial u}{\partial y}\right|}_n-{A}_s{\left.\frac{\partial u}{\partial y}\right|}_s\right] $$
$$ \left[{A}_n{\hat{v}}_n{v}_n-{A}_s{\hat{v}}_s{v}_s\right]+\left[{A}_e{\hat{u}}_e{v}_e-{A}_w{\hat{u}}_w{v}_w\right]=-{\left.\frac{\partial P}{\partial y}\right|}_P{A}_P{\Delta y}_P+\frac{1}{\mathit{\operatorname{Re}}}\left[{A}_e{\left.\frac{\partial v}{\partial x}\right|}_e-{A}_w{\left.\frac{\partial v}{\partial x}\right|}_w+{A}_n{\left.\frac{\partial v}{\partial y}\right|}_n-{A}_s{\left.\frac{\partial v}{\partial y}\right|}_s\right] $$
The velocities marked with a hat are the convecting velocities, which convey the mass or momentum of fluid parcels. Considering a uniform Cartesian grid with Δx = Δy, and dividing the last three equations by the cross-sectional area, we have
$$ \left[{\hat{u}}_e-{\hat{u}}_w\right]+\left[{\hat{v}}_n-{\hat{v}}_s\right]=0 $$
$$ \left[{\hat{u}}_e{u}_e-{\hat{u}}_w{u}_w\right]+\left[{\hat{v}}_n{u}_n-{\hat{v}}_s{u}_s\right]=\left({P}_w-{P}_e\right)+\frac{1}{\mathit{\operatorname{Re}}}\left[{\left.\frac{\partial u}{\partial x}\right|}_e-{\left.\frac{\partial u}{\partial x}\right|}_w+{\left.\frac{\partial u}{\partial y}\right|}_n-{\left.\frac{\partial u}{\partial y}\right|}_s\right] $$
$$ \left[{\hat{v}}_n{v}_n-{\hat{v}}_s{v}_s\right]+\left[{\hat{u}}_e{v}_e-{\hat{u}}_w{v}_w\right]=\left({P}_s-{P}_n\right)+\frac{1}{\mathit{\operatorname{Re}}}\left[{\left.\frac{\partial v}{\partial x}\right|}_e-{\left.\frac{\partial v}{\partial x}\right|}_w+{\left.\frac{\partial v}{\partial y}\right|}_n-{\left.\frac{\partial v}{\partial y}\right|}_s\right] $$
The above equations govern all the control volumes and are applicable for both the collocated and staggered grids. In the following sections, the discretization of the equations for the collocated and staggered grids will be presented in detail.
Discretization in staggered grid
The schematic of staggered grid arrangement is depicted in Fig. 2. Apparently, the u, v, and scalar (pressure) control volumes are staggered with respect to each other. Firstly, consider x-momentum Eq. (13) for non-boundary u-control volumes shown in Fig. 2. Approximate the diffusion terms of viscous stresses by central difference scheme.
$$ \left[{\hat{u}}_e{u}_e-{\hat{u}}_w{u}_w\right]+\left[{\hat{v}}_n{u}_n-{\hat{v}}_s{u}_s\right]=\left({P}_w-{P}_e\right)+\frac{1}{\Delta xRe}\left[\left({u}_E-{u}_P\right)-\left({u}_P-{u}_W\right)+\left({u}_N-{u}_P\right)-\left({u}_P-{u}_S\right)\right] $$
Approximate convecting velocities by the central linear scheme as follows:
$$ {\hat{u}}_e=\frac{u_P+{u}_E}{2}=\frac{u_{i,J}+{u}_{i+1,J}}{2} $$
$$ {\hat{u}}_w=\frac{u_W+{u}_P}{2}=\frac{u_{i-1,J}+{u}_{i,J}}{2} $$
$$ {\hat{v}}_n=\frac{v_{I-1,j}+{v}_{I,j}}{2} $$
$$ {\hat{v}}_s=\frac{v_{I-1,j+1}+{v}_{I,j+1}}{2} $$
The upwind scheme is implemented for convected velocities. In order to generalize the formulation, the relation is derived for the second order upwind. The scheme can be simply changed to the first order upwind only by setting the coefficients 1.5 and 0.5 to 1.0 and 0.0 respectively.
$$ \kern3.5em {\hat{u}}_e{u}_e=\kern0.5em \left[1.5\max \left({\hat{u}}_e,0\right){u}_P\kern1.25em -0.5\max \left({\hat{u}}_e,0\right){u}_{W\kern1em }\right]-\left[1.5\max \left(-{\hat{u}}_e,0\right){u}_E-0.5\max \left(-{\hat{u}}_e,0\right){u}_{EE}\right] $$
$$ \kern3.5em {\hat{u}}_w{u}_w=\kern0.5em \left[1.5\max \left({\hat{u}}_w,0\right){u}_W\kern1.25em -0.5\max \left({\hat{u}}_w,0\right){u}_{WW\kern1em }\right]-\left[1.5\max \left(-{\hat{u}}_w,0\right){u}_P-0.5\max \left(-{\hat{u}}_w,0\right){u}_E\right] $$
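To make the switching logic of Eqs. (21) and (22) concrete, the following minimal Python sketch evaluates the east-face convective flux; the function name and argument names are illustrative, and setting second_order=False recovers the first-order scheme exactly as described above.

```python
def suds_face_flux(u_hat_e, u_P, u_E, u_W, u_EE, second_order=True):
    """Convective flux (u_hat_e * u_e) at the east face of a u-control
    volume, following Eq. (21). The max(., 0) terms switch the upwind
    stencil with the local flow direction; second_order=False replaces
    the 1.5/0.5 coefficients with 1.0/0.0 (first-order upwind)."""
    c1, c2 = (1.5, 0.5) if second_order else (1.0, 0.0)
    pos = max(u_hat_e, 0.0)   # flow in the positive x-direction
    neg = max(-u_hat_e, 0.0)  # flow in the negative x-direction
    return (c1 * pos * u_P - c2 * pos * u_W) - (c1 * neg * u_E - c2 * neg * u_EE)
```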
Note that at the vertex points (s and n for x-momentum and w and e for y-momentum), term uv is approximated by central difference scheme as follows:
$$ {\hat{v}}_n{u}_n={\hat{v}}_n\frac{u_P+{u}_N}{2} $$
$$ {\hat{v}}_s{u}_s={\hat{v}}_s\frac{u_S+{u}_P}{2} $$
The pressure force exerted on the faces of the u-control volume does not need to be approximated, because the faces of the u-control volume coincide with pressure nodes, so that
$$ \left({P}_w-{P}_e\right)=\left({P}_{I-1,J}-{P}_{I,J}\right) $$
Performing the aforementioned approximations for x-momentum equation, we attained the following algebraic linear equation based on the central node and neighboring node values of u-velocity.
$$ {a}_P{u}_P={a}_{WW}{u}_{WW}+{a}_W{u}_W+{a}_E{u}_E+{a}_{EE}{u}_{EE}+{a}_S{u}_S+{a}_N{u}_N+\left({P}_{I-1,J}-{P}_{I,J}\right) $$
Eq. (25), whose coefficients are given below, governs all the control volumes except for the boundary volumes.
$$ {a}_P=1.5\max \left({\hat{u}}_e,0\right)+1.5\max \left(-{\hat{u}}_w,0\right)+\frac{\left({\hat{v}}_n-{\hat{v}}_s\right)}{2}+\frac{4}{\Delta xRe} $$
$$ {a}_{WW}=-0.5\max \left({\hat{u}}_w,0\right) $$
$$ {a}_W=0.5\max \left({\hat{u}}_e,0\right)+1.5\max \left({\hat{u}}_w,0\right)+\frac{1}{\Delta xRe} $$
$$ {a}_E=1.5\max \left(-{\hat{u}}_e,0\right)+0.5\max \left(-{\hat{u}}_w,0\right)+\frac{1}{\Delta xRe} $$
$$ {a}_{EE}=-0.5\max \left(-{\hat{u}}_e,0\right) $$
$$ {a}_S=\frac{{\hat{v}}_s}{2}+\frac{1}{\Delta xRe} $$
$$ {a}_N=-\frac{{\hat{v}}_n}{2}+\frac{1}{\Delta xRe} $$
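As an illustration, a minimal sketch assembling the interior-node coefficients of Eqs. (26)–(32) is given below; it assumes a uniform grid with Δx = Δy, and the dictionary keys are merely illustrative labels.

```python
def u_momentum_coefficients(u_hat_e, u_hat_w, v_hat_n, v_hat_s, dx, Re,
                            second_order=True):
    """Coefficients of the discretized x-momentum equation, Eqs. (26)-(32),
    for an interior u-control volume on a uniform grid (dx = dy)."""
    c1, c2 = (1.5, 0.5) if second_order else (1.0, 0.0)
    D = 1.0 / (dx * Re)  # diffusion conductance of one face
    a = {
        "WW": -c2 * max(u_hat_w, 0.0),                                 # Eq. (27)
        "W":   c2 * max(u_hat_e, 0.0) + c1 * max(u_hat_w, 0.0) + D,    # Eq. (28)
        "E":   c1 * max(-u_hat_e, 0.0) + c2 * max(-u_hat_w, 0.0) + D,  # Eq. (29)
        "EE": -c2 * max(-u_hat_e, 0.0),                                # Eq. (30)
        "S":   0.5 * v_hat_s + D,                                      # Eq. (31)
        "N":  -0.5 * v_hat_n + D,                                      # Eq. (32)
    }
    a["P"] = (c1 * max(u_hat_e, 0.0) + c1 * max(-u_hat_w, 0.0)
              + 0.5 * (v_hat_n - v_hat_s) + 4.0 * D)                   # Eq. (26)
    return a
```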
The boundary conditions of x-momentum equation are
Near left wall: for the first column of half-volumes whose centers located at x = 0, the no-penetration condition must be satisfied.
$$ {u}_P\equiv {u}_{i,J}=0\kern5.25em i=1\kern0.5em ,\kern0.75em J=1\kern0.5em to\ n $$
For the second column of volumes, the node value u_WW required by the second-order upwind scheme does not exist, so, for the positive flow direction, the scheme must be reduced to the first-order upwind as follows:
$$ {\hat{u}}_w{u}_w=\left[\max \left({\hat{u}}_w,0\right){u}_W\ \right]-\left[1.5\max \left(-{\hat{u}}_w,0\right){u}_P-0.5\max \left(-{\hat{u}}_w,0\right){u}_E\right] $$
$$ i=2\kern0.5em ,\kern0.75em J=1\kern0.5em to\ n\kern1.25em $$
Near right wall: for the last column of half-volumes whose centers are located at x = 1, no-penetration condition must be satisfied.
$$ {u}_P\equiv {u}_{i,J}=0\kern5.25em i=n+1\kern0.5em ,\kern0.75em J=1\kern0.5em to\ n $$
For the second-to-last column of volumes, the discretization scheme must be amended.
$$ {\displaystyle \begin{array}{c}{\hat{u}}_e{u}_e=\kern0.5em \left[1.5\max \left({\hat{u}}_e,0\right){u}_P-0.5\max \left({\hat{u}}_e,0\right){u}_W\right]-\left[\max \left(-{\hat{u}}_e,0\right){u}_E\right]\\ {}i=n\kern0.5em ,\kern0.75em J=1\kern0.5em to\ n\end{array}} $$
Lower wall control volumes (the first row), i.e., in the vicinity of y = 0: in these volumes, the viscous term ∂u/∂y must be approximated by a forward scheme rather than a central scheme.
$$ {\left.\frac{\partial u}{\partial y}\right|}_s=\frac{\left({u}_P-0\right)}{\Delta y/2} $$
where u_{y = 0} = 0 is the velocity of the stationary lower wall. Similarly, for the upper wall, in the vicinity of y = 1, the viscous term ∂u/∂y must be approximated by a backward scheme rather than a central scheme:
$$ {\left.\frac{\partial u}{\partial y}\right|}_n=\frac{\left({u}_0-{u}_P\right)}{\Delta y/2} $$
For four corners, a combination of the mentioned boundary conditions must be used.
Similarly, consider y-momentum Eq. (14) for non-boundary v-control volumes depicted in Fig. 2. To discretize diffusion terms for those CVs, we use the CDS.
$$ \left[{\hat{v}}_n{v}_n-{\hat{v}}_s{v}_s\right]+\left[{\hat{u}}_e{v}_e-{\hat{u}}_w{v}_w\right]=\left({P}_s-{P}_n\right)+\frac{1}{\Delta xRe}\left[\left({v}_E-{v}_P\right)-\left({v}_P-{v}_W\right)+\left({v}_N-{v}_P\right)-\left({v}_P-{v}_S\right)\right] $$
Approximate convecting velocities at four faces by CDS.
$$ {\hat{u}}_e=\frac{u_{i+1,J-1}+{u}_{i+1,J}}{2} $$
$$ {\hat{u}}_w=\frac{u_{i,J-1}+{u}_{i,J}}{2} $$
$$ {\hat{v}}_n=\frac{v_P+{v}_N}{2}=\frac{v_{I,j}+{v}_{I,j+1}}{2} $$
$$ {\hat{v}}_s=\frac{v_S+{v}_P}{2}=\frac{v_{I,j-1}+{v}_{I,j}}{2} $$
The convected velocities at vertical faces are discretized by the second-order upwind scheme. To alter the scheme to the first order, the coefficients 1.5 and 0.5 should be changed to 1.0 and 0.0 respectively.
$$ \kern3.5em {\hat{v}}_n{v}_n=\kern0.5em \left[1.5\max \left({\hat{v}}_n,0\right){v}_P\kern1.25em -0.5\max \left({\hat{v}}_n,0\right){v}_{S\kern1em }\right]-\left[1.5\max \left(-{\hat{v}}_n,0\right){v}_N-0.5\max \left(-{\hat{v}}_n,0\right){v}_{NN}\right] $$
$$ \kern3.5em {\hat{v}}_s{v}_s=\kern0.75em \left[1.5\max \left({\hat{v}}_s,0\right){v}_S\kern1.25em -0.5\max \left({\hat{v}}_s,0\right){v}_{SS\kern1em }\right]-\left[1.5\max \left(-{\hat{v}}_s,0\right){v}_P-0.5\max \left(-{\hat{v}}_s,0\right){v}_N\right] $$
Note that at the vertex points w and e, the term uv is approximated by central difference scheme as follows:
$$ \kern3.5em {\hat{u}}_e{v}_e={\hat{u}}_e\frac{v_P+{v}_E}{2} $$
$$ \kern3.5em {\hat{u}}_w{v}_w={\hat{u}}_w\frac{v_W+{v}_P}{2} $$
The pressures at vertical faces coincide with their nodal points as follows:
$$ \left({P}_s-{P}_n\right)=\left({P}_{I,J-1}-{P}_{I,J}\right) $$
The final form of discretized y-momentum equation based on central and neighboring node values is as follows:
$$ {a}_P{v}_P={a}_{SS}{v}_{SS}+{a}_S{v}_S+{a}_N{v}_N+{a}_{NN}{v}_{NN}+{a}_W{v}_W+{a}_E{v}_E+\left({P}_{I,J-1}-{P}_{I,J}\right) $$
The above equation governs all the v-velocity CVs except for the boundaries. The relations of the coefficients for the second-order upwind are
$$ {a}_P=1.5\max \left({\hat{v}}_n,0\right)+1.5\max \left(-{\hat{v}}_s,0\right)+\frac{\left({\hat{u}}_e-{\hat{u}}_w\right)}{2}+\frac{4}{\Delta xRe} $$
$$ {a}_{SS}=-0.5\max \left({\hat{v}}_s,0\right) $$
$$ {a}_S=0.5\max \left({\hat{v}}_n,0\right)+1.5\max \left({\hat{v}}_s,0\right)+\frac{1}{\Delta xRe} $$
$$ {a}_N=1.5\max \left(-{\hat{v}}_n,0\right)+0.5\max \left(-{\hat{v}}_s,0\right)+\frac{1}{\Delta xRe} $$
$$ {a}_{NN}=-0.5\max \left(-{\hat{v}}_n,0\right) $$
$$ {a}_W=\frac{{\hat{u}}_w}{2}+\frac{1}{\Delta xRe} $$
$$ {a}_E=-\frac{{\hat{u}}_e}{2}+\frac{1}{\Delta xRe} $$
Boundary conditions of y-momentum equation are as follows:
Near the bottom wall: the no-penetration condition must be satisfied for the first row.
$$ {v}_P\equiv {v}_{I,j}=0\kern5.25em j=1\kern0.5em ,\kern0.75em I=1\kern0.5em to\ n $$
For the second row of v-velocity CVs, the second-order upwind must be replaced by the first-order scheme for the positive flow direction.
$$ {\displaystyle \begin{array}{c}{\hat{v}}_s{v}_s=\left[\max \left({\hat{v}}_s,0\right){v}_S\right]-\left[1.5\max \left(-{\hat{v}}_s,0\right){v}_P-0.5\max \left(-{\hat{v}}_s,0\right){v}_N\right]\\ {}j=2\kern0.5em ,\kern0.75em I=1\kern0.5em to\ n\ \end{array}} $$
Near the top wall: the no-penetration condition must be satisfied for the last row (y = 1)
$$ {v}_P\equiv {v}_{I,j}=0\kern5.25em j=n+1\kern0.5em ,\kern0.75em I=1\kern0.5em \mathrm{to}\ n $$
For the second-to-last CV, the negative flow direction must be approximated by the first-order upwind.
$$ {\displaystyle \begin{array}{c}{\hat{v}}_n{v}_n=\kern0.5em \left[1.5\max \left({\hat{v}}_n,0\right){v}_P-0.5\max \left({\hat{v}}_n,0\right){v}_{S\kern1em }\right]-\left[\kern0.5em \max \left(-{\hat{v}}_n,0\right){v}_N\ \right]\\ {}j=n\kern0.5em ,\kern0.75em I=1\kern0.5em to\ n\end{array}} $$
In the leftmost CVs (near x = 0), use forward difference scheme instead of CDS for discretization of viscous stress as
$$ {\left.\frac{\partial v}{\partial x}\right|}_w=\frac{\left({v}_P-0\right)}{\Delta x/2} $$
where v_{x = 0} = 0 is the velocity of the stationary left wall.
For the rightmost CVs (near x = 1), implement the backward scheme to discretize the diffusion term.
$$ {\left.\frac{\partial v}{\partial x}\right|}_e=\frac{\left(0-{v}_P\right)}{\Delta x/2} $$
At four corners of the computational domain, a combination of the described boundary conditions must be utilized.
In order to implement the family of SIMPLE algorithms, the continuity equation is manipulated to attain an algebraic equation in terms of nodal pressures, to be used in conjunction with the momentum equations.
$$ \left[{u}_e-{u}_w\right]+\left[{v}_n-{v}_s\right]=0 $$
The velocities at the faces of scalar CVs are their nodal values so
$$ \left[{u}_{i+1,J}-{u}_{i,J}\right]+\left[{v}_{I,j+1}-{v}_{I,j}\right]=0 $$
According to the SIMPLE algorithm, the first step is to guess the pressure field. Nevertheless, due to the non-linearity of the momentum equation, the velocity field must be guessed also. Consider the discretized momentum equations for exact solution u and approximate solution u*:
$$ {a}_P{u}_P=\sum {a}_{nb}{u}_{nb}+\left({P}_{I-1,J}-{P}_{I,J}\right) $$
$$ {a}_P{u}_P^{\ast }=\sum {a}_{nb}{u}_{nb}^{\ast }+\left({P}_{I-1,J}^{\ast }-{P}_{I,J}^{\ast}\right) $$
Subtracting the last two equations gives
$$ {a}_P\left({u}_P-{u}_P^{\ast}\right)=\sum {a}_{nb}\left({u}_{nb}-{u}_{nb}^{\ast}\right)+\left[\left({P}_{I-1,J}-{P}_{I-1,J}^{\ast}\right)-\left({P}_{I,J}-{P}_{I,J}^{\ast}\right)\right] $$
Define the exact pressure and velocity as the summation of approximate plus correction.
$$ P={P}^{\ast }+{P}^{\prime } $$
$$ u={u}^{\ast }+{u}^{\prime } $$
Note that a relaxation factor is used for pressure correction.
Now, the x-momentum equation can be written as follows:
$$ {a}_P{u}_P^{\prime }=\sum {a}_{nb}{u}_{nb}^{\prime }+\left[{P}_{I-1,J}^{\prime }-{P}_{I,J}^{\prime}\right] $$
The SIMPLE algorithm ignores the first term on the right-hand side, whereas SIMPLEC uses the assumption \( {u}_{nb}^{\prime }={u}_P^{\prime } \):
$$ {u}_P^{\prime }={d}_P^u\left[{P}_{I-1,J}^{\prime }-{P}_{I,J}^{\prime}\right] $$
where \( {d}_P^u \) for the SIMPLE and SIMPLEC algorithms equals \( \frac{1}{a_P} \) and \( \frac{1}{a_P-\sum {a}_{nb}} \), respectively.
Similarly, for velocity v, we have the following relations:
$$ v={v}^{\ast }+{v}^{\prime } $$
$$ {v}_P^{\prime }={d}_P^v\left[{P}_{I,J-1}^{\prime }-{P}_{I,J}^{\prime}\right] $$
Incorporating Eqs. (69) and (71), and Eqs. (72) and (73), and then putting the resulting relations into the continuity equation yields the following algebraic equation:
$$ \left({d}_{i,J}^u+{d}_{i+1,J}^u+{d}_{i,J}^v+{d}_{i,J+1}^v\right){P}_P^{\prime }={d}_{i,J}^u{P}_W^{\prime }+{d}_{i+1,J}^u{P}_E^{\prime }+{d}_{i,J}^v{P}_S^{\prime }+{d}_{i,J+1}^v{P}_N^{\prime }+b $$
Where the source term of b is
$$ b=\left[{u}_{i,J}^{\ast }-{u}_{i+1,J}^{\ast}\right]+\left[{v}_{I,j}^{\ast }-{v}_{I,j+1}^{\ast}\right] $$
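For clarity, the sketch below applies the velocity and pressure corrections of Eqs. (69)–(73) on a staggered grid once the pressure-correction Eq. (74) has been solved. Array shapes, index conventions, and the under-relaxation value are assumptions for illustration, not part of the original formulation.

```python
import numpy as np

def d_coefficient(a_P, sum_a_nb, scheme="SIMPLE"):
    """Pressure-term coefficient d_P: 1/a_P for SIMPLE,
    1/(a_P - sum(a_nb)) for SIMPLEC."""
    return 1.0 / a_P if scheme == "SIMPLE" else 1.0 / (a_P - sum_a_nb)

def apply_corrections(u_star, v_star, p_star, p_prime, d_u, d_v, alpha_p=0.3):
    """One correction step. u_star has shape (nx+1, ny) (vertical faces),
    v_star has shape (nx, ny+1) (horizontal faces), pressures have shape
    (nx, ny); alpha_p is an assumed pressure relaxation factor."""
    u = u_star.copy()
    v = v_star.copy()
    # Eq. (71): u' = d_u (P'_{I-1,J} - P'_{I,J}) on interior vertical faces
    u[1:-1, :] += d_u[1:-1, :] * (p_prime[:-1, :] - p_prime[1:, :])
    # Eq. (73): v' = d_v (P'_{I,J-1} - P'_{I,J}) on interior horizontal faces
    v[:, 1:-1] += d_v[:, 1:-1] * (p_prime[:, :-1] - p_prime[:, 1:])
    # relaxed pressure update (a relaxation factor is used, as noted above)
    p = p_star + alpha_p * p_prime
    return u, v, p
```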
In the SIMPLER algorithm, the predictive pressure P* itself is calculated in the initial stage from a virtual velocity field. To do this, guess the velocities and substitute them into the pressure-free momentum Eqs. (76) and (77) to obtain the pseudo-velocities.
$$ \overset{\sim }{u}=\frac{\sum {a}_{nb}^u{u}_{nb}^{\ast }+b}{a_P^u} $$
$$ \overset{\sim }{v}=\frac{\sum {a}_{nb}^v{v}_{nb}^{\ast }+b}{a_P^v} $$
Insert the pseudo-velocities into relations (78) and (79), and then replace the real velocities in the continuity equation. The result is an algebraic equation for the predictive pressure P*. From this stage onwards, the SIMPLER algorithm proceeds like the SIMPLE algorithm.
$$ u=\overset{\sim }{u}+{d}^u\left({P}_{I-1,J}-{P}_{I,J}\right) $$
$$ v=\overset{\sim }{v}+{d}^v\left({P}_{I,J-1}-{P}_{I,J}\right) $$
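As a small illustration of the pseudo-velocity computation of Eq. (76) (the v-counterpart, Eq. (77), is identical in form), consider the following sketch; the argument names are illustrative:

```python
def pseudo_velocity(a_nb, u_nb, b, a_P):
    """Pseudo-velocity of Eq. (76): the momentum equation evaluated with the
    guessed neighbour velocities and without the pressure term."""
    return (sum(a * u for a, u in zip(a_nb, u_nb)) + b) / a_P
```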
Discretization in collocated grid
In the collocated grid (see Fig. 1), the faces and nodes of all the variables are the same. The formulation for the collocated grid is simpler than, though largely similar to, that for the staggered grid. One difference is in estimating the convecting velocities.
Arbitrary control volume
We again recall semi-discretized conservation Eqs. (12)–(14). According to Rhie and Chow, the convecting velocity on a surface equals the average of two neighboring velocities on nodes, plus the third derivative of pressure gradient function as follows:
$$ {\hat{u}}_e=\frac{u_P+{u}_E}{2}+\left[-{d}_e^u{\left(\frac{\partial P}{\partial x}\right)}_e+\frac{1}{2}{d}_P^u{\left(\frac{\partial P}{\partial x}\right)}_P+\frac{1}{2}{d}_E^u{\left(\frac{\partial P}{\partial x}\right)}_E\right]\varDelta x $$
$$ {\hat{v}}_n=\frac{v_P+{v}_N}{2}+\left[-{d}_n^v{\left(\frac{\partial P}{\partial y}\right)}_n+\frac{1}{2}{d}_P^v{\left(\frac{\partial P}{\partial y}\right)}_P+\frac{1}{2}{d}_N^v{\left(\frac{\partial P}{\partial y}\right)}_N\right]\varDelta y $$
Analogous relationships can be derived for \( {\hat{u}}_w \) and \( {\hat{v}}_s \).
For the non-boundary control volumes, we estimate the pressure gradient terms of Eqs. (80) and (81) with CDS; for boundary control volumes, a backward or forward linear approximation is used, depending on the flow direction. For non-boundary faces,
$$ {\hat{u}}_e=\frac{u_P+{u}_E}{2}+\left[\frac{1}{2}\left({d}_P^u+{d}_E^u\right)\left({P}_P-{P}_E\right)-\frac{1}{4}\left({d}_P^u\right)\left({P}_W-{P}_E\right)-\frac{1}{4}\left({d}_E^u\right)\left({P}_P-{P}_{EE}\right)\right] $$
$$ {\hat{u}}_w=\frac{u_W+{u}_P}{2}+\left[\frac{1}{2}\left({d}_W^u+{d}_P^u\right)\left({P}_W-{P}_P\right)-\frac{1}{4}\left({d}_W^u\right)\left({P}_{WW}-{P}_P\right)-\frac{1}{4}\left({d}_P^u\right)\left({P}_W-{P}_E\right)\right] $$
$$ {\hat{v}}_n=\frac{v_P+{v}_N}{2}+\left[\frac{1}{2}\left({d}_P^v+{d}_N^v\right)\left({P}_P-{P}_N\right)-\frac{1}{4}\left({d}_P^v\right)\left({P}_S-{P}_N\right)-\frac{1}{4}\left({d}_N^v\right)\left({P}_P-{P}_{NN}\right)\right] $$
$$ {\hat{v}}_s=\frac{v_S+{v}_P}{2}+\left[\frac{1}{2}\left({d}_S^v+{d}_P^v\right)\left({P}_S-{P}_P\right)-\frac{1}{4}\left({d}_S^v\right)\left({P}_{SS}-{P}_P\right)-\frac{1}{4}\left({d}_P^v\right)\left({P}_S-{P}_N\right)\right] $$
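The following minimal sketch evaluates the east-face convecting velocity of Eq. (82); the remaining faces (Eqs. (83)–(85)) follow by symmetry. All argument names are illustrative.

```python
def rhie_chow_u_e(u_P, u_E, P_W, P_P, P_E, P_EE, d_P, d_E):
    """East-face convecting velocity of Eq. (82): linear average of the two
    nodal velocities plus a pressure-dissipation term that suppresses the
    checkerboard pressure field on the collocated grid."""
    average = 0.5 * (u_P + u_E)
    dissipation = (0.5 * (d_P + d_E) * (P_P - P_E)
                   - 0.25 * d_P * (P_W - P_E)
                   - 0.25 * d_E * (P_P - P_EE))
    return average + dissipation
```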
Pressure forces in the momentum equations are
$$ \left({P}_w-{P}_e\right)=\frac{P_W+{P}_P}{2}-\frac{P_P+{P}_E}{2}=\frac{P_W-{P}_E}{2} $$
$$ \left({P}_s-{P}_n\right)=\frac{P_S+{P}_P}{2}-\frac{P_P+{P}_N}{2}=\frac{P_S-{P}_N}{2} $$
The final forms of the momentum equations for the non-boundary control volumes are obtained as follows:
$$ {a}_P{u}_P=\sum {a}_{nb}{u}_{nb}+\left({P}_W-{P}_E\right)/2 $$
$$ {a}_P{v}_P=\sum {a}_{nb}{v}_{nb}+\left({P}_S-{P}_N\right)/2 $$
After calculating the velocities from the momentum Eqs. (88) and (89), couple velocity and pressure by incorporating Eqs. (82)–(85) into continuity Eq. (12). Subsequently, we obtain an algebraic equation in terms of pressure:
$$ {a}_P{P}_P={a}_{WW}{P}_{WW}+{a}_W{P}_W+{a}_E{P}_E+{a}_{EE}{P}_{EE}+{a}_{SS}{P}_{SS}+{a}_S{P}_S+{a}_N{P}_N+{a}_{NN}{P}_{NN}+b $$
Where source term is
$$ b=\sum {u}_{\mathrm{in}}^{\mathrm{CDS}}-\sum {u}_{\mathrm{out}}^{\mathrm{CDS}} $$
Applying boundary conditions on the continuity equation requires a little care. The coefficients of Eq. (90) are presented in Tables 1 and 2.
Table 1 Coefficients of pressure correction equation which vary in x-direction
Table 2 Coefficients of pressure correction equation which vary in y-direction
In order to apply the boundary conditions in the collocated grid, we proceed in the same way as for the staggered grid. It is noteworthy that in the collocated grid, more inner CVs are influenced by the boundary conditions than in the staggered grid.
Vorticity calculation: we calculate vorticity at nodal points. According to Fig. 2, the vorticity for non-boundary nodes in the staggered grid is calculated by CDS:
$$ \omega =\frac{\partial v}{\partial x}-\frac{\partial u}{\partial y}=\frac{v_{I,j}-{v}_{I-1,j}}{\Delta x}-\frac{u_{i,J}-{u}_{i,J-1}}{\Delta y} $$
Staggered grid arrangement
In the collocated grid, velocity quantity on surfaces is calculated by averaging, and then vorticity is obtained by CDS approximation as follows:
$$ \omega =\frac{\partial v}{\partial x}-\frac{\partial u}{\partial y}=\frac{\frac{v_{I,J-1}+{v}_{I,J}}{2}-\frac{v_{I-1,J-1}+{v}_{I-1,J}}{2}}{\Delta x}-\frac{\frac{u_{I-1,J}+{u}_{I,J}}{2}-\frac{u_{I-1,J-1}+{u}_{I,J-1}}{2}}{\Delta y} $$
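A vectorized sketch of Eq. (93) for the collocated grid is shown below; u and v are nodal arrays indexed [I, J] with I along x, an assumed layout.

```python
import numpy as np

def vorticity_collocated(u, v, dx, dy):
    """Vorticity omega = dv/dx - du/dy at interior grid vertices, Eq. (93):
    face values come from averaging the two adjacent nodal values, which are
    then differenced with CDS."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    dv_dx = (0.5 * (v[1:, :-1] + v[1:, 1:])
             - 0.5 * (v[:-1, :-1] + v[:-1, 1:])) / dx
    du_dy = (0.5 * (u[:-1, 1:] + u[1:, 1:])
             - 0.5 * (u[:-1, :-1] + u[1:, :-1])) / dy
    return dv_dx - du_dy
```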
Results and discussions
Staggered grid independency
First, we need to examine the grid independence of the results. The grid independence study is carried out with FUDS for Reynolds numbers of 10, 100, and 1000. The v-velocity profiles at y = 1/2 and the vorticity profiles at x = 0.9 (chosen because of the formation of the secondary vortex in the lower right corner of the domain) are used to check grid independency, though only the results of the latter are presented.
For Reynolds number of 10, the vorticity profiles at x = 0.9 are illustrated in Fig. 3. Four different grid systems of 10 × 10, 20 × 20, 40 × 40, and 60 × 60 are investigated. It is clear from the figure that the grid independency is obtained by grid 40 × 40. Figure 4 displays the vorticity profiles at x = 0.9 for grids 20 × 20, 40 × 40, 60 × 60, and 80 × 80 at Reynolds number of 100. Obviously, the grid independency for Re = 100 is obtained by grid 60 × 60. The vorticity profiles of grids 40 × 40, 60 × 60, 80 × 80, 100 × 100, 120 × 120, and 140 × 140 for Re = 1000 are presented in Fig. 5. The mesh independency in this case is obtained by grid 120 × 120.
Staggered grid independency; vorticity profiles at x = 0.9 for Re = 10
Staggered grid independency; vorticity profiles at x = 0.9 for Re = 100
Staggered grid independency; vorticity profiles at x = 0.9 for Re = 1000
Collocated grid independency
Similar to those done for the staggered grid, we use the vorticity profiles at x = 0.9 for investigating the grid independence. The vorticity profiles at x = 0.9 and Reynolds number 10 is presented for grids 20 × 20, 40 × 40, and 60 × 60 in Fig. 6. It is clear from the figure that grid independence is obtained by grid 40 × 40. Figure 7 presents the vorticity profiles at x = 0.9 and Reynolds number 100 for grids 40 × 40, 60 × 60, and 80 × 80. This figure reveals that grid 60 × 60 is appropriate for independent results at Re = 100. The vorticity profiles at x = 0.9 with Reynolds number 1000 are depicted for grids 60 × 60, 80 × 80, 100 × 100, 120 × 120, and 140 × 140 in Fig. 8. It is clear from the figure that grid independency is obtained for Re = 1000 by grid 120 × 120.
Collocated grid independency; vorticity profiles at x = 0.9 for Re = 10
Collocated grid independency; vorticity profiles at x = 0.9 for Re = 100
Collocated grid independency; vorticity profiles at x = 0.9 for Re = 1000
Consequently, collocated and staggered grids require the same grid size to attain independent results at each Reynolds number.
Staggered versus collocated
To compare the results of the staggered and collocated grids with each other and with the benchmark of Ghia, Ghia, and Shin (1982), Figs. 9, 10, and 11 are provided. The u-velocity profiles at x = 1/2 are shown for the three Reynolds numbers of 10, 100, and 1000 obtained by FUDS. The results of Ghia et al. (1982) are presented as the benchmark for Re = 100 and 1000.
u-velocity profiles at x = 1/2 for Re = 10
u-velocity profiles at x = 1/2 for Re = 100
u-velocity profiles at x = 1/2 for Re = 1000
The results show that with coarser grids (fewer nodes), the difference between the results of the collocated grid and the staggered grid is larger. Moreover, the values obtained with the staggered grid are slightly more accurate than the collocated ones in comparison with the reference values. It should be pointed out that the error of the two implemented approaches is higher at higher Reynolds numbers, especially in regions with steep velocity gradients (see Fig. 11 for u at y = 0.17 and y = 0.85). This problem can be resolved by implementing a TVD scheme instead of the upwind scheme.
FUDS versus SUDS
To compare the results of the second-order UDS with the results presented in the previous sections, which were obtained by the first-order UDS, simulations have been performed with different numbers of nodes at Reynolds numbers 10, 100, and 1000. Prior to that, we studied the grid independency of the SUDS results. For Re = 10, grid 40 × 40 gives independent results, identical to FUDS. At Reynolds number 100, grids 40 × 40 and 50 × 50 give independent results with SUDS, whereas with FUDS mesh independency is obtained by grid 60 × 60. At Re = 1000, grid 80 × 80 attains independent outcomes with SUDS, whereas grid independence is obtained at grid 120 × 120 with FUDS.
According to the aforementioned results, it can be concluded that at low Reynolds numbers, the grid sizes in which independency is attained are the same for both FUDS and SUDS. In contrast, at the high Reynolds numbers (Re > 100), FUDS require a finer grid than SUDS to attain independency. Consequently, the second order upwind scheme is an appropriate approximation for the large convection term at high Reynolds numbers.
Now, in order to compare the results of FUDS and SUDS, the u-velocity profiles at x = 1/2 are illustrated in Figs. 12, 13, and 14 respectively for 3 Reynolds numbers of 10, 100, and 1000 obtained with staggered grid. The results of (Ghia et al. 1982) are also brought in Figs. 13 and 14 for comparison.
FUDS versus SUDS; u-velocity profiles at x = 1/2 for Re = 10
FUDS versus SUDS; u-velocity profiles at x = 1/2 for Re = 100
FUDS versus SUDS; u-velocity profiles at x = 1/2 for Re = 1000
With regard to these profiles, there is no superiority of SUDS over FUDS at the low Reynolds number of 10. By contrast, SUDS has significantly less error than FUDS at Reynolds number 100 on coarse grids, which reveals the accuracy of SUDS in approximating the momentum convection terms. Of course, the errors of both schemes decrease as the grid is refined. In all cases, the SUDS values are closer to the reference results. The differences in the values presented in Fig. 14 for Re = 1000 are more obvious. At this Reynolds number, FUDS has a considerable error even with a fine grid of 120 × 120 (noting that the FUDS results are grid independent at this number of nodes). The SUDS results, even with grid 40 × 40, have less error than the grid-independent results of FUDS. Furthermore, SUDS with grid 120 × 120 coincides exactly with the benchmark of Ghia et al. (1982).
Coupling algorithms of SIMPLE family
First, it must be noted that all the previous results presented in this paper were obtained with the SIMPLE algorithm. To investigate the accuracy and the performance of the algorithms in the iterative solution process, the residual of the x-momentum equation is plotted for the three algorithms of the SIMPLE family at Reynolds numbers of 10, 100, and 1000. The residual in terms of iteration number at Re = 10 for SIMPLE, SIMPLEC, and SIMPLER is shown in Fig. 15. The SIMPLER algorithm converges in about 40 % of the number of iterations required by SIMPLE. Considering the fact that the number of calculations per iteration in SIMPLER is about 30 % larger than in SIMPLE (Versteeg and Malalasekera, 2007), the simulation time of SIMPLER is about half that of SIMPLE. Unfortunately, SIMPLER produces a strongly oscillatory residual history. SIMPLEC ranks second in speed; it also oscillates, though less than SIMPLER.
Performance of different coupling algorithms at Re = 10
Figure 16 depicts the residuals of the algorithms at Re = 100. The performances of the SIMPLE and SIMPLEC algorithms are alike and nearly smooth, while SIMPLER is still faster but substantially oscillatory. The residuals for Reynolds number 1000 are illustrated in Fig. 17. Here, the trends are thoroughly different from the previous graphs. From iteration number 800 onwards, the SIMPLEC algorithm loses its performance and its convergence rate drops. SIMPLE and SIMPLER behave fairly smoothly with acceptable convergence rates. Multiplying the number of iterations by the number of calculations per iteration ranks SIMPLER as the fastest.
Performance of different coupling algorithms at Re = 100
Performance of different coupling algorithms at Re = 1000
All in all, SIMPLEC algorithm is a proper choice for low Reynolds numbers of order of 10, SIMPLE is suitable for Reynolds numbers of order of 102, and SIMPLER is the fastest for moderate Reynolds numbers of order of 103.
Overall comparison of the methods
In order to sum up the results of FUDS versus SUDS and staggered grid versus collocated grid, and also to compare with the reference, u-velocity profiles along the y-axis at x = 1/2 and v-velocity profiles along the x-axis at y = 1/2 for Re = 1000 are depicted in Figs. 18 and 19, respectively.
Comparison of methods; u-velocity profile at x = 1/2 for Re = 1000
Comparison of methods; v-velocity profile at y = 1/2 for Re = 1000
According to the two figures, it can be inferred that FUDS is rather inaccurate due to numerical diffusion, especially in regions with large gradients, and it makes the velocity profile smoother. Moreover, the staggered and collocated grids give the same results with FUDS. Nevertheless, with SUDS, the staggered grid has more accuracy than the collocated grid. It is remarkable that the results of the staggered grid with 120 × 120 and SUDS coincide with the results of Ghia et al. (1982). Overall, the staggered grid with SUDS gives the most accurate results, followed by the collocated grid with SUDS.
In the present paper, a lid-driven cavity problem is modeled via two basically different approaches of spatial discretization: collocated and staggered. From CFD point of view, it is noteworthy that the semi-discrete equations of the staggered grid and the collocated grid are similar. The main difference is in the calculation of convecting velocities and applying the boundary conditions.
The grid independency study proves that collocated and staggered grids require equal grid sizes to attain independent results at the same Reynolds numbers. With a coarse grid, the difference between the results of the collocated grid and the staggered grid is larger. Also, the error of the two approaches is higher at higher Reynolds numbers, especially in regions with steep gradients. At low Reynolds numbers, the grid size at which independency is attained is the same for both FUDS and SUDS. In contrast, at moderate Reynolds numbers, FUDS requires a finer grid than SUDS to attain independency. There is no superiority of SUDS over FUDS at low Reynolds numbers, whereas SUDS has considerably less error than FUDS at moderate Reynolds numbers. By comparing the different coupling algorithms, it can be concluded that the SIMPLEC algorithm is a proper choice for low Reynolds numbers of order 10, SIMPLE is suitable for Reynolds numbers of order 102, and SIMPLER is the fastest for moderate Reynolds numbers of order 103.
Briefly, FUDS is rather inaccurate due to numerical diffusion, especially in regions with large gradients, and it makes the velocity profile smoother; the staggered and collocated grids give the same results with FUDS. Besides, the staggered grid with SUDS gives the most accurate results, followed by the collocated grid with SUDS.
A cross-sectional area
a coefficient in discretized equation
b source term
d pressure term coefficient
L cavity dimension
P pressure
P0 reference pressure
Re Reynolds number
u velocity in x-direction
u0 lid velocity
v velocity in y-direction
x longitudinal coordinate
y transversal coordinate
ρ density
μ dynamic viscosity
Subscripts
I node number in x-direction
i face number in x-direction
J node number in y-direction
j face number in y-direction
* (superscript) dimensional parameter
The output data of the computer code will be available on request via email to the corresponding author.
Ding, P. (2017). Solution of lid-driven cavity problems with an improved SIMPLE algorithm at high Reynolds numbers. International Journal of Heat and Mass Transfer, 115, 942–954.
dos Santos, D. D. O., et al. (2011). Numerical approximations for flow of viscoplastic fluids in a lid-driven cavity. Journal of Non-Newtonian Fluid Mechanics, 166(12–13), 667–679.
Elshehabey, H. M., & Ahmed, S. E. (2015). MHD mixed convection in a lid-driven cavity filled by a nanofluid with sinusoidal temperature distribution on the both vertical walls using Buongiorno's nanofluid model. International Journal of Heat and Mass Transfer, 88, 181–202.
Ghia, U., Ghia, K. N., & Shin, C. T. (1982). High-Re solutions for incompressible flow using the Navier-Stokes equations and a multigrid method. Journal of Computational Physics, 48, 387–411.
Gutt, R., & Groşan, T. (2015). On the lid-driven problem in a porous cavity. A theoretical and numerical approach. Applied Mathematics and Computation, 266, 1070–1082.
Indukuri, J. V., & Maniyeri, R. (2018). Numerical simulation of oscillating lid driven square cavity. Alexandria Engineering Journal, 57, 2609–2625.
McDonough, J. M. (2007). Parallel simulation of turbulent flow in a 3-D lid-driven cavity. In Parallel computational fluid dynamics 2006 (pp. 245–252). Elsevier Science BV. https://www.sciencedirect.com/science/article/pii/B978044453035650033X.
Patil, D. V., Lakshmisha, K. N., & Rogg, B. (2006). Lattice Boltzmann simulation of lid-driven flow in deep cavities. Computers & Fluids, 35(10), 1116–1125.
Peng, Y.-F., Shiau, Y.-H., & Hwang, R. R. (2003). Transition in a 2-D lid-driven cavity flow. Computers & Fluids, 32(3), 337–352.
Tamer, A. A. M., Khalid, M. S., Mohamed, A. K., & Ahmed, A. A. (2017). Revisiting the lid-driven cavity flow problem: Review and new steady state benchmarking results using GPU accelerated code. Alexandria Engineering Journal, 56, 123–135.
Versteeg, H. K., & Malalasekera, W. (2007). An introduction to computational fluid dynamics: The finite volume method (2nd ed.). Pearson Education.
Yapici, K., Karasozen, B., & Uludag, Y. (2009). Finite volume simulation of viscoelastic laminar flow in a lid-driven cavity. Journal of Non-Newtonian Fluid Mechanics, 164, 51–65.
The authors express their special thanks to Professor Ali Ashrafizadeh from K.N. Toosi University of Technology for all his help.
This research does not have any funding.
Department of Mechanical Engineering, K.N. Toosi University of Technology, Tehran, 19919-43344, Iran
A. A. Boroujerdi & M. Hamzeh
A. A. Boroujerdi
M. Hamzeh
AAB has developed the model and wrote the computer code. MMH has written the manuscript text and prepared the figures. The manuscript text is finally amended by AAB. Both authors read and approved the final manuscript.
Correspondence to A. A. Boroujerdi.
Boroujerdi, A.A., Hamzeh, M. Comparison of varieties of numerical methods applied to lid-driven cavity flow: coupling algorithms, staggered grid vs. collocated grid, and FUDS vs. SUDS. Int J Mech Mater Eng 14, 7 (2019). https://doi.org/10.1186/s40712-019-0104-7
Numerical simulation
Collocated
Lid-driven cavity
Upwind scheme
Coupling algorithm | CommonCrawl |
A shortened verbal autopsy instrument for use in routine mortality surveillance systems
Peter Serina1,
Ian Riley2,
Andrea Stewart1,
Abraham D. Flaxman1,
Rafael Lozano3,1,
Meghan D Mooney1,
Richard Luning1,
Bernardo Hernandez1,
Robert Black4,
Ramesh Ahuja5,6,
Nurul Alam7,
Sayed Saidul Alam7,
Said Mohammed Ali8,
Charles Atkinson1,
Abdulla H. Baqui4,
Hafizur R. Chowdhury23,
Lalit Dandona1,9,
Rakhi Dandona9,
Emily Dantzer10,
Gary L Darmstadt11,
Vinita Das12,
Usha Dhingra4,8,
Arup Dutta4,8,
Wafaie Fawzi13,
Michael Freeman1,
Saman Gamage14,
Sara Gomez15,
Dilip Hensman14,
Spencer L. James1,
Rohina Joshi16,
Henry D. Kalter4,
Aarti Kumar5,6,
Vishwajeet Kumar5,6,
Marilla Lucero17,
Saurabh Mehta18,
Bruce Neal19,20,
Summer Lockett Ohno1,
David Phillips1,
Kelsey Pierce1,
Rajendra Prasad12,
Devarsetty Praveen21,
Zul Premji22,
Dolores Ramirez-Villalobos3,
Rasika Rampatige2,
Hazel Remolador17,
Minerva Romero3,
Mwanaidi Said22,
Diozele Sanvictores17,
Sunil Sazawal4,8,
Peter K. Streatfield7,
Veronica Tallo17,
Alireza Vadhatpour1,
Nandalal Wijesekara14,
Christopher J. L. Murray1 &
Alan D. Lopez23
BMC Medicine volume 13, Article number: 302 (2015)
Verbal autopsy (VA) is recognized as the only feasible alternative to comprehensive medical certification of deaths in settings with no or unreliable vital registration systems. However, a barrier to its use by national registration systems has been the amount of time and cost needed for data collection. Therefore, a short VA instrument (VAI) is needed. In this paper we describe a shortened version of the VAI developed for the Population Health Metrics Research Consortium (PHMRC) Gold Standard Verbal Autopsy Validation Study using a systematic approach.
We used data from the PHMRC validation study. Using the Tariff 2.0 method, we first established a rank order of individual questions in the PHMRC VAI according to their importance in predicting causes of death. Second, we reduced the size of the instrument by dropping questions in reverse order of their importance. We assessed the predictive performance of the instrument as questions were removed at the individual level by calculating chance-corrected concordance and at the population level with cause-specific mortality fraction (CSMF) accuracy. Finally, the optimum size of the shortened instrument was determined using a first derivative analysis of the decline in performance as the size of the VA instrument decreased for adults, children, and neonates.
The full PHMRC VAI had 183, 127, and 149 questions for adult, child, and neonatal deaths, respectively. The shortened instrument developed had 109, 69, and 67 questions, respectively, representing a decrease in the total number of questions of 40-55 %. The shortened instrument, with text, showed non-significant declines in CSMF accuracy from the full instrument with text of 0.4 %, 0.0 %, and 0.6 % for the adult, child, and neonatal modules, respectively.
We developed a shortened VAI using a systematic approach, and assessed its performance when administered using hand-held electronic tablets and analyzed using Tariff 2.0. The length of a VA questionnaire was shortened by almost 50 % without a significant drop in performance. The shortened VAI developed reduces the burden of time and resources required for data collection and analysis of cause of death data in civil registration systems.
Cause of death (COD) information is essential to guide and inform health policy and priority debates [1]. Ideally, COD data would be based on accurate medical certification and registration of all deaths [2]. However, vital registration systems still function poorly in many countries, particularly in resource-poor settings where mortality rates are higher and accurate cause of death information is most crucial [3]. Verbal autopsy (VA) is now becoming recognized as the only feasible alternative to comprehensive medical certification of deaths in such settings. The World Health Organization has now called for wider use of VA to improve understanding of the causes of mortality and the nature of mortality change in national populations [4].
Although VAs have been incorporated into official data collection systems already in place in countries such as India [5], Brazil [6], Bangladesh [7], and Sri Lanka [8], as well as through the collection of VA samples during national censuses as in Mozambique [9], doubts have remained about the ability of VAs to provide accurate and timely information about the COD in populations. This can be attributed, in large part, to the initial reliance on physician certification of verbal autopsies (PCVA) in demographic and health surveillance research sites. PCVA is time-consuming and expensive, and it is difficult to maintain the quality of cause assignment on a large scale over long periods of time.
These problems, however, can be resolved by introducing automated VA diagnostic methods, which have been shown to out-perform PCVA in terms of their accuracy. They now offer the potential for inexpensive, rapid, and reliable COD assignments for deaths occurring outside of hospitals [10–13].
Current practice in the application of VA is to collect interview information using paper-based verbal autopsy instruments (VAIs), which have been largely derived from VA methods developed for research sites in the 1980s and 1990s [14, 15]. A barrier to their widespread adoption by national registration systems has been the amount of time and, hence, cost needed to conduct interviews and to maintain their quality. For widespread application, a short instrument is needed, but one that still enables automated diagnostic systems to make accurate predictions of causes of death. At the same time, electronic systems for data collection need to replace paper-based systems.
We address these needs in this paper and describe a shortened version of the VAI developed for the Population Health Metrics Research Consortium Gold Standard Verbal Autopsy Validation Study (PHMRC study) [16]. Using a formal empirically-based method to shorten the VAI, we identify the key survey questions and the optimal length of a shortened VAI.
Our general approach was first to establish a rank order of individual question items in the PHMRC VAI in terms of their importance in predicting COD. We did this using the Tariff 2.0 Method [17] to predict the COD for each VA in the PHMRC Gold Standard database and by comparing the predicted COD with the gold standard cause. Second, we reduced the size of the instrument by dropping items in reverse order of their importance. We assessed the predictive performance of the instrument at each stage of item reduction by calculating chance-corrected concordance (CCC) at the level of the individual and cause specific mortality fraction (CSMF) accuracy at the level of the population (see below). Finally, the optimum size of the shortened instrument was determined using a first derivative analysis of the decline in performance as the size of the VAI progressively decreased. We followed the same approach for adults, children, and neonates.
PHMRC gold standard validation study database
The general methodology of the PHMRC study has been described in detail elsewhere [16]. In summary, VAs were collected from six sites in four countries: Andhra Pradesh and Uttar Pradesh in India, Bohol in the Philippines, Mexico City in Mexico, and Dar es Salaam and Pemba Island in Tanzania. Methods were approved by the Internal Review Boards of the University of Washington, Seattle, WA, USA; School of Public Health, University of Queensland, Australia; George Institute for Global Health, Hyderabad, India; National Institute of Public Health, Mexico; Research Institute for Tropical Medicine, Alabang, Metro Manila, Philippines; Muhimbili University, Tanzania; Public Health Laboratory Ivo de Carneri, Tanzania; and CSM Medical University, India. All data were collected with prior informed consent. Gold standard clinical diagnostic criteria for hospital deaths were specified for an initial list of 53 adult, 27 child, and 13 neonatal causes including stillbirths, chosen on the basis of epidemiological criteria and the likely ability of VA to identify the cause (Additional file 1). This was known as the target cause list. Deaths with hospital records fulfilling the gold standard criteria were identified in each of the sites. The PHMRC VAI was used to interview families about the events leading to each of these deaths [16]. Interviewers were blinded to the COD assigned in the hospital. The PHMRC database contains 12,501 verbal autopsies with gold standard diagnoses (7,846 adults, 2,064 children, 1,586 neonates, and 1,005 stillbirths).
The PHMRC VAI includes both closed-ended questions and an open-ended narrative. Questions covered: 1) symptoms of the terminal illness, 2) diagnoses of chronic illnesses obtained from health service providers, 3) risk behaviors (tobacco and alcohol), 4) details of any interactions with health services, and 5) details about the background of the decedent and about the interview itself. Not all of these questions contributed to prediction of the COD. Questions that were converted to binary variables – the necessary basis for Tariff analysis and the prediction of COD – we refer to as question items. Text items were derived from open-ended narrative using a text mining procedure (Text Mining package in R (version 2.14.0) [18]), which identifies keywords and groups words with the same or similar meanings. Performance in this paper is reported as being 1) with text, 2) without text, and 3) with a checklist. The checklist uses only a selected subset of text items as described later.
Tariff 2.0
The Tariff Method is based on a simple additive algorithm that creates a score, or tariff, for each questionnaire item and uses these scores to assign COD [10, 17]. Ideally, an item would have a high tariff for just one COD and a low tariff for all others; the model would then differentiate readily between causes [10]. For example, the item "Decedent suffered drowning" has a strong association with a few causes of death (accidental drowning, homicide, and suicide) and carries high tariffs for those causes. On the other hand, the item "Decedent had a fever" is associated with many different causes of death and carries low tariffs for the causes it is associated with. Tariffs for drowning have high standard deviations, while tariffs for fever have low standard deviations. Items with high standard deviations were considered more important for diagnosis than were tariffs with low standard deviations. To determine their order of importance, items were ranked by standard deviation. This was done separately for each module (adult, child, and neonate).
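As a sketch of the ranking step (assuming the tariff matrix itself has already been computed with the Tariff 2.0 procedure), item importance can be scored as follows; the names are illustrative:

```python
import numpy as np

def rank_items_by_tariff_spread(tariffs, item_names):
    """Rank question items by the standard deviation of their tariffs across
    causes (tariffs: items x causes). Items whose tariffs vary strongly
    between causes ('drowning') outrank items with flat tariff profiles
    ('fever')."""
    spread = np.std(np.asarray(tariffs, dtype=float), axis=1)
    order = np.argsort(spread)[::-1]  # most informative items first
    return [(item_names[k], float(spread[k])) for k in order]
```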
Measurement of performance
Simulated populations
The performance of a VA method in assigning a COD is a function of the true cause of death composition in the study population [19]. Therefore, for the development of a VA diagnostic method or a new VAI it is important to validate the method or instrument in as many populations with different cause compositions as possible. This is made practicable by means of computer simulation: 500 populations with random cause compositions were created based on the PHMRC dataset for the development and validation of the original suite of VA methods [16]. In the present study, every test of performance of different length instruments was done using the same 500 randomly generated populations. The 500 train-test data analysis datasets were generated by holding 75 % of the dataset as "training" data and 25 % as "test" data. Each test dataset was resampled using a Dirichlet distribution to obtain a random CSMF composition for each simulated population. Training data were used to generate the model. Analysis of test data was blinded to the gold standard COD. The accuracy of COD predictions was assessed using the performance metrics. This process is described more fully in Additional file 2.
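A minimal sketch of the resampling design follows, assuming the test records are grouped by gold standard cause and that each simulated population contains about 1,000 deaths (an illustrative size):

```python
import numpy as np

def simulate_populations(test_records_by_cause, n_pops=500, pop_size=1000,
                         seed=0):
    """Draw a random CSMF vector from a flat Dirichlet for each simulated
    population and resample test records (with replacement) to match it.
    Returns (csmf, sampled record references) pairs."""
    rng = np.random.default_rng(seed)
    causes = sorted(test_records_by_cause)
    populations = []
    for _ in range(n_pops):
        csmf = rng.dirichlet(np.ones(len(causes)))
        sample = []
        for cause, fraction in zip(causes, csmf):
            n = int(round(fraction * pop_size))
            n_avail = len(test_records_by_cause[cause])
            sample.extend((cause, int(i))
                          for i in rng.integers(0, n_avail, size=n))
        populations.append((dict(zip(causes, csmf)), sample))
    return populations
```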
For policy, research, and surveillance it is important to be able to quantify the actual performance of a VA method in predicting the COD, correcting for chance at both individual and population levels. We assessed performance of the progressively shortened VAI using Cohen's Kappa, CCC, sensitivity, specificity and CSMF accuracy.
CCC measures sensitivity adjusted for chance and was used to assess the extent to which Tariff 2.0 correctly predicted an individual cause of death when applied to the shortened VAI. A perfect prediction has CCC equal to one, while a random allocation would have, on average, CCC equal to zero. CCC is calculated as follows:
$$ CC{C}_j=\frac{\left(\frac{T{P}_j}{T{P}_j+F{N}_j}\right)-\left(\frac{1}{N}\right)}{1-\left(\frac{1}{N}\right)} $$
where TPj is true positives, or the number of decedents with gold standard cause j assigned correctly to cause j, FNj is false negatives, or the number of decedents incorrectly assigned to cause j, and N is the number of causes analyzed. The sum of TPj and FNj is the total number of deaths due to cause j.
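In code, the per-cause CCC is a direct transcription of the formula above (a minimal sketch):

```python
def chance_corrected_concordance(true_positives, false_negatives, n_causes):
    """CCC for one cause: sensitivity TP/(TP+FN), corrected for the 1/N
    agreement expected from randomly assigning one of N causes."""
    sensitivity = true_positives / (true_positives + false_negatives)
    return (sensitivity - 1.0 / n_causes) / (1.0 - 1.0 / n_causes)
```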
Performance was also measured at the population level using mean CSMF accuracy across the 500 cause compositions, calculated as
$$ \mathrm{CSMF}\ \mathrm{accuracy}=1-\frac{{\displaystyle {\sum}_{j=1}^k\left|{\mathrm{CSMF}}_j^{true}-{\mathrm{CSMF}}_j^{pred}\right|}}{2\left(1-\mathrm{Minimum}\left({\mathrm{CSMF}}_j^{true}\right)\right)} $$
where the numerator in the calculation is the sum of the absolute error for all k causes between the true CSMF and estimated CSMF, and the denominator is the maximum possible error across all of the causes. CSMF accuracy will be one when the CSMF for every cause is predicted with no error.
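CSMF accuracy is likewise a short computation over the true and predicted cause fractions (a minimal sketch):

```python
import numpy as np

def csmf_accuracy(csmf_true, csmf_pred):
    """1 minus the total absolute CSMF error, scaled by the maximum possible
    error 2*(1 - min(true CSMF)); equals 1 for a perfect prediction."""
    csmf_true = np.asarray(csmf_true, dtype=float)
    csmf_pred = np.asarray(csmf_pred, dtype=float)
    total_error = np.abs(csmf_true - csmf_pred).sum()
    return 1.0 - total_error / (2.0 * (1.0 - csmf_true.min()))
```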
Developing a shortened verbal autopsy instrument
To begin, we removed questions about the background of decedents from the full PHMRC VAI. We then turned the remaining questions into binary indicators, or items, as described above. Thus, 183 adult, 127 child, and 149 neonatal questions were converted into 170, 80, and 117 question items, respectively. Next, we ranked these items (1–170, 1–80, and 1–117) according to their importance, as defined by the standard deviation of their tariffs. We then systematically reduced the size of the instrument, dropping 10 question items at a time in reverse order of their importance, as ranked by their tariff standard deviations. With each successive reduction in the number of items, we measured both CCC and CSMF accuracy using the 500 simulated populations as described above. We analyzed the performance of question items with and without text to assess the importance of text as the number of question items decreased. We then used a cubic spline to interpolate between these CCC and CSMF accuracy values to derive a continuous performance curve. Based on this curve, we identified the points (i.e., residual number of items) where each of the metrics (CCC with text, CCC without text, CSMF accuracy with text, CSMF accuracy without text) began to decrease at a significantly negative rate. This was done by taking the first derivative of the continuous performance curves for both CCC and CSMF accuracy. The optimum size of the shortened VAI for each of the three age groups was determined by the number of items that immediately preceded any significant decrease for at least one of these four metrics. These items, which had been ranked in order of importance, formed the basis for the final shortened VAI.
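A sketch of the elbow-finding step is shown below, assuming SciPy's CubicSpline for interpolation and an illustrative slope tolerance for what counts as a "significantly negative" decline:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def optimal_item_count(n_items, accuracy, slope_tolerance=1e-3):
    """Interpolate accuracy (CCC or CSMF accuracy) against the number of
    retained items and return the smallest count within the plateau, i.e.
    the point just before removing further items makes accuracy fall off.
    n_items must be ascending; slope_tolerance is an assumed cut-off."""
    xs = np.asarray(n_items, dtype=float)
    spline = CubicSpline(xs, np.asarray(accuracy, dtype=float))
    grid = np.linspace(xs.min(), xs.max(), 1000)
    slope = spline.derivative()(grid)
    plateau = grid[slope < slope_tolerance]  # adding items no longer helps
    return float(plateau.min()) if plateau.size else float(xs.max())
```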
To complete the VAI, we also added questions that would enable the shortened version to function as a stand-alone instrument in a survey. In particular, we inserted questions to preserve the sense and flow of the instrument: for example, an important question was, "Did [name] cough blood?" but this needed to be preceded by the question, "Did [name] have a cough?" We also retained questions relevant to health service utilization and decedent background.
We then piloted the shortened VAI in three sites in the Philippines, Sri Lanka, and Bangladesh to assess its logic and applicability using Android tablets and the open source software, Open Data Kit (ODK) [20].
Checklist for open narrative
The Tariff Method uses a set of the top-ranked 40 items for each cause prediction based on standard deviation of each item's tariff [10]. In Tariff 2.0, 43 % of items used in the prediction of all 34 causes in adults were text items derived from open narrative that had been translated into English [17]. We, therefore, concluded that it was critical that we include open narrative in the shortened form of the instrument. We found, however, that we had failed to take into account the difficulties that interviewers would experience in entering open narrative directly onto the tablet. This was a consequence not only of shifting between languages but also between Bengali and Sinhala scripts and the Latin script used for English. During the field trial, some field staff had taken notes on paper, which they transcribed in the office to record the open narrative section. This process took more time and effort than any other component of data management and was a potential source of error. Such difficulties were compounded by the limited character sets for non-Latin scripts on the tablets and the much more extensive training required to enter lengthy text data into a tablet. We, therefore, developed a checklist of keywords to use in the open narrative rather than having interviewers record and transcribe an entire conversation.
This checklist comprised a list of words that were endorsed by the interviewer when mentioned by the respondent in describing the circumstances surrounding the death. These words could be converted directly into English and subjected to text mining.
Using the 500 simulated populations we measured the independent effect on performance of the addition of single text items to the shortened VAI: i.e., on CCC overall, on CCC by cause, and on CSMF accuracy (all question items plus a single text item). This was done separately for the adult, child, and neonate modules. The length of the final checklist for each of the three modules was decided on practical grounds: the checklist needed to fit on a single screen and could not have more words than could easily be remembered by the interviewer during the conversation. It was thus limited to a maximum of 12 text items. The final selection was based both on the items' contributions to performance and on their significance for the diagnosis of diseases of public health importance.
The full PHMRC VAI had 183, 127, and 149 questions for adult, child, and neonatal deaths, respectively [21]. The shortened PHMRC VAI developed through this analysis had 109, 69, and 67 questions for adult, child, and neonatal deaths, respectively, representing a decrease in the total number of questions of 40-55 % (Table 1). The paper-based version of the shortened PHMRC VAI is given in Additional file 3. The electronic version, which was created using ODK for installation on Android devices, can be obtained upon request.
Table 1 Characteristics of shortened PHMRC verbal autopsy questionnaire (VAI)
The reduced VAI analyzed with the Tariff 2.0 method ascertains causes of death in 34 mutually exclusive, collectively exhaustive categories for adults, 21 for children, and 6 for neonates, as does the original PHMRC VAI. As would be expected, in terms of CCC and CSMF accuracy, the predictive performance of successively shortened versions of the questionnaire declined as question items were systematically removed (Fig. 1a). Performance metrics are shown in Table 2 for questionnaires both with and without text. Sensitivity and specificity for each COD with the long and shortened version of the questionnaire are presented in Additional files 4 and 5. For example, the shortened child module reduced to 69 questions had a CSMF accuracy of 78.3 % when all text items were included and 74.5 % when text items were excluded. CCC for the shortened child module was 52.5 % with text and 44.5 % without. These absolute differences of 3.8 % and 8.0 %, respectively, in diagnostic accuracy are reasonably substantial and continued to increase more or less monotonically as the number of question items was reduced, with the notable exception of CSMF accuracy for adult and neonate deaths without text items in the VAI.
a: Decrease in CSMF accuracy and CCC with progressive reduction in the number of question items for each age-specific module, with and without text items. b: First derivative of the predictive performance curves for CCC and CSMF accuracy, with and without text items. CSMF cause specific mortality fraction, CCC chance-corrected concordance
Table 2 Chance-corrected concordance and CSMF accuracy for full Population Health Metrics Research Consortium verbal autopsy instrument (PHMRC VAI) as compared to the shortened PHMRC VAI by type of text items included in the analysis
Perhaps a better way to summarize the decline in predictive performance with a progressively shorter VAI is to examine the first derivative at different numbers of question items (Fig. 1b). Since the number of question items for the shortened instrument was chosen to maintain performance in terms of CCC and CSMF accuracy, both with and without text items, the first derivative of the curve of performance vs. the number of question items should not drop significantly below zero. Applying this criterion, we identified the optimum number of question items as 90, 60, and 50 for adults, children, and neonates, respectively. This translated into 91, 50, and 48 questions, respectively. It should be noted that significant drop-off in performance of the neonatal module was not seen until there were only about 25 to 30 question items remaining. This reflects the shorter list of target causes for neonates.
As described in the methods section, questions that would enable the shortened version to function as a stand-alone instrument in a survey, together with questions of importance for health service utilization and decedent background, were added back into the instrument. This process increased the number of questions for the short VAI to 109, 69, and 67 for adults, children, and neonates, respectively.
Comparative performance
A more detailed assessment of the comparative performance of the shortened vs. the longer (original) PHMRC instrument for various text inclusions is given in Table 2. Within modules, CSMF accuracy varied little between different versions of the instrument. The shortened instrument, with text, showed only minor declines in CSMF accuracy from the full instrument with text, of 0.4 %, 0.0 %, and 0.6 % for the adult, child, and neonatal modules, respectively. The short instrument with checklist performed slightly worse than the shortened instrument with text, with declines of 0.4 % (adults), 0.3 % (children), and 0.1 % (neonates). The short instrument without text items showed non-significant changes from the full instrument.
Performance at the individual level overall, as assessed by CCC, shows more variation: the average drop in CCC by using the shortened version, with text, was 0.63 %. In the case of short and long versions without text, the average decline in accuracy across the three modules was 1.0 %. This difference was greatly reduced when the checklist was applied.
More generally, the addition of the checklist (Table 3) increased CSMF accuracy and mean CCC by an average of 2.0 % and 4.6 %, respectively. The impact of the checklist on performance was most substantial for the child module (Table 2).
Table 3 List of keywords used as a checklist in the open narrative for the adult, child, and neonatal modules
More specifically, the addition of keywords with the checklist had a significant impact on performance. For example, in adults, the mean CCC declined from 50.5 % in the full instrument with text to 43.3 % in the shortened instrument without text (see also Table 2). Addition of the single text item "malaria" increased overall CCC from 43.3 % to 43.7 %. At the same time, CCC for malaria increased from 34.1 % to 42.3 %. Addition of the text item, "malaria", also increased CSMF accuracy from 74.6 % in the shortened instrument with no text to 74.7 %.
Addition of the checklist to the shortened instrument without text increases CCC for pneumonia, suicide, and tuberculosis to levels at or above those obtained from the full instrument with text. CCC for cirrhosis and malaria is substantially increased from the short instrument with no text. More detail about the effect of applying various combinations of the shortened questionnaire with text and checklist on CCC for a comprehensive list of causes, for adults, children, and neonates, can be found in Additional file 6.
It should be noted that the tariff for a text item was frequently higher than the tariff for the corresponding question item. This reflected lower endorsement rates for text items than for question items. For example, 49.3 % of interviewees overall reported the presence of fever in response to a question but only 13.0 % of all interviewees reported the presence of fever in open-ended narrative. On the other hand, in cases where the gold standard cause of death was malaria, 86.0 % of interviewees reported the presence of fever in response to the question and 51.0 % reported the presence of fever in open-ended narrative. As a consequence, the text item "fever" scored a tariff of 4.0 for malaria but the corresponding question item scored a tariff of only 1.5. A symptom elicited without prompting was more important for diagnosis than a symptom elicited with prompting.
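Numerically, the tariff behaves like a robust z-score of an item's endorsement rate for a cause against its endorsement across all causes: the Tariff Method literature defines it as the rate minus the median across causes, divided by the interquartile range [10]. The sketch below assumes that definition, and the endorsement rates are invented.

```python
import numpy as np

def tariff(endorsement_by_cause, cause):
    """Robust z-score of an item's endorsement rate for one cause."""
    rates = np.array(list(endorsement_by_cause.values()))
    q1, q3 = np.percentile(rates, [25, 75])
    return (endorsement_by_cause[cause] - np.median(rates)) / (q3 - q1)

# Invented endorsement rates for "fever" across four causes of death
fever_question = {"malaria": 0.86, "pneumonia": 0.70, "stroke": 0.20, "injury": 0.10}
fever_text = {"malaria": 0.51, "pneumonia": 0.20, "stroke": 0.03, "injury": 0.02}

# The text item is endorsed far less often overall, so malaria stands out more
print(round(tariff(fever_question, "malaria"), 2))
print(round(tariff(fever_text, "malaria"), 2))
```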
It is difficult to see how substantial improvements in obtaining information about COD patterns in resource-poor populations can be achieved through expanding medical certification. Accurate certification depends on the physician being intimately familiar with the decedent's clinical history and/or the details of the terminal illness. In resource-poor countries, only a small proportion of decedents have had access to a physician; this situation is unlikely to change in the medium term. If a physician has not been directly involved in the management of a patient, the COD on the person's medical certificate has little more value than the COD obtained from an unstructured VA (a VA that does not follow a structured and validated format). The routine use of VA with known performance characteristics for all deaths that have not been attended by a physician is the only cost-effective means of obtaining information about cause of death patterns and about how these patterns are changing.
In the context of the PHMRC validation study, in which VAs and diagnosis methods were compared with gold standard hospital deaths, automated computer diagnosis of VAs was shown to be more accurate than PCVA [22]. Reliance on PCVA is inefficient and leads to unnecessary competition for resources with clinical services, often leading to long delays in diagnosing VAs. For example, the VA data collected in India's Sample Registration System starting in 2002 have still not been reported or released because of delays to physician coding [23]. Outstanding barriers to widespread use are the length of time required to administer the full-length VAI (50–70 minutes) and the resources required for data entry of paper-based records of interview.
The Tariff Method was developed and validated using hospital gold standard deaths [10]. The Method is additive in the sense that it sums tariff scores by cause and by symptom to arrive at the most probable cause for an individual death. The use of hospital gold standard deaths as reference ensured that a full range of symptoms was available for the development of the tariff scores. The process of item-reduction was designed to remove from the questionnaire those symptoms which were not contributing significantly to the summed tariff scores. Tariff 2.0 sets cut-off points to establish whether sufficient symptoms are available to assign a COD to an individual death [17]; deaths that fall below these cut-offs are assigned to an indeterminate category. If the symptom list in the item-reduced instrument were too short, this would manifest itself in an increase in the size of the indeterminate category.
In this paper, we have established the theoretical basis for the validity of a shortened VAI administered by means of hand-held electronic tablets. Although the shortening of an instrument may lead to a decrease in performance and some loss of specificity, at least for rare diseases, we have demonstrated by formal statistical methods applied to validation datasets, where the true COD is known, that it is possible to reduce the length of a VA questionnaire by 40 % or more without a significant drop in performance. The performance characteristics of the shortened VAI are now established. We have also shown that many of the advantages of an open-ended narrative in improving performance can be retained by use of an on-screen checklist. The application of this checklist will require special training of fieldworkers. It is worth highlighting that symptoms mentioned in the open narrative may have a different Tariff score for a given cause of death than their counterparts in the structured questionnaire. This may reflect the fact that salient information is more readily mentioned spontaneously than recalled in response to a prompt in a questionnaire. The checklist of keywords from the open narrative is, therefore, an important innovation of this shortened questionnaire. It reduces the burden on the interviewer by registering keywords instead of recording the answer verbatim, while still capturing key items that contribute substantially to determining the cause of death.
We consider the greatest utility of the shortened VAI will be for the collection of COD data in civil registration systems and for the calculation of CSMFs in populations. To put our results in perspective: a CSMF accuracy of 76.2 % in adults, using the tablet with check list for data entry, compares with a reported CSMF accuracy of 82 % for medical certification of adult deaths in Mexican teaching hospitals [24]; the former equates to an adjusted accuracy of 32.8 % and the latter to an adjusted accuracy of 50 %.
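The adjustment referred to here follows the chance correction of [25], which rescales CSMF accuracy so that zero corresponds to random cause assignment. Assuming the chance floor of $1-e^{-1}\approx 0.632$ derived there, the correction takes the form

$$ \mathrm{CCCSMF\ accuracy}=\frac{\mathrm{CSMF\ accuracy}-0.632}{1-0.632} $$

so that, for example, a CSMF accuracy of 0.82 rescales to roughly 0.5, consistent with the adjusted figure quoted above for medical certification.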
We have set the number of question items in the shortened VAI at 90 items for adults, 60 for children, and 50 for neonates. We justified these thresholds for each questionnaire by an analysis of first derivatives, identifying the precise point at which they begin to deviate significantly from zero (signifying no apparent slope in the performance curve). In the case of the neonatal module, the suggested threshold was around 30 items. We took a conservative approach, however, and increased this to 50 items. A limited range of symptoms applies to conditions that cause death in neonates and we wished to avoid eliminating items that might add important contextual information for automated diagnosis.
CSMF accuracy declines less rapidly than does CCC with progressive reduction in the number of items. This can be attributed to the finding that random allocation of deaths to different causes would result in CSMF accuracy of at least 63 % [25]. We have, however, chosen to show absolute values for CSMF accuracy, as it is this measure that will influence the use of CSMFs derived from VAs as the basis for public health policy development.
We would argue that the different forms of the VAI and the different methods of data collection and analysis need to be tailored for use in particular circumstances. Here, we have described an approach which should be invaluable for the collection of vital statistics.
In this study, we developed a shortened VAI using a systematic approach and assessed its performance when administered by means of hand-held electronic tablets and analyzed using the Tariff 2.0 automated method. We demonstrated that, where the true cause of death is known, it is possible to reduce the length of a VA questionnaire by 40 % or more without a significant drop in performance. We have also shown that many of the advantages of the open-ended narrative in improving performance can be retained by use of an on-screen checklist. The reduced VAI has great utility for estimating COD data in civil registration and for calculating CSMFs in populations, reducing the burden of time and resources required for data collection and analysis.
VA questionnaires have been constructed to elicit as much useful diagnostic information as possible for as many causes as possible for which symptomatic information is likely to be meaningful. Very little attention has been paid to the length of the interview or to the possible effects of interviewer/interviewee fatigue. If VA methods are to be useful beyond research settings to provide the essential intelligence on population cause of death patterns that governments and donors need, then it is critically important that they be rapidly integrated into national civil registration and vital statistics systems and routinely applied to all out-of-hospital deaths that are reported. New automated diagnostic methods and data collection platforms using tablets or mobile phones are now available for widespread use in civil registration systems. That utility will be further enhanced if shorter VA questionnaires, such as the one reported here, are applied to cut interview time in half without any loss of diagnostic accuracy.
CCC: Chance-corrected concordance
CSMF: Cause-specific mortality fraction
PCVA: Physician certified verbal autopsy
PHMRC: Population Health Metrics Research Consortium
VA: Verbal autopsy
VAI: Verbal autopsy instrument
Mathers CD, Fat DM, Inoue M, Rao C, Lopez AD. Counting the dead and what they died from: an assessment of the global status of cause of death data. Bull World Health Organ. 2005;83:171–7.
Mahapatra P, Shibuya K, Lopez AD, Coullare E, Notzon FC, Rao C, et al. Civil registration systems and vital statistics: successes and missed opportunities. Lancet. 2007;370:1653–63.
Phillips D, Lozano R, Naghavi M, Atkinson C, Gonzalez-Medina D, Mikkelsen L, et al. A composite metric for assessing data on mortality and causes of death: the vital statistics performance index. Popul Health Metr. 2014;12:14.
World Health Organization. Verbal autopsy standards: ascertaining and attributing cause of death. Geneva, Switzerland: WHO; 2007. ISBN 978 92 4 154721 (NLM classification: WA 900).
Indian Council of Medical Research (ICMR). Study on causes of death by verbal autopsy in India. New Delhi: ICMR; 2009.
Franca E, Campos D, Guimaraes MD, Souza MF. Use of verbal autopsy in a national health information system: effects of the investigation of ill-defined causes of death on proportional mortality due to injury in small municipalities in Brazil. Popul Health Metr. 2011;9:39.
Baqui AH, Black RE, Arifeen SE, Hill K, Mitra SN, al Sabir A. Causes of childhood deaths in Bangladesh: results of a nationwide verbal autopsy study. Bull World Health Organ. 1998;76(2):161–71.
Dharmaratne SD, Jayasuriya RJ, Perera BY, Gunesekera EM, Sathasivayyar A. Opportunities and challenges for verbal autopsy in the national Death Registration System in Sri Lanka: past and future. Popul Health Metr. 2011;9:21. doi:10.1186/1478-7954-9-21.
Mbofana F, Lewis R, West L, Mazive E, Cummings S, Mswia R. Using verbal autopsy in a post-census mortality survey to capture causes of death in Mozambique, 2006–2007. Bali, Indonesia: Global Congress on Verbal Autopsy: State of Science; 2011.
James SL, Flaxman AD, Murray CJ. Performance of the Tariff Method: validation of a simple additive algorithm for analysis of verbal autopsies. Popul Health Metr. 2011;9:31.
Flaxman AD, Vahdatpour A, Green S, James SL, Murray CJ. Random forests for verbal autopsy analysis: multisite validation study using clinical diagnostic gold standards. Popul Health Metr. 2011;9:29.
Murray CJ, Lopez AD, Feehan DM, Peter ST, Yang G. Validation of the symptom pattern method for analyzing verbal autopsy data. PLoS Med. 2007;4:e327.
King G, Lu Y. Verbal autopsy methods with multiple causes of death. Stat Sci. 2008;23:78–81.
Ronsmans C, Vanneste AM, Chakraborty J, Van Ginneken J. A comparison of three verbal autopsy methods to ascertain levels and causes of maternal deaths in Matlab, Bangladesh. Int J Epidemiol. 1998;27:660–6.
Chandramohan D, Maude GH, Rodrigues LC, Hayes RJ. Verbal autopsies for adult deaths: issues in their development and validation. Int J Epidemiol. 1994;23:213–22.
Murray CJ, Lopez AD, Black R, Ahuja R, Ali SM, Baqui A, et al. Population health metrics research consortium gold standard verbal autopsy validation study: design, implementation, and development of analysis datasets. Popul Health Metr. 2011;9:27.
Serina P, Riley I, Stewart A, James S, Flaxman A, Lozano R, et al. Improving performance of the Tariff Method for assigning causes of death to verbal autopsies. BMC Med. 2015. doi:10.1186/s12916-015-0527-9.
Feinerer I, Hornik K, Meyer D. Text mining infrastructure in R. J Stat Softw. 2008;25(5):1–54. http://www.jstatsoft.org/index.php/jss/article/view/v025i05. Accessed 25 Nov 2015.
Murray CJ, Lozano R, Flaxman AD, Vahdatpour A, Lopez AD. Robust metrics for assessing the performance of different verbal autopsy cause assignment methods in validation studies. Popul Health Metr. 2011;9:28.
Hartung C, Lerer A, Anokwa Y, Tseng C, Brunette W, Borriello G. Open Data Kit: tools to build information services for developing regions. In: Proceedings of the 4th ACM/IEEE International Conference on Information and Communication Technologies and Development. New York, NY: ACM; 2010. p. 18:1–18:12.
Population Health Metrics Research Consortium | Institute for Health Metrics and Evaluation. http://www.healthmetricsandevaluation.org/research/project/population-health-metrics-research-consortium Accessed on June 30, 2015.
Murray CJ, Lozano R, Flaxman AD, Serina P, Phillips D, Stewart A, et al. Using verbal autopsy to measure causes of death: the comparative performance of existing methods. BMC Med. 2014;12:5.
Aleksandrowicz L, Malhotra V, Dikshit R, Gupta PC, Kumar R, Sheth J, et al. Performance criteria for verbal autopsy‐based systems to estimate national causes of death: development and application to the Indian Million Death Study. BMC Med. 2014;12:21.
Hernández B, Ramírez-Villalobos D, Romero M, Gómez S, Atkinson C, Lozano R. Assessing quality of medical death certification: concordance between gold standard diagnosis and underlying cause of death in selected Mexican hospitals. Popul Health Metr. 2011;9:38.
Flaxman AD, Serina PT, Hernandez B, Murray CJ, Riley I, Lopez AD. Measuring causes of death in populations: a new metric that corrects cause-specific mortality fractions for chance. Popul Health Metr. 2015;13:2.
The authors thank our collaborators Dr. Osvaldo González La Rivere, Dra. Araceli Martínez González, Dr. Miguel Ángel Martínez Guzmán, Dr. Argemiro José Genes Narr, Dr. Antonio Manrique Martin, Dr. Adrián Ramírez Alvear, Dr. Benjamín Méndez Pinto, Dr. Enrique Garduño Salvador, Dr. Rogelio Pérez Padilla, Dra. Cecilia García Sancho, Dr. Mauricio Moreno Portillo, and Dr. Eduardo Barragán Padilla. The authors also thank the Secretary of Health of the Federal District in Mexico City, Dr. Armando Ahued, and the coordinator of high specialty hospitals of the Ministry of Health, Dr. Bernardo Bidart, for their help in accessing medical records needed for this study.
This analysis was made possible by the series of studies produced by the Population Health Metrics Research Consortium. The work was funded by a grant from the Bill & Melinda Gates Foundation through the Grand Challenges in Global Health Initiative. This work was also supported by a National Health and Medical Research Council project grant, Improving methods to measure comparable mortality by cause (Grant no. 631494). The funders had no role in study design, data collection and analysis, interpretation of data, decision to publish, or preparation of the manuscript. The corresponding author had full access to all data analyzed and had final responsibility for the decision to submit this original research paper for publication.
Institute for Health Metrics and Evaluation, University of Washington, 2301 Fifth Ave., Suite 600, Seattle, WA, 98121, USA
Peter Serina, Andrea Stewart, Abraham D. Flaxman, Rafael Lozano, Meghan D Mooney, Richard Luning, Bernardo Hernandez, Charles Atkinson, Lalit Dandona, Michael Freeman, Spencer L. James, Summer Lockett Ohno, David Phillips, Kelsey Pierce, Alireza Vadhatpour & Christopher J. L. Murray
University of Queensland, School of Public Health, Level 2 Public Health Building School of Public Health, Herston Road, Herston, QLD, 4006, Australia
Ian Riley & Rasika Rampatige
National Institute of Public Health, Av. Universidad 655, Buena Vista, 62100, Cuernavaca, Morelos, Mexico
Rafael Lozano, Dolores Ramirez-Villalobos & Minerva Romero
Institute for International Programs, Johns Hopkins University, Bloomberg School of Public Health, 615 N Wolfe St., Baltimore, MD, 21205, USA
Robert Black, Abdulla H. Baqui, Usha Dhingra, Arup Dutta, Henry D. Kalter & Sunil Sazawal
Community Empowerment Lab, Shivgarh, India
Ramesh Ahuja, Aarti Kumar & Vishwajeet Kumar
The INCLEN Trust International, New Delhi, India
International Center for Diarrhoeal Disease Research, Dhaka, Bangladesh
Nurul Alam, Sayed Saidul Alam & Peter K. Streatfield
Public Health Laboratory-IdC, P.O.BOX 122, Wawi, Chake Chake, Pemba, Zanzibar, Tanzania
Said Mohammed Ali, Usha Dhingra, Arup Dutta & Sunil Sazawal
Public Health Foundation of India, Plot 47, Sector 44, Gurgaon, 122002, National Capital Region, India
Lalit Dandona & Rakhi Dandona
Malaria Consortium Cambodia, 113 Mao Tse Toung, Phnom Penh, Cambodia
Emily Dantzer
Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, 94304, USA
Gary L Darmstadt
CSM Medical University, Shah Mina Road, Chowk Lucknow, Uttar Pradesh, 226003, India
Vinita Das & Rajendra Prasad
Harvard School of Public Health, 677 Huntington Avenue, Boston, MA, 02115-6018, USA
Wafaie Fawzi
WHO Collaborating Centre for Public Health Workforce Development, National Institute of Health Sciences, Kalutara, Sri Lanka
Saman Gamage, Dilip Hensman & Nandalal Wijesekara
Ipas, Chapel Hill, NC, USA
Sara Gomez
The George Institute for Global Health, Sydney, Australia
Rohina Joshi
Research Institute for Tropical Medicine, Corporate Ave., Muntinlupa City, 1781, Philippines
Marilla Lucero, Hazel Remolador, Diozele Sanvictores & Veronica Tallo
Cornell University, Division of Nutritional Sciences, 314 Savage Hall, Ithaca, NY, 14853, USA
Saurabh Mehta
The George Institute for Global Health, University of Sydney and Royal Prince Albert Hospital, Sydney, Australia
Bruce Neal
Imperial College London, London, UK
The George Institute for Global Health, Hyderabad, India
Devarsetty Praveen
Muhimbili University of Health and Allied Sciences, United Nations Rd., Dar es Salaam, Tanzania
Zul Premji & Mwanaidi Said
University of Melbourne, School of Population and Global Health, Building 379, 207 Bouverie St., Parkville, 3010, VIC, Australia
Hafizur R. Chowdhury & Alan D. Lopez
Correspondence to Alan D. Lopez.
PS conducted analysis and prepared the first draft; IR and AF participated in design of the study, data collection, analysis, and draft preparation; AS, CA, MF, SJ, and DP conducted analysis; RL and PKS participated in design of the study, data collection, and analysis; BH participated in data collection, analysis, and draft preparation; MM participated in the discussion of results; BN, RB, WF, DH, and HK participated in the design of the study; RA, NA, AK, VK, SSA, SMA, SM, RP, DP, ZP, DR, MR, and MS participated in data collection; AB, ED, and GD participated in the design of the study and discussion of results; HC, LD, RD, SO, and KP worked on data collection and discussion of results; VD, UD, AD, SG, SGo, RJ, RLu, ML, RR, HR, DS, SS, VT, AV, and NW participated in data collection and analysis. CJLM and ADL participated in design of the study, analysis, and draft preparation. All authors read and approved the final manuscript.
Gold standard clinical diagnosis criteria. (DOCX 117 kb)
Dataset composition. (DOCX 120 kb)
Shortened PHMRC VAI. (DOCX 133 kb)
Sensitivity by cause for each module in full and shortened PHMRC VAI. (XLSX 15 kb)
Specificity by cause for each module in full and shortened PHMRC VAI. (XLSX 14 kb)
Chance-corrected concordance by cause for full Population Health Metrics Research Consortium verbal autopsy instrument (PHMRC VAI) as compared to the shortened PHMRC VAI by type of text items included in the analysis. (XLSX 15 kb)
Serina, P., Riley, I., Stewart, A. et al. A shortened verbal autopsy instrument for use in routine mortality surveillance systems. BMC Med 13, 302 (2015). https://doi.org/10.1186/s12916-015-0528-8
Keywords: Verbal autopsy questionnaire; Mortality surveillance
October 2017, 10(5): 1133-1148. doi: 10.3934/dcdss.2017061
Lyapunov-type inequalities and solvability of second-order ODEs across multi-resonance
He Zhang a, Xue Yang a,b, and Yong Li a,b,c
a. College of Mathematics, Jilin University, Changchun 130012, China
b. School of Mathematics and Statistics and Center for Mathematics and Interdisciplinary Sciences, Northeast Normal University, Changchun 130024, China
c. State Key Laboratory of Automotive Simulation and Control, Jilin University, Changchun 130012, China
* Corresponding author: [email protected]
Received: October 2016; Revised: November 2016; Published: June 2017
Fund Project: This work was completed with the support of National Basic Research Program of China Grant 2013CB834100, NSFC Grant 11571065, NSFC Grant 11171132 and NSFC Grant 11201173
We present some new Lyapunov-type inequalities for boundary value problems of the form $y''+u(x)y=0$, $y(0)=0=y(1)$, where $-A\le u(x)\le B$ and there are many resonance points lying inside the interval $[-A, B]$. The classical Lyapunov's inequality and its reverse are improved by using Pontryagin's maximum principle. As applications, we establish two readily verifiable unique solvability criteria for general $u(x)$. Some relevant examples are given to illustrate our results. Variants of Lyapunov-type inequalities for nonlinear BVPs are discussed at the end of the paper.
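For reference, the classical inequality being improved states that if the boundary value problem above admits a nontrivial solution, then

$$ \int_0^1 \lvert u(x)\rvert\,dx>4, $$

the constant arising as $4/(b-a)$ for a general interval $[a,b]$; here $b-a=1$.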
Keywords: Lyapunov-type inequalities, Pontryagin's maximum principle, across multi-resonance, unique solvability.
Mathematics Subject Classification: Primary: 34C10, 34B15.
Citation: He Zhang, Xue Yang, Yong Li. Lyapunov-type inequalities and solvability of second-order ODEs across multi-resonance. Discrete & Continuous Dynamical Systems - S, 2017, 10 (5) : 1133-1148. doi: 10.3934/dcdss.2017061
[1] S. R. Bernfeld and V. Lakshmikantham, An Introduction to Nonlinear Boundary Value Problems, Elsevier, 1974.
[2] G. Borg, On a Liapounoff criterion of stability, American Journal of Mathematics, 71 (1949), 67-70. doi: 10.2307/2372093.
[3] A. Cañada, J. A. Montero and S. Villegas, Lyapunov-type inequalities and Neumann boundary value problems at resonance, Math. Inequal. Appl., 8 (2005), 459-475. doi: 10.7153/mia-08-42.
[4] A. Cañada, J. A. Montero and S. Villegas, Lyapunov inequalities for partial differential equations, Journal of Functional Analysis, 237 (2006), 176-193. doi: 10.1016/j.jfa.2005.12.011.
[5] A. Cañada, J. A. Montero and S. Villegas, Lyapunov-type inequalities for differential equations, Mediterranean Journal of Mathematics, 3 (2006), 177-187. doi: 10.1007/s00009-006-0071-0.
[6] A. Cañada and S. Villegas, Optimal Lyapunov inequalities for disfocality and Neumann boundary conditions using $L^p$ norms, Discrete Contin. Dyn. Syst. Ser. A, 20 (2008), 877-888. doi: 10.3934/dcds.2008.20.877.
[7] A. Cañada and S. Villegas, Lyapunov inequalities for Neumann boundary conditions at higher eigenvalues, J. Eur. Math. Soc. (JEMS), 12 (2010), 163-178. doi: 10.4171/JEMS/193.
[8] A. Cañada and S. Villegas, Lyapunov inequalities for partial differential equations at radial higher eigenvalues, Discrete Contin. Dyn. Syst., 33 (2013), 111-122. doi: 10.3934/dcds.2013.33.111.
[9] X. Chang and Q. Huang, Two-point boundary value problems for Duffing equations across resonance, Journal of Optimization Theory and Applications, 140 (2009), 419-430. doi: 10.1007/s10957-008-9461-8.
[10] S.-S. Cheng, Lyapunov inequalities for differential and difference equations, Fasc. Math., 23 (1991), 25-41.
[11] S. B. Eliason, A Lyapunov inequality for a certain second order non-linear differential equation, Journal of the London Mathematical Society, 2 (1970), 461-466. doi: 10.1112/jlms/2.Part_3.461.
[12] B. Harris and Q. Kong, On the oscillation of differential equations with an oscillatory coefficient, Transactions of the American Mathematical Society, 347 (1995), 1831-1839. doi: 10.1090/S0002-9947-1995-1283552-4.
[13] P. Hartman, Ordinary Differential Equations, Birkhäuser, Boston, 1982.
[14] J. Henderson, Best interval lengths for boundary value problems for third order Lipschitz equations, SIAM Journal on Mathematical Analysis, 18 (1987), 293-305. doi: 10.1137/0518023.
[15] J. Henderson, Optimal interval lengths for nonlocal boundary value problems for second order Lipschitz equations, Communications in Applied Analysis, 15 (2011), 475-482.
[16] M. G. Krein, On certain problems on the maximum and minimum of characteristic values and on the Lyapunov zones of stability, Amer. Math. Soc. Transl., 1 (1955), 163-187. doi: 10.1090/trans2/001/08.
[17] M. Lees, Discrete methods for nonlinear two-point boundary value problems, Numerical Solution of Partial Differential Equations, 1 (1966), 59-72.
[18] Y. Li and H. Wang, Neumann problems for second order ordinary differential equations across resonance, Zeitschrift für angewandte Mathematik und Physik ZAMP, 46 (1995), 393-406. doi: 10.1007/BF01003558.
[19] A. Liapounoff, Problème général de la stabilité du mouvement, Annales de la Faculté des Sciences de Toulouse, 9 (1907), 203-474.
[20] G. López and J.-A. Montero-Sánchez, Neumann boundary value problems across resonance, ESAIM: Control, Optimisation and Calculus of Variations, 12 (2006), 398-408. doi: 10.1051/cocv:2006009.
[21] J. Mawhin and J. Ward, Nonresonance and existence for nonlinear elliptic boundary value problems, Nonlinear Analysis: Theory, Methods & Applications, 5 (1981), 677-684. doi: 10.1016/0362-546X(81)90084-5.
[22] J. P. Pinasco, Lower bounds for eigenvalues of the one-dimensional p-Laplacian, Abstract and Applied Analysis, 2004 (2004), 147-153. doi: 10.1155/S108533750431002X.
[23] J. Qi and S. Chen, Extremal norms of the potentials recovered from inverse Dirichlet problems, Inverse Problems, 32 (2016), 035007, 13pp. doi: 10.1088/0266-5611/32/3/035007.
[24] K. Shen and M. Zhang, An optimal class of non-degenerate potentials for second-order ordinary differential equations, Boundary Value Problems, 2015 (2015), 1-17. doi: 10.1186/s13661-015-0451-0.
[25] X. Tang and M. Zhang, Lyapunov inequalities and stability for linear Hamiltonian systems, Journal of Differential Equations, 252 (2012), 358-381. doi: 10.1016/j.jde.2011.08.002.
[26] H. Wang and Y. Li, Two point boundary value problems for second-order ordinary differential equations across many resonant points, Journal of Mathematical Analysis and Applications, 179 (1993), 61-75. doi: 10.1006/jmaa.1993.1335.
[27] H. Wang and Y. Li, Neumann boundary value problems for second-order ordinary differential equations across resonance, SIAM Journal on Control and Optimization, 33 (1995), 1312-1325. doi: 10.1137/S036301299324532X.
[28] H. Wang and Y. Li, Existence and uniqueness of solutions to two point boundary value problems for ordinary differential equations, Zeitschrift für angewandte Mathematik und Physik ZAMP, 47 (1996), 373-384. doi: 10.1007/BF00916644.
[29] M. Zhang, Extremal values of smallest eigenvalues of Hill's operators with potentials in $L^1$ balls, Journal of Differential Equations, 246 (2009), 4188-4220. doi: 10.1016/j.jde.2009.03.016.
Figure 1. Comparison of the classical Lyapunov inequality, main results in [28] and our revised inequalities
Figure 2. The corresponding nontrivial solution $y(x)$
Figure 3. The nontrivial solution $y(x)$
# Setting up a web development environment
Before diving into JavaScript for Multiplicative Search Algorithms, it's important to have a solid web development environment set up. This section will guide you through the process of setting up a development environment that will allow you to write, test, and debug your JavaScript code.
To begin, you'll need to install a code editor. Some popular choices include Visual Studio Code, Sublime Text, and Atom. These editors provide syntax highlighting, code completion, and other features that make writing code more efficient.
Next, you'll need to install a web browser that supports JavaScript. Google Chrome and Mozilla Firefox are two popular choices. These browsers come with built-in developer tools that allow you to inspect and debug your code.
Once you have your code editor and web browser set up, you'll need to create an HTML file to write your JavaScript code. This file will serve as the container for your JavaScript code and any HTML elements you want to display on the web page.
To include your JavaScript code in the HTML file, you can either use an external JavaScript file or include your code directly within `<script>` tags. For example:
```html
<!DOCTYPE html>
<html>
<head>
<title>My JavaScript Project</title>
</head>
<body>
<h1>Welcome to my JavaScript project!</h1>
<script>
// Your JavaScript code goes here
</script>
</body>
</html>
```
## Exercise
Instructions:
- Set up a web development environment with a code editor, web browser, and HTML file.
- Write a simple JavaScript function that calculates the sum of two numbers and log the result to the console.
### Solution
```html
<!DOCTYPE html>
<html>
<head>
<title>My JavaScript Project</title>
</head>
<body>
<h1>Welcome to my JavaScript project!</h1>
<script>
function sum(a, b) {
return a + b;
}
console.log(sum(2, 3)); // Output: 5
</script>
</body>
</html>
```
# Understanding AJAX requests and DOM manipulation
AJAX (Asynchronous JavaScript and XML) is a technique for making full and partial page updates without requiring a complete page refresh. It allows you to send and receive data from a server without reloading the page, which makes your web application more responsive and efficient.
To use AJAX in JavaScript, you'll need to use the `XMLHttpRequest` object or the more modern `Fetch API`. These methods allow you to send HTTP requests to a server and receive the response asynchronously.
DOM (Document Object Model) manipulation is the process of dynamically modifying the structure and content of a web page using JavaScript. This can include adding, removing, or updating elements on the page, as well as handling user interactions with those elements.
For example, you can use JavaScript to create a new `<p>` element, add some text to it, and append it to the body of the HTML document. Here's a code snippet that demonstrates this:
```javascript
// Create a new paragraph element
const paragraph = document.createElement('p');
// Add some text to the paragraph
paragraph.textContent = 'This is a new paragraph.';
// Append the paragraph to the body of the document
document.body.appendChild(paragraph);
```
## Exercise
Instructions:
- Use the Fetch API to make an AJAX request to a public API (e.g., JSONPlaceholder) and retrieve some data.
- Use DOM manipulation to display the retrieved data on your web page.
### Solution
```html
<!DOCTYPE html>
<html>
<head>
<title>My JavaScript Project</title>
</head>
<body>
<h1>Welcome to my JavaScript project!</h1>
<script>
fetch('https://jsonplaceholder.typicode.com/posts/1')
.then(response => response.json())
.then(data => {
const title = document.createElement('h2');
title.textContent = data.title;
document.body.appendChild(title);
});
</script>
</body>
</html>
```
# Handling events and user interactions
Handling events and user interactions is an essential part of web development. It allows you to create interactive web applications that respond to user input and actions.
In JavaScript, you can use event listeners to listen for specific events, such as button clicks or form submissions. When an event occurs, the event listener's callback function is executed.
For example, you can use the `addEventListener` method to attach an event listener to a button. Here's a code snippet that demonstrates this:
```javascript
// Get a reference to the button element
const button = document.querySelector('button');
// Attach an event listener to the button
button.addEventListener('click', () => {
alert('Button clicked!');
});
```
## Exercise
Instructions:
- Create a form with an input field and a submit button.
- Use JavaScript to attach an event listener to the submit button.
- When the form is submitted, prevent the default behavior (which would refresh the page) and display an alert with the value of the input field.
### Solution
```html
<!DOCTYPE html>
<html>
<head>
<title>My JavaScript Project</title>
</head>
<body>
<h1>Welcome to my JavaScript project!</h1>
<form id="myForm">
<input type="text" name="inputField" placeholder="Enter some text">
<button type="submit">Submit</button>
</form>
<script>
const form = document.getElementById('myForm');
form.addEventListener('submit', (event) => {
event.preventDefault();
const inputField = document.querySelector('input[name="inputField"]');
alert(`You entered: ${inputField.value}`);
});
</script>
</body>
</html>
```
# Functional programming concepts in JavaScript
Functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids changing state and mutable data. JavaScript supports functional programming concepts, which can lead to more maintainable and testable code.
Some common functional programming concepts in JavaScript include:
- Pure functions: Functions that always produce the same output for the same input and have no side effects.
- Higher-order functions: Functions that take other functions as arguments or return functions as results.
- Immutable data: Data structures that cannot be changed once created.
For example, you can use the `Array.prototype.map` method to transform an array of numbers into an array of their squares. Here's a code snippet that demonstrates this:
```javascript
const numbers = [1, 2, 3, 4, 5];
const squares = numbers.map(number => number * number);
console.log(squares); // Output: [1, 4, 9, 16, 25]
```
## Exercise
Instructions:
- Write a pure function that takes an array of numbers and returns the sum of their squares.
- Use the `Array.prototype.reduce` method to implement the function.
### Solution
```javascript
function sumOfSquares(numbers) {
return numbers.reduce((sum, number) => sum + number * number, 0);
}
const numbers = [1, 2, 3, 4, 5];
console.log(sumOfSquares(numbers)); // Output: 55
```
# Using JSON for data manipulation
JSON (JavaScript Object Notation) is a lightweight data interchange format that is easy for humans to read and write and easy for machines to parse and generate. It is based on JavaScript's object syntax and is commonly used for transmitting data between a server and a web application.
In JavaScript, you can use the `JSON.parse` method to parse a JSON string and convert it into a JavaScript object. You can also use the `JSON.stringify` method to convert a JavaScript object into a JSON string.
For example, you can use the `fetch` function to retrieve JSON data from a server and then use the `JSON.parse` method to convert the data into a JavaScript object. Here's a code snippet that demonstrates this:
```javascript
fetch('https://jsonplaceholder.typicode.com/posts/1')
  .then(response => response.text())
  .then(text => {
    // Parse the raw JSON string into a JavaScript object
    const data = JSON.parse(text);
    console.log(data);
  });
```
## Exercise
Instructions:
- Use the Fetch API to retrieve JSON data from a server.
- Use the `JSON.parse` method to convert the data into a JavaScript object.
- Use the `JSON.stringify` method to convert the JavaScript object back into a JSON string.
### Solution
```html
<!DOCTYPE html>
<html>
<head>
<title>My JavaScript Project</title>
</head>
<body>
<h1>Welcome to my JavaScript project!</h1>
<script>
fetch('https://jsonplaceholder.typicode.com/posts/1')
    .then(response => response.text())
    .then(text => {
      // Parse the JSON string into an object, then serialize it back
      const data = JSON.parse(text);
      console.log(JSON.stringify(data));
    });
</script>
</body>
</html>
```
# Implementing Multiplicative Search Algorithm in JavaScript
The Multiplicative Search Algorithm is an efficient algorithm for searching for a specific value in a sorted array. The variant implemented below estimates where the key should lie from the values at the ends of the current search range and compares the value at that estimated position with the key; a constant factor is carried as a parameter of the general method.
To implement the Multiplicative Search Algorithm in JavaScript, you can follow these steps:
1. Define a function that takes an array, a search key, and a constant factor as arguments.
2. Initialize two variables to store the start and end indices of the search range.
3. Use a while loop to repeatedly estimate the key's position from the values at the ends of the current search range and update the start and end indices accordingly.
4. If the search key is found within the search range, return its index. Otherwise, return -1.
Here's a code snippet that demonstrates the implementation of the Multiplicative Search Algorithm in JavaScript:
```javascript
function multiplicativeSearch(array, key, factor) {
  // `factor` mirrors the general description; this position estimate
  // derives the probe index from the end-points of the current range
  let start = 0;
  let end = array.length - 1;

  while (start <= end) {
    // When the remaining values are all equal, avoid dividing by zero
    if (array[start] === array[end]) {
      return array[start] === key ? start : -1;
    }

    // Estimate where the key should sit within the current range
    const index = start + Math.floor(((key - array[start]) * (end - start)) / (array[end] - array[start]));

    // A probe outside the range means the key cannot be in the array
    if (index < start || index > end) {
      return -1;
    }

    if (array[index] === key) {
      return index;
    }

    if (array[index] < key) {
      start = index + 1;
    } else {
      end = index - 1;
    }
  }

  return -1;
}

const array = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
const key = 6;
const factor = 2;
console.log(multiplicativeSearch(array, key, factor)); // Output: 5
```
## Exercise
Instructions:
- Use the `multiplicativeSearch` function to search for a specific value in a sorted array.
- Print the result to the console.
### Solution
```html
<!DOCTYPE html>
<html>
<head>
<title>My JavaScript Project</title>
</head>
<body>
<h1>Welcome to my JavaScript project!</h1>
<script>
function multiplicativeSearch(array, key, factor) {
  // `factor` mirrors the general description; this position estimate
  // derives the probe index from the end-points of the current range
  let start = 0;
  let end = array.length - 1;

  while (start <= end) {
    // When the remaining values are all equal, avoid dividing by zero
    if (array[start] === array[end]) {
      return array[start] === key ? start : -1;
    }

    // Estimate where the key should sit within the current range
    const index = start + Math.floor(((key - array[start]) * (end - start)) / (array[end] - array[start]));

    // A probe outside the range means the key cannot be in the array
    if (index < start || index > end) {
      return -1;
    }

    if (array[index] === key) {
      return index;
    }

    if (array[index] < key) {
      start = index + 1;
    } else {
      end = index - 1;
    }
  }

  return -1;
}

const array = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
const key = 6;
const factor = 2;
console.log(multiplicativeSearch(array, key, factor)); // Output: 5
</script>
</body>
</html>
```
# Optimizing the algorithm for performance
To optimize the Multiplicative Search Algorithm for performance, you can use binary search to reduce the search range. This can significantly improve the algorithm's efficiency when dealing with large datasets.
Here's an example that implements a plain binary search alongside the `multiplicativeSearch` function so the two can be compared on the same data:
```javascript
function multiplicativeSearch(array, key, factor) {
  // `factor` mirrors the general description; this position estimate
  // derives the probe index from the end-points of the current range
  let start = 0;
  let end = array.length - 1;

  while (start <= end) {
    // When the remaining values are all equal, avoid dividing by zero
    if (array[start] === array[end]) {
      return array[start] === key ? start : -1;
    }

    // Estimate where the key should sit within the current range
    const index = start + Math.floor(((key - array[start]) * (end - start)) / (array[end] - array[start]));

    // A probe outside the range means the key cannot be in the array
    if (index < start || index > end) {
      return -1;
    }

    if (array[index] === key) {
      return index;
    }

    if (array[index] < key) {
      start = index + 1;
    } else {
      end = index - 1;
    }
  }

  return -1;
}

function binarySearch(array, key) {
  let start = 0;
  let end = array.length - 1;

  while (start <= end) {
    // Probe the midpoint of the current range
    const index = Math.floor((start + end) / 2);

    if (array[index] === key) {
      return index;
    }

    if (array[index] < key) {
      start = index + 1;
    } else {
      end = index - 1;
    }
  }

  return -1;
}

const array = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
const key = 6;

console.log(multiplicativeSearch(array, key, 2)); // Output: 5
console.log(binarySearch(array, key)); // Output: 5
```
## Exercise
Instructions:
- Modify the `multiplicativeSearch` function to use binary search.
- Test the modified function with a large dataset to compare its performance to the original implementation.
### Solution
```html
<!DOCTYPE html>
<html>
<head>
<title>My JavaScript Project</title>
</head>
<body>
<h1>Welcome to my JavaScript project!</h1>
<script>
function multiplicativeSearch(array, key, factor) {
  // `factor` mirrors the general description; this position estimate
  // derives the probe index from the end-points of the current range
  let start = 0;
  let end = array.length - 1;

  while (start <= end) {
    // When the remaining values are all equal, avoid dividing by zero
    if (array[start] === array[end]) {
      return array[start] === key ? start : -1;
    }

    // Estimate where the key should sit within the current range
    const index = start + Math.floor(((key - array[start]) * (end - start)) / (array[end] - array[start]));

    // A probe outside the range means the key cannot be in the array
    if (index < start || index > end) {
      return -1;
    }

    if (array[index] === key) {
      return index;
    }

    if (array[index] < key) {
      start = index + 1;
    } else {
      end = index - 1;
    }
  }

  return -1;
}

function binarySearch(array, key) {
  let start = 0;
  let end = array.length - 1;

  while (start <= end) {
    // Probe the midpoint of the current range
    const index = Math.floor((start + end) / 2);

    if (array[index] === key) {
      return index;
    }

    if (array[index] < key) {
      start = index + 1;
    } else {
      end = index - 1;
    }
  }

  return -1;
}

const array = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
const key = 6;

console.log(multiplicativeSearch(array, key, 2)); // Output: 5
console.log(binarySearch(array, key)); // Output: 5
</script>
</body>
</html>
```
# Testing and debugging the algorithm
To ensure that your implementation of the Multiplicative Search Algorithm is correct and efficient, you should thoroughly test it and debug any issues that arise.
Some common testing and debugging techniques in JavaScript include:
- Unit testing: Writing small test functions that verify the correctness of individual functions or components.
- Integration testing: Testing the interaction between different components of your application.
- Performance testing: Measuring the efficiency and speed of your algorithm.
For example, you can use the `assert` function to write a simple unit test for the `multiplicativeSearch` function:
```javascript
function assert(condition, message) {
if (!condition) {
throw new Error(message);
}
}
function testMultiplicativeSearch() {
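  // Assumes the multiplicativeSearch function from the earlier section is in scope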
const array = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
const key = 6;
const factor = 2;
const result = multiplicativeSearch(array, key, factor);
assert(result === 5, 'The algorithm did not return the correct index.');
}
testMultiplicativeSearch();
```
## Exercise
Instructions:
- Write a unit test for the `multiplicativeSearch` function.
- Run the test and fix any errors that occur.
### Solution
```html
<!DOCTYPE html>
<html>
<head>
<title>My JavaScript Project</title>
</head>
<body>
<h1>Welcome to my JavaScript project!</h1>
<script>
function assert(condition, message) {
if (!condition) {
throw new Error(message);
}
}
function testMultiplicativeSearch() {
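  // Assumes the multiplicativeSearch function from the earlier section is in scope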
const array = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
const key = 6;
const factor = 2;
const result = multiplicativeSearch(array, key, factor);
assert(result === 5, 'The algorithm did not return the correct index.');
}
testMultiplicativeSearch();
</script>
</body>
</html>
```
# Real-world applications of Multiplicative Search Algorithm in JavaScript
The Multiplicative Search Algorithm has various real-world applications, such as:
- Searching for a specific value in large datasets, such as in databases or data tables.
- Implementing efficient search functionality in web applications, such as search engines or e-commerce platforms.
- Optimizing algorithms for specific use cases, such as searching for a specific element in a sorted array or finding the closest point to a given location.
For example, you can use this search approach to implement product lookup in a web application that allows users to search for specific products by name. Since product names are strings, the interpolation step of `multiplicativeSearch` (which requires numeric keys) does not apply directly, so the snippet below searches a sorted array of names with the comparison-based `binarySearch`. Here's a code snippet that demonstrates this:
```html
<!DOCTYPE html>
<html>
<head>
<title>My JavaScript Project</title>
</head>
<body>
<h1>Welcome to my JavaScript project!</h1>
<input type="text" id="searchInput" placeholder="Search for a product">
<button onclick="search()">Search</button>
<ul id="results"></ul>
<script>
// Products are kept sorted by name so a comparison-based search applies.
const products = [
  { name: 'Product 1', price: 10 },
  { name: 'Product 2', price: 20 },
  { name: 'Product 3', price: 30 },
  // ...
];
// String comparison with < and === makes binary search work on names;
// interpolation arithmetic would not.
function binarySearch(array, key) {
  let start = 0;
  let end = array.length - 1;
  while (start <= end) {
    const index = Math.floor((start + end) / 2);
    if (array[index] === key) {
      return index;
    }
    if (array[index] < key) {
      start = index + 1;
    } else {
      end = index - 1;
    }
  }
  return -1;
}
function search() {
  const searchKey = document.getElementById('searchInput').value;
  const names = products.map(product => product.name);
  const resultIndex = binarySearch(names, searchKey);
  const results = document.getElementById('results');
  results.innerHTML = '';
  const listItem = document.createElement('li');
  if (resultIndex !== -1) {
    const result = products[resultIndex];
    listItem.textContent = `Found: ${result.name} - $${result.price}`;
  } else {
    listItem.textContent = 'No results found.';
  }
  results.appendChild(listItem);
}
</script>
</body>
</html>
```
# Future advancements in JavaScript and Multiplicative Search Algorithms
As the web development landscape continues to evolve, so too will JavaScript and its algorithms. Some potential future advancements include:
- Improved performance: New JavaScript engines and optimizations could lead to even faster execution times for algorithms like the Multiplicative Search Algorithm.
- New data structures: Developers may create more efficient data structures that can be used in conjunction with algorithms like the Multiplicative Search Algorithm.
- Machine learning integration: JavaScript libraries and frameworks may become more capable of integrating machine learning algorithms, allowing for more advanced search functionality.
In conclusion, the Multiplicative Search Algorithm is a powerful and efficient algorithm for searching for a specific value in a sorted array. By understanding its implementation and optimization techniques in JavaScript, you can apply this algorithm to a wide range of real-world applications and continue to develop your skills as a web developer.
Kurt Johansson (mathematician)
Kurt Johansson (born 1960) is a Swedish mathematician, specializing in probability theory.
Johansson received his PhD in 1988 from Uppsala University under the supervision of Lennart Carleson[1][2] and is a professor in mathematics at KTH Royal Institute of Technology.[3]
In 2002 Johansson was an invited speaker of the International Congress of Mathematicians in Beijing[4] and was awarded the Göran Gustafsson Prize. In 2006 he was elected a member of the Royal Swedish Academy of Sciences. In 2012 he was elected a fellow of the American Mathematical Society.
Selected publications
• Johansson, Kurt (1997). "On Random Matrices from the Compact Classical Groups". The Annals of Mathematics. 145 (3): 519–545. doi:10.2307/2951843. JSTOR 2951843.
• Johansson, Kurt (1998). "On fluctuations of eigenvalues of random Hermitian matrices". Duke Mathematical Journal. 91 (1): 151–204. doi:10.1215/S0012-7094-98-09108-6. ISSN 0012-7094.
• Baik, Jinho; Deift, Percy; Johansson, Kurt (1999). "On the distribution of the length of the longest increasing subsequence of random permutations". Journal of the American Mathematical Society. 12 (4): 1119–1179. doi:10.1090/S0894-0347-99-00307-0.
• Johansson, Kurt (2000). "Transversal fluctuations for increasing subsequences on the plane". Probability Theory and Related Fields. 116 (4): 445–456. doi:10.1007/s004400050258. hdl:2027.42/142448. S2CID 16313314.
• Johansson, Kurt (2000). "Shape Fluctuations and Random Matrices". Communications in Mathematical Physics. 209 (2): 437–476. arXiv:math/9903134. Bibcode:2000CMaPh.209..437J. doi:10.1007/s002200050027. ISSN 0010-3616. S2CID 16291076.
• Johansson, Kurt (2001). "Random Growth and Random Matrices". European Congress of Mathematics. Progress in Mathematics, vol. 201. pp. 445–456. doi:10.1007/978-3-0348-8268-2_25. ISBN 978-3-0348-9497-5.
• Johansson, Kurt (2001). "Discrete orthogonal polynomial ensembles and the Plancherel measure" (PDF). Annals of Mathematics. 153 (1): 259–296. arXiv:math/9906120. doi:10.2307/2661375. JSTOR 2661375. S2CID 14120881.
• Johansson, Kurt (2002). "Non-intersecting paths, random tilings and random matrices". Probability Theory and Related Fields. 123 (2): 225–280. arXiv:math/0011250. doi:10.1007/s004400100187. S2CID 17994807.
• Johansson, Kurt (2005). "Non-intersecting, simple, symmetric random walks and the extended Hahn kernel". Annales de l'Institut Fourier. 55 (6): 2129–2145. arXiv:math/0409013. doi:10.5802/aif.2155. ISSN 0373-0956. S2CID 8434266.
• Johansson, K. (2007). "From Gumbel to Tracy-Widom". Probability Theory and Related Fields. 138 (1–2): 75–112. doi:10.1007/s00440-006-0012-7. S2CID 15410267.
• Adler, Mark; Johansson, Kurt; Van Moerbeke, Pierre (2014). "Double Aztec diamonds and the tacnode process". Advances in Mathematics. 252: 518–571. doi:10.1016/j.aim.2013.10.012.
• Adler, Mark; Chhita, Sunil; Johansson, Kurt; Van Moerbeke, Pierre (2015). "Tacnode GUE-minor processes and double Aztec diamonds" (PDF). Probability Theory and Related Fields. 162 (1–2): 275–325. doi:10.1007/s00440-014-0573-9. S2CID 119126886.
• Johansson, Kurt (2019). "The two-time distribution in geometric last-passage percolation". Probability Theory and Related Fields. 175 (3–4): 849–895. doi:10.1007/s00440-019-00901-9.
References
1. Johansson, Kurt (1988). On Szegö's asymptotic formula for Toeplitz determinants and generalizations.
2. Kurt Johansson at the Mathematics Genealogy Project
3. "Kurt Johansson". kth.se.
4. Johansson, Kurt (2003). "Toeplitz determinants, random growth and determinant processes". Proceedings of the ICM, Beijing 2002. Vol. 3. pp. 53–62. arXiv:math/0304368.
# The Fourier transform and its applications
The Fourier transform is defined as a function that takes a continuous-time signal as input and returns its frequency components. The function is given by:
$$F(\omega) = \int_{-\infty}^{\infty} f(t) e^{-j\omega t} dt$$
where $F(\omega)$ represents the frequency component of the signal, $f(t)$ is the input signal, $j$ is the imaginary unit, and $\omega$ is the frequency.
The Fourier transform has several important properties:
- Time-frequency duality: The Fourier transform allows us to view a signal as a sum of sinusoids with different frequencies.
- Convolution theorem: The Fourier transform of a convolution of two functions equals the product of their transforms (and, dually, the transform of a pointwise product is, up to a constant, the convolution of the transforms).
- Shannon-Nyquist theorem: A band-limited signal can be reconstructed exactly from samples taken at a rate greater than twice its highest frequency, so its Fourier transform fully determines the continuous-time signal.
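In symbols, the convolution theorem for time-domain convolution reads:
$$\mathcal{F}\{(f * g)(t)\}(\omega) = F(\omega)\, G(\omega), \qquad (f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau$$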
## Exercise
Instructions:
- Calculate the Fourier transform of a given signal $f(t) = e^{-t} \cos(2\pi t)$.
- Use the Shannon-Nyquist theorem to discuss what sampling rate is needed to capture the signal's frequency content.
### Solution
- Strictly speaking, $e^{-t} \cos(2\pi t)$ blows up as $t \to -\infty$, so we interpret the signal as one-sided, i.e. $f(t) = 0$ for $t < 0$. Writing $\cos(2\pi t) = \frac{1}{2}(e^{j 2\pi t} + e^{-j 2\pi t})$ and integrating each exponential term, the Fourier transform of the given signal is:
$$F(\omega) = \int_{0}^{\infty} e^{-t} \cos(2\pi t) e^{-j\omega t} dt = \frac{1}{2}\left(\frac{1}{1 + j(\omega - 2\pi)} + \frac{1}{1 + j(\omega + 2\pi)}\right)$$
- The magnitude $|F(\omega)|$ peaks near $\omega = \pm 2\pi$ rad/s, i.e. 1 Hz. Since $|F(\omega)|$ decays only algebraically, the signal is not strictly band-limited; in the sense of the Shannon-Nyquist theorem, sampling at a rate well above 2 Hz captures essentially all of its energy.
The Fourier transform has numerous applications in spectral analysis, including signal processing, image processing, and data analysis. It is widely used in fields such as communication systems, radar systems, and medical imaging.
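As a numerical illustration (using NumPy, which is introduced in the next section), the continuous-time transform above can be approximated with the discrete Fourier transform; the sampling rate and duration below are arbitrary choices:
```python
import numpy as np

# Sample the one-sided signal f(t) = e^{-t} * cos(2*pi*t) on t >= 0.
fs = 100.0                       # sampling rate in Hz (arbitrary choice)
t = np.arange(0, 10, 1 / fs)     # 10 seconds of samples (arbitrary choice)
f = np.exp(-t) * np.cos(2 * np.pi * t)

# rfft computes the DFT of a real-valued signal; dividing by fs makes the
# result approximate the continuous-time Fourier integral.
F = np.fft.rfft(f) / fs
freqs = np.fft.rfftfreq(len(t), d=1 / fs)

# The magnitude spectrum peaks near 1 Hz, i.e. omega = 2*pi rad/s.
print(freqs[np.argmax(np.abs(F))])
```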
# Creating and visualizing data with Python
To create and visualize data with Python, we will use the NumPy library for numerical computations and the Matplotlib library for data visualization.
NumPy is a powerful library that provides support for large, multi-dimensional arrays and matrices, along with a collection of mathematical functions to operate on these arrays. It is widely used in scientific computing and data analysis.
To install NumPy and Matplotlib, you can use the following commands:
```
pip install numpy
pip install matplotlib
```
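As a minimal sketch of creating data once both libraries are installed (the sample size and seed below are arbitrary choices):
```python
import numpy as np

# Draw 1000 samples from a standard normal distribution. The seed is an
# arbitrary choice that makes the run reproducible.
rng = np.random.default_rng(seed=0)
data = rng.normal(loc=0.0, scale=1.0, size=1000)

print(data.shape)                # (1000,)
print(data.mean(), data.std())   # approximately 0 and 1
```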
# Introduction to Matplotlib
Matplotlib is a popular plotting library for Python that allows you to create a wide range of plots, including line plots, scatter plots, histograms, bar plots, and more. It provides a simple and intuitive interface for creating high-quality graphics, and supports various output formats such as PNG, PDF, and SVG.
To use Matplotlib in your Python code, you first need to import the necessary modules:
```python
import numpy as np
import matplotlib.pyplot as plt
```
# Basic plotting with Matplotlib
Matplotlib provides several functions for creating basic plots, such as line plots, scatter plots, and histograms. Here are some examples:
- Line plot:
```python
x = np.linspace(0, 10, 100)
y = np.sin(x)
plt.plot(x, y)
plt.show()
```
- Scatter plot:
```python
x = np.random.rand(50)
y = np.random.rand(50)
plt.scatter(x, y)
plt.show()
```
- Histogram:
```python
data = np.random.normal(0, 1, 1000)
plt.hist(data, bins=30)
plt.show()
```
# Advanced plotting with Matplotlib
Matplotlib also provides functions for creating advanced plots, such as bar plots, box plots, and contour plots. Here are some examples:
- Bar plot:
```python
categories = ['A', 'B', 'C']
values = [10, 20, 30]
plt.bar(categories, values)
plt.show()
```
- Box plot:
```python
data = np.random.normal(0, 1, 100)
plt.boxplot(data)
plt.show()
```
- Contour plot:
```python
x, y = np.meshgrid(np.linspace(0, 10, 100), np.linspace(0, 10, 100))
z = np.sin(x) + np.cos(y)
plt.contour(x, y, z)
plt.show()
```
# Customizing plots in Matplotlib
Matplotlib provides various functions for customizing plots, such as setting axis labels, title, and legend. Here are some examples:
- Setting axis labels and title:
```python
x = np.linspace(0, 10, 100)
y = np.sin(x)
plt.plot(x, y)
plt.xlabel('x-axis')
plt.ylabel('y-axis')
plt.title('Sine wave')
plt.show()
```
- Adding a legend:
```python
x = np.linspace(0, 10, 100)
y1 = np.sin(x)
y2 = np.cos(x)
plt.plot(x, y1, label='sin(x)')
plt.plot(x, y2, label='cos(x)')
plt.legend()
plt.show()
```
# Creating animations and interactive plots with Matplotlib
Matplotlib also provides functions for creating animations and interactive plots. Here are some examples:
- Animation:
```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
fig, ax = plt.subplots()
x = np.linspace(0, 10, 100)
y = np.sin(x)
line, = plt.plot(x, y)
def update(i):
    y = np.sin(x + i)
    line.set_ydata(y)
    return line,
ani = animation.FuncAnimation(fig, update, frames=100, interval=50, blit=True)
plt.show()
```
- Interactive plot:
```python
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 10, 100)
y = np.sin(x)
fig, ax = plt.subplots()
line, = plt.plot(x, y)
def on_click(event):
    x = np.linspace(0, 10, 100)
    y = np.cos(x)
    line.set_ydata(y)
    fig.canvas.draw_idle()
fig.canvas.mpl_connect('button_press_event', on_click)
plt.show()
```
# Applications of spectral analysis in Python
Some common applications of spectral analysis in Python include:
- Signal processing: Filtering and denoising signals, detecting and tracking objects in images, and analyzing speech and audio signals.
- Image processing: Feature extraction, image segmentation, and object recognition in images.
- Data analysis: Time series analysis, frequency-domain analysis, and modeling of complex systems (a short sketch follows below).
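As a minimal sketch of frequency-domain analysis, the snippet below estimates the power spectral density of a noisy sine wave with `plt.psd`; the 50 Hz tone, noise level, and sampling rate are arbitrary choices:
```python
import numpy as np
import matplotlib.pyplot as plt

# A 50 Hz sine buried in Gaussian noise.
fs = 1000                        # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)      # 2 seconds of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(len(t))

# plt.psd estimates the power spectral density (Welch's method);
# the estimate peaks at 50 Hz.
plt.psd(signal, NFFT=256, Fs=fs)
plt.show()
```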
# Real-world examples of spectral analysis and Matplotlib
- Analyzing the frequency components of an audio signal using the Fourier transform (a spectrogram sketch follows after this list).
- Visualizing the power spectral density of a time series using Matplotlib.
- Detecting and tracking objects in images using the Fourier transform and image processing techniques.
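A minimal sketch of the first example: a synthetic chirp stands in for an audio recording, and `plt.specgram` displays how its frequency content evolves over time (all parameters are illustrative):
```python
import numpy as np
import matplotlib.pyplot as plt

# A chirp whose instantaneous frequency rises from 100 Hz to 400 Hz.
fs = 2000                        # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)      # 2 seconds of samples
audio = np.sin(2 * np.pi * (100 + 75 * t) * t)

# plt.specgram plots a short-time Fourier transform of the signal.
plt.specgram(audio, NFFT=256, Fs=fs)
plt.xlabel('Time [s]')
plt.ylabel('Frequency [Hz]')
plt.show()
```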
# Conclusion and further resources
In this textbook, we have covered the fundamentals of spectral analysis and graphing with Matplotlib in Python. We have discussed the Fourier transform and its applications, and learned how to create and visualize data, create basic plots, advanced plots, and customize plots with Matplotlib. We have also explored the creation of animations and interactive plots, and discussed the applications of spectral analysis in Python.
To further explore the topics covered in this textbook, you can refer to the following resources:
- The official Matplotlib documentation: https://matplotlib.org/stable/contents.html
- The official NumPy documentation: https://numpy.org/doc/stable/
- The official Python scientific computing library, SciPy: https://www.scipy.org/
- The official Python signal processing library, SciPy Signal: https://docs.scipy.org/doc/scipy/reference/signal.html
We hope you have found this textbook helpful in learning the fundamentals of spectral analysis and graphing with Matplotlib in Python. Happy coding!
Ring of mixed characteristic
In commutative algebra, a ring of mixed characteristic is a commutative ring $R$ having characteristic zero and having an ideal $I$ such that $R/I$ has positive characteristic.[1]
Examples
• The integers $\mathbb {Z} $ have characteristic zero, but for any prime number $p$, $\mathbb {F} _{p}=\mathbb {Z} /p\mathbb {Z} $ is a finite field with $p$ elements and hence has characteristic $p$.
• The ring of integers of any number field is of mixed characteristic
• Fix a prime p and localize the integers at the prime ideal (p). The resulting ring Z(p) has characteristic zero. It has a unique maximal ideal pZ(p), and the quotient Z(p)/pZ(p) is a finite field with p elements. In contrast to the previous example, the only possible characteristics for rings of the form Z(p)/I are zero (when I is the zero ideal) and powers of p (when I is any other non-unit ideal); it is not possible to have a quotient of any other characteristic.
• If $P$ is a non-zero prime ideal of the ring ${\mathcal {O}}_{K}$ of integers of a number field $K$, then the localization of ${\mathcal {O}}_{K}$ at $P$ is likewise of mixed characteristic.
• The p-adic integers Zp for any prime p are a ring of characteristic zero. However, they have an ideal generated by the image of the prime number p under the canonical map Z → Zp. The quotient Zp/pZp is again the finite field of p elements. Zp is an example of a complete discrete valuation ring of mixed characteristic.
• The integers, the ring of integers of any number field, and any localization or completion of one of these rings is a characteristic zero Dedekind domain.
References
1. Bergman, George M.; Hausknecht, Adam O. (1996), Co-groups and co-rings in categories of associative rings, Mathematical Surveys and Monographs, vol. 45, American Mathematical Society, Providence, RI, p. 336, doi:10.1090/surv/045, ISBN 0-8218-0495-2, MR 1387111.
Tsen's theorem
In mathematics, Tsen's theorem states that a function field K of an algebraic curve over an algebraically closed field is quasi-algebraically closed (i.e., C1). This implies that the Brauer group of any such field vanishes,[1] and more generally that all the Galois cohomology groups H^i(K, K*) vanish for i ≥ 1. This result is used to calculate the étale cohomology groups of an algebraic curve.
The theorem was published by Chiungtze C. Tsen in 1933.
See also
• Tsen rank
References
1. Lorenz, Falko (2008). Algebra. Volume II: Fields with Structure, Algebras and Advanced Topics. Springer. p. 181. ISBN 978-0-387-72487-4. Zbl 1130.12001.
• Ding, Shisun; Kang, Ming-Chang; Tan, Eng-Tjioe (1999), "Chiungtze C. Tsen (1898–1940) and Tsen's theorems", Rocky Mountain Journal of Mathematics, 29 (4): 1237–1269, doi:10.1216/rmjm/1181070405, ISSN 0035-7596, MR 1743370, Zbl 0955.01031
• Lang, Serge (1952), "On quasi algebraic closure", Annals of Mathematics, Second Series, 55: 373–390, doi:10.2307/1969785, ISSN 0003-486X, JSTOR 1969785, Zbl 0046.26202
• Serre, J. P. (2002), Galois Cohomology, Springer Monographs in Mathematics, Translated from the French by Patrick Ion, Berlin: Springer-Verlag, ISBN 3-540-42192-0, Zbl 1004.12003
• Tsen, Chiungtze C. (1933), "Divisionsalgebren über Funktionenkörpern", Nachr. Ges. Wiss. Göttingen, Math.-Phys. Kl. (in German): 335–339, JFM 59.0160.01, Zbl 0007.29401
\begin{document}
\title[Almost sure existence for EMHD]{Almost sure existence of global weak solutions for supercritical electron MHD}
\author [Mimi Dai]{Mimi Dai}
\address{Department of Mathematics, Statistics and Computer Science, University of Illinois at Chicago, Chicago, IL 60607, USA} \address{School of Mathematics, Institute for Advanced Study, Princeton, NJ 08540, USA}
\email{[email protected]}
\begin{abstract} We consider the Cauchy problem for the electron magnetohydrodynamics model in the supercritical regime. For rough initial data in $\mathcal H^{-s}(\mathbb T^n)$ with $s>0$, we obtain global in time weak solutions almost surely via an appropriate randomization of the initial data.
KEY WORDS: magnetohydrodynamics; supercritical; randomization; almost sure existence.
CLASSIFICATION CODE: 35Q35, 76D03, 76W05. \end{abstract}
\maketitle
\section{Introduction}
With static background flow the electron magnetohydrodynamics (MHD) is modeled by \begin{equation}\label{emhd0} \begin{split} B_t+\nabla\times((\nabla\times B)\times B)=&\ \Delta B,\\ \nabla\cdot B=&\ 0 \end{split} \end{equation} where the nonlinear term captures the Hall effect. It is a subsystem of the full MHD system with Hall effect, which has attracted much attention recently, see \cite{ADFL, CDL, CWeng}. The magnetic field considered in this paper takes the form $B(x,t)=(B_1(x, t), B_2(x, t),B_3(x, t))$ for either $x\in \mathbb T^2$ or $x\in \mathbb T^3$. In particular, in the former case, the problem is regarded as one for a 3-dimensional (3D) magnetic field posed on the 2D torus (cf. \cite{CW}). One notices that the highest order derivative appears both in the linear diffusion and the quadratic Hall term, rendering (\ref{emhd0}) a quasilinear system. Besides other difficulties caused by the peculiar geometric structure of the Hall term, the quasilinear feature is a major obstacle in the analysis of (\ref{emhd0}). We will further illustrate this point in the discussion of the scaling property. System (\ref{emhd0}) has the natural scaling that if $B(x,t)$ solves the system with initial data $B_0(x)$, then the rescaled vector field \[B_\lambda(x,t)=B(\lambda x, \lambda^{2}t)\] for an arbitrary parameter $\lambda$ solves the system as well with initial data $B_0(\lambda x)$. Some scaling invariant spaces (also referred to as critical spaces) with embedding for (\ref{emhd0}) in $n$-dimensional space are \begin{equation}\label{critical1} \dot{\mathcal H}^{\frac{n}2}\hookrightarrow L^{\infty}\hookrightarrow \dot B^{\frac{n}{p}}_{p,\infty} \hookrightarrow BMO \hookrightarrow \dot B^{0}_{\infty,\infty}, \ \ \ 1<p<\infty. \end{equation}
In view of (\ref{critical1}) we note the energy space $L^2(\mathbb T^n)$ is supercritical in both 2D and 3D, and system (\ref{emhd0}) is supercritical in both situations. Thus it is naturally challenging to analyze (\ref{emhd0}) even in 2D.
From the perspective of mathematics, in order to understand the competition between the nonlinear Hall effect and the linear term, we consider the electron MHD with generalized diffusion \begin{equation}\label{emhd} \begin{split} B_t+\nabla\times((\nabla\times B)\times B)=&-(-\Delta)^\alpha B,\\ \nabla\cdot B=&\ 0 \end{split} \end{equation} on $[0,\infty)\times \mathbb T^n$, $n=2,3$, for $\alpha>0$. The MHD system with fractional diffusion was previously studied, for instance see \cite{CWW}. System (\ref{emhd}) possesses the scaling \begin{equation}\notag B_\lambda(x,t)=\lambda^{2\alpha-2}B(\lambda x, \lambda^{2\alpha}t), \end{equation} according to which, some critical spaces with embedding for (\ref{emhd}) are \begin{equation}\label{critical2} \dot{\mathcal H}^{\frac{n}2+2-2\alpha}\hookrightarrow L^{\frac{n}{2\alpha-2}}\hookrightarrow \dot B^{2-2\alpha+\frac{n}{p}}_{p,\infty} \hookrightarrow BMO^{2-2\alpha} \hookrightarrow \dot B^{2-2\alpha}_{\infty,\infty}, \ 1<p<\infty. \end{equation}
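To see where the exponent $2\alpha-2$ comes from, write $B_\lambda(x,t)=\lambda^{a}B(\lambda x, \lambda^{2\alpha}t)$: the time derivative and the diffusion $(-\Delta)^\alpha B_\lambda$ both carry the factor $\lambda^{a+2\alpha}$, while the Hall term, being quadratic in $B$ with two derivatives, carries $\lambda^{2a+2}$. Invariance thus requires \begin{equation}\notag a+2\alpha=2a+2, \qquad\text{i.e.}\quad a=2\alpha-2. \end{equation}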
The basic energy law for (\ref{emhd}) is given by \begin{equation}\label{energy}
\|B(x,t)\|_{L^2}^2+\int_0^t\|\nabla^\alpha B(x,s)\|_{L^2}^2\, ds=\|B(x,0)\|_{L^2}^2. \end{equation} It follows from (\ref{energy}) that solutions of (\ref{emhd}) satisfy the a priori estimates \[B\in L^\infty([0,T); L^2(\mathbb T^n)) \cap L^2([0,T); H^\alpha (\mathbb T^n)).\] On the other hand we see from (\ref{critical2}) that the Sobolev space $\dot{\mathcal H}^{3-2\alpha}$ is critical for (\ref{emhd}) in 2D, while $\dot{\mathcal H}^{\frac72-2\alpha}$ is critical in 3D. Since $\dot{\mathcal H}^{3-2\alpha}=L^2$ for $\alpha=\frac32$ and $\dot{\mathcal H}^{\frac72-2\alpha}=L^2$ for $\alpha=\frac74$, system (\ref{emhd}) is critical in 2D when $\alpha=\frac32$ and critical in 3D when $\alpha=\frac74$. For $\alpha>\frac32$ in 2D and $\alpha>\frac74$ in 3D, system (\ref{emhd}) is referred to be subcritical in which situation the linear term dominates, and hence global regularity is known to hold by standard energy method. While for $\alpha<\frac32$ in 2D and $\alpha<\frac74$ in 3D (including $\alpha=1$), system (\ref{emhd}) is supercritical and challenging in general. In this paper, we consider (\ref{emhd}) in the supercritical and critical setting, i.e. $\alpha\leq\frac32$ in 2D and $\alpha\leq\frac74$ in 3D.
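The cancellation behind (\ref{energy}) is worth recording: testing the first equation of (\ref{emhd}) with $B$ and integrating by parts on the torus, the Hall term drops out since \begin{equation}\notag \left<\nabla\times((\nabla\times B)\times B), B\right>=\left<(\nabla\times B)\times B, \nabla\times B\right>=0, \end{equation} because $(u\times v)\cdot u=0$ pointwise; the linear terms then yield (\ref{energy}).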
When the initial data is rather regular, say in $\dot{\mathcal H}^{s}$ with $s>\frac n2+2-2\alpha$, well-posedness of (\ref{emhd}) is expected, see \cite{Dai2} for instance. With initial data $B_0\in L^2(\mathbb T^n)$, one can obtain weak solutions for (\ref{emhd}) by using a Galerkin approximation approach. Nevertheless, for rough initial data below $L^2(\mathbb T^n)$, it is not clear how to construct weak solutions for (\ref{emhd}). The situation is similar for many other equations, like the Navier-Stokes equation, the nonlinear Schr\"odinger equation, the nonlinear wave equation, etc. The purpose of the paper is to construct global in time weak solutions to (\ref{emhd}) by randomizing initial data in Sobolev spaces $\dot{\mathcal H}^{-s}(\mathbb T^n)$ with $n=2,3$ and $s>0$. For a given rough initial data $f\in \dot{\mathcal H}^{-s}(\mathbb T^n)$ with $\nabla\cdot f=0$ and $\int_{\mathbb T^n} f\, dx=0$, we randomize it appropriately to $f^\omega$ satisfying $\nabla\cdot f^\omega=0$. We then consider a solution of (\ref{emhd}) with the initial data $f^\omega$ in the form \[B(x,t)= e^{-t(-\Delta)^\alpha}f^\omega (x)+H(x,t)\] where $H$ satisfies a nonlinear equation that depends on $e^{-t(-\Delta)^\alpha}f^\omega$ and obviously $H(x,0)=0$. A crucial point is that the free evolution $e^{-t(-\Delta)^\alpha}f^\omega$ has almost surely improved $L^p$ estimates thanks to the randomization of the data. As a consequence, this makes it possible to construct a global in time weak solution $H$.
The study of well-posedness for randomized initial data was initiated by Bourgain in \cite{Bou96} for the supercritical nonlinear Schr\"odinger equation. The random data Cauchy problem was investigated for the supercritical wave equation by Burq and Tzvetkov \cite{BT1, BT2}, eventually leading to a global existence theory. Applying the method from \cite{BT1, BT2}, Nahmod, Pavlovi\'c and Staffilani \cite{NPS} showed almost sure existence of global weak solutions for the Navier-Stokes equation (NSE) below $L^2(\mathbb T^n)$. An almost sure global well-posedness result for the 2D nonlinear Schr\"odinger equation with random radial initial data in the supercritical regime was established by Deng \cite{Deng}. Later on, L\"uhrmann and Mendelson \cite{LM} established the random data Cauchy theory for nonlinear wave equations of power-type on $\mathbb R^3$. The random data Cauchy problem has also been treated for various other equations where a deterministic Cauchy theory is hard to achieve. We do not intend to give an extensive list here.
In applying the framework of random data Cauchy theory to (\ref{emhd}) in this paper, the main difficulty comes from the strong nonlinear effect. The Hall term of (\ref{emhd}) carries one more derivative than the nonlinear term $(u\cdot\nabla) u$ of the well-known NSE. It is a general belief that the nonlinearity $(u\cdot\nabla) u$ poses intrinsic obstacles to cracking the global regularity problem for the 3D NSE, see \cite{Tao}. The presence of this strong nonlinear term and the quasilinear feature of (\ref{emhd}) prevent us from showing global existence of weak solutions with randomized data for $\alpha=1$ in both 2D and 3D. Nevertheless, for larger values of $\alpha$ still below the critical exponent, we are able to obtain almost sure existence of global weak solutions for (\ref{emhd}) in the supercritical regime.
\section{Main results}\label{statement}
In this section we fix notations to be used throughout the text, introduce the procedure of randomization, and then state the main results.
\subsection{Notations} We denote by $C$ a constant in estimates which may vary from line to line. When it is not necessary to track the constant, $f\lesssim g$ is used to denote $f\leq Cg$ for some constant $C>0$.
Note the inhomogeneous and homogeneous Sobolev spaces are equivalent on torus. In the rest of the paper we only use $\mathcal H^s$ to denote the Sobolev space. We further denote \begin{equation}\notag \begin{split}
\mathcal H=&\ \mbox{the closure of} \ \{ f\in C^\infty(\mathbb T^n)| \nabla\cdot f=0\} \ \mbox{in} \ L^2(\mathbb T^n),\\
\mathcal V_\alpha=&\ \mbox{the closure of} \ \{ f\in C^\infty(\mathbb T^n)| \nabla\cdot f=0\} \ \mbox{in} \ \mathcal H^\alpha(\mathbb T^n),\\ \mathcal V'_\alpha=&\ \mbox{the dual of}\ \mathcal V_\alpha. \end{split} \end{equation} The inner product in $L^2(\mathbb T^n)$ is denoted by \[\langle f, g \rangle =\int_{\mathbb T^n} f\cdot g \, dx.\]
\subsection{Notion of randomization}
We first recollect the large deviation estimates established in \cite{BT2}.
\begin{Lemma}\label{BT} Let $(l_i(\omega))_{i=1}^\infty$ be a sequence of real-valued, zero-mean and independent random variables on a probability space $(\Omega, \mathcal A, P)$ with associated distributions $(\mu_i)_{i=1}^\infty$. Assume that there exists $c>0$ such that \begin{equation}\label{ass1}
\left|\int_{-\infty}^{\infty}e^{\gamma x}\, d\mu_i(x)\right|\leq e^{c\gamma^2} \ \ \ \ \forall \gamma\in\mathbb R \ \ \ \forall i\geq 1. \end{equation} Then there exists $\beta>0$ such that
\begin{equation}\notag P\left(\omega: \left|\sum_{i=1}^\infty c_il_i(\omega)\right|>\lambda\right)\leq 2e^{-\frac{\beta \lambda^2}{\sum_{i=1}^\infty c_i^2}} \ \ \ \ \forall \lambda>0 \ \ \ \forall (c_i)_{i=1}^\infty\in \ell^2. \end{equation} Consequently, there exists another constant $c>0$ such that \begin{equation}\notag
\left\|\sum_{i=1}^\infty c_il_i(\omega)\right\|_{L^q(\Omega)}\leq c \sqrt q\left(\sum_{i=1}^\infty c_i^2\right)^{\frac12}
\ \ \ \ \forall q\geq 2 \ \ \ \forall (c_i)_{i=1}^\infty\in \ell^2. \end{equation} \end{Lemma}
We point out that both the standard Gaussian and Bernoulli variables satisfy the assumption (\ref{ass1}), see \cite{BT2}.
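For instance, for a standard Gaussian variable a direct computation gives \begin{equation}\notag \int_{-\infty}^{\infty}e^{\gamma x}\, d\mu(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}e^{\gamma x-\frac{x^2}{2}}\, dx=e^{\frac{\gamma^2}{2}}, \end{equation} while for a Bernoulli variable taking values $\pm 1$ with probability $\frac12$ the left-hand side equals $\cosh\gamma\leq e^{\frac{\gamma^2}{2}}$; in both cases (\ref{ass1}) holds with $c=\frac12$.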
We follow \cite{NPS} to introduce the diagonal randomization on the Sobolev space $\mathcal H^s(\mathbb T^n)$ as follows.
\begin{Definition}\label{def-rand} Let $(l_k(\omega))_{k\in \mathbb Z^n}$ be a sequence of real-valued and independent random variables on the probability space $(\Omega,\mathcal A, P)$ as in Lemma \ref{BT}. Let $e_k(x)=e^{ik\cdot x}$ for any $k\in \mathbb Z^n$. For a vector field $f=(f_1, f_2, ..., f_n)\in \mathcal H^s(\mathbb T^n)$ with Fourier coefficients $(a_{k})_{k\in \mathbb Z^n}$ and $a_k=(a_k^1, a_k^2, ..., a_k^n)$, the map \begin{equation}\label{R} \begin{split} \mathcal R: (\Omega, \mathcal A) & \longrightarrow \mathcal H^s(\mathbb T^n)\\ \omega & \longrightarrow f^\omega, \ \ \ f^\omega(x)=\left(\sum_{k\in\mathbb Z^n}l_k(\omega)a_k^1 e_k(x), ... , \sum_{k\in\mathbb Z^n}l_k(\omega)a_k^n e_k(x)\right) \end{split} \end{equation} equipped with the Borel sigma algebra is introduced. The map $\mathcal R$ is called randomization. \end{Definition}
It follows from Lemma \ref{BT} that the map $\mathcal R$ is measurable and $f^\omega$ is an $\mathcal H^s(\mathbb T^n)$-valued random variable. Moreover, we have
\begin{equation}\notag f^\omega\in L^2(\Omega; \mathcal H^s(\mathbb T^n)), \ \ \ \|f^\omega\|_{\mathcal H^s}\sim \|f\|_{\mathcal H^s}. \end{equation} Indeed, as shown in \cite{BT2}, the randomization $\mathcal R$ does not provide regularization of $\mathcal H^s$ in terms of the regularity index $s$. Nevertheless, it gives rise to improved $L^p$ estimates almost surely.
\subsection{Statement of the main results}
\begin{Definition}\label{def-sol1} Let $\alpha>1$. Let $f\in \mathcal H^{-s}(\mathbb T^n)$ with $s>0$ and \begin{equation}\notag \nabla\cdot f=0, \ \ \ \int_{\mathbb T^n} f\, dx=0. \end{equation} A function $B(x,t)$ is said to be a weak solution of the electron MHD (\ref{emhd}) with initial data $f$ on $[0,T]$ if \begin{equation}\notag \langle\frac{d B}{dt}, \phi\rangle+\left<\nabla^\alpha B, \nabla^\alpha \phi\right>+\left<(\nabla\times B)\times B, \nabla\times \phi\right>=0 \ \ \ \mbox{for a.e. } \ \ t \ \ \mbox{and for all } \ \ \phi\in \mathcal V'_\alpha, \end{equation} \[B\in L^2_{loc}((0,T); \mathcal V_\alpha (\mathbb T^n))\cap L_{loc}^\infty((0,T); \mathcal H(\mathbb T^n))\cap C_{weak}((0,T); \mathcal H^{-s}(\mathbb T^n)),\] \[\frac{d B}{dt}\in L^1_{loc}((0,T); \mathcal V'_\alpha (\mathbb T^n)),\] and \begin{equation}\notag \lim_{t\to 0^+}B(t)= f \ \ \mbox{weakly in } \ \ \mathcal H^{-s}(\mathbb T^n). \end{equation} \end{Definition}
\begin{Theorem}\label{thm-2d} Let $f$ be as in Definition \ref{def-sol1}. Let $\alpha\in [\frac43, \frac32]$. Assume $s\in(0, 2\alpha-\frac52)$. There exists a set $\Sigma\subset \Omega$ with $P(\Sigma)=1$ such that for any $\omega\in\Sigma$ the electron MHD (\ref{emhd}) with initial data $f^\omega$ on $\mathbb T^2$ has a global in time weak solution $B$ of the form \begin{equation}\label{sol-split} B=B_{f^\omega}+H \end{equation} with $B_{f^\omega}=e^{-t(-\Delta)^\alpha} f^\omega$ and \begin{equation}\notag H\in L^\infty([0,\infty); L^2(\mathbb T^2))\cap L^2([0,\infty); \mathcal H^\alpha(\mathbb T^2)). \end{equation} In addition, if $\alpha\geq \frac32$, the solution is regular and unique. \end{Theorem}
\begin{Theorem}\label{thm-3d} Consider (\ref{emhd}) on $\mathbb T^3$. Let $\alpha\in (\frac{11}{8}, \frac74]$. Assume $s\in (0, 2\alpha-\frac{11}{4})$. Then the first statement of Theorem \ref{thm-2d} holds. In addition, if $\alpha\geq \frac74$, the solution is regular and unique. \end{Theorem}
\section{Outline of the proof of the main results}\label{sec-outline}
The strategy of showing existence of weak solutions for the electron MHD (\ref{emhd}) with initial data $f^\omega$ is to look for solutions in the form $B=B_{f^\omega}+H$ with the linear part \[B_{f^\omega}=e^{-t(-\Delta)^\alpha} f^\omega, \ \ \ B_{f^\omega}(x,0)=f^\omega(x)\] and the remaining nonlinear part $H$. Denote the bilinear operator \begin{equation}\notag \mathcal B(u,v)=(\nabla\times u)\times v. \end{equation} Note that
\[\mathcal B(u,u)= (u\cdot\nabla )u-\nabla \frac{|u|^2}{2}. \] If $\nabla\cdot u=0$, we can further write
\[\mathcal B(u,u)= \nabla\cdot(u\otimes u)-\nabla \frac{|u|^2}{2}. \] One can check that if $B$ satisfies (\ref{emhd}) with initial data $f^\omega$, the nonlinear part $H$ solves the Cauchy problem \begin{equation}\label{H} \begin{split} H_t+\nabla\times\mathcal B(H, H)+\nabla\times\mathcal B(H, B_{f^\omega})\ \ \ &\\ +\nabla\times\mathcal B(B_{f^\omega}, H)+\nabla\times\mathcal B(B_{f^\omega}, B_{f^\omega})=&-(-\Delta)^\alpha H,\\ \nabla\cdot H=&\ 0,\\ H(x, 0)=&\ 0. \end{split} \end{equation} In order to prove Theorems \ref{thm-2d} and \ref{thm-3d}, it is sufficient to show existence of global in time weak solutions for (\ref{H}) on $\mathbb T^n$ with $n=2,3$. We thus proceed to define weak solutions for (\ref{H}) and formulate the existence theorem.
\begin{Definition}\label{def2} A function $H(x,t)$ is said to be a weak solution of (\ref{H}) on $[0,T]$ if \begin{equation}\notag \begin{split} \langle\frac{d H}{dt}, \phi\rangle&+\left<\nabla^\alpha H, \nabla^\alpha \phi\right>+\left<\mathcal B(H, H), \nabla\times \phi\right>\\ &+\left<\mathcal B(H, B_{f^\omega}), \nabla\times \phi\right> +\left<\mathcal B(B_{f^\omega}, H), \nabla\times \phi\right> +\left<\mathcal B(B_{f^\omega}, B_{f^\omega}), \nabla\times \phi\right>=0 \\ &\ \ \ \mbox{for a.e. } \ \ t \ \ \mbox{and for all } \ \ \phi\in \mathcal V'_\alpha, \end{split} \end{equation} \[H\in L^2((0,T); \mathcal V_\alpha (\mathbb T^n))\cap L^\infty((0,T); \mathcal H(\mathbb T^n)),\ \ \ \ \frac{d H}{dt}\in L^1((0,T); \mathcal V'_\alpha(\mathbb T^n)),\] and \begin{equation}\notag \lim_{t\to 0^+}H(t)= 0 \ \ \mbox{weakly in } \ \ \mathcal H^{-s}(\mathbb T^n). \end{equation} \end{Definition}
Denote \begin{equation}\label{B-norms} \begin{split}
B_{f^\omega}(\alpha, \beta, s, \gamma, T)
:=&\ \|t^\gamma B_{f^\omega}\|_{L^{p_1}([0,T]; L^{q_1}(\mathbb T^n))}+\|t^\gamma B_{f^\omega}\|_{L^{p_2}([0,T]; L^{q_2}(\mathbb T^n))}\\
&+\|t^\gamma (-\Delta)^{\frac{2-\alpha+\beta}{2}} B_{f^\omega}\|_{L^{p_3}([0,T]; L^{q_3}(\mathbb T^n))}\\
&+\|t^\gamma (-\Delta)^{2-\alpha} B_{f^\omega}\|_{L^{p_4}([0,T]; L^{q_4}(\mathbb T^n))} \end{split} \end{equation} where the parameters $p_i$ and $q_i$ with $1\leq i\leq 4$ are given by \[p_1=\frac{2\alpha}{2\alpha-2-\epsilon} \ \ \mbox{for a small enough}\ \epsilon>0, \ \ \ \ q_1\gg 1,\] \[p_2=q_2=\frac{4\alpha}{2\alpha-\beta-2}, \ \ p_3=q_3=\frac{4\alpha}{\beta+2},\ \ p_4=2, \ \ q_4\gg 1.\]
\begin{Theorem}\label{thm-H} Fix $\lambda>0$. For $n=2$, let $\alpha\in [\frac43, \frac32]$, $s\in(0, 2\alpha-\frac52)$ and $\gamma<0$ such that $0<s<2\alpha-\frac52+2\alpha\gamma$. For $n=3$, let $\alpha\in (\frac{11}{8}, \frac74]$, $s\in(0, 2\alpha-\frac{11}{4})$ and $\gamma<0$ such that $0<s<2\alpha-\frac{11}{4}+2\alpha\gamma$. Assume the free evolution $B_{f^\omega}$ satisfies \begin{equation}\label{ass-bf1} \begin{split}
\|B_{f^\omega}\|_{L^2(\mathbb T^n)}\lesssim&\ (1+t^{-\frac{s}{2\alpha}}), \\
\|\nabla^m B_{f^\omega}\|_{L^\infty(\mathbb T^n)}\lesssim&\ (\max\{t^{-\frac12}, t^{-(\frac{2m+n+2s}{2\alpha})}\})^{\frac12} \ \ \mbox{for} \ \ m=0,1, 2 \end{split} \end{equation} and \begin{equation}\label{ass-bf2}
B_{f^\omega}(\alpha, \beta, s, \gamma, T) \leq \lambda. \end{equation} Then there exists a weak solution $H(x,t)$ to the Cauchy problem (\ref{H}) in the sense of Definition \ref{def2}. The solution is unique in 2D for $\alpha\geq \frac32$ and in 3D for $\alpha\geq \frac74$. \end{Theorem}
\textbf{Proof of Theorems \ref{thm-2d} and \ref{thm-3d}:} Under the conditions on $\alpha$ and $s$ of Theorems \ref{thm-2d} and \ref{thm-3d}, one can find an appropriate constant $\gamma<0$ such that the assumptions on the parameters of Theorem \ref{thm-H} are satisfied. By Lemmas \ref{le-heat3} and \ref{le-heat4}, the assumption (\ref{ass-bf2}) is satisfied almost surely. On the other hand, assumption (\ref{ass-bf1}) is guaranteed by Lemma \ref{le-lin1}. Thus the existence of a global weak solution $H(x,t)$ to system (\ref{H}) follows from Theorem \ref{thm-H}. Consequently we obtain the existence of a global weak solution $B(x,t)=B_{f^\omega}(x,t)+H(x,t)$ to (\ref{emhd}) almost surely. Recall that system (\ref{emhd}) is critical in 2D for $\alpha=\frac32$ and in 3D for $\alpha=\frac74$. Hence, above the critical value of $\alpha$, regularity of the solution can be established by a standard bootstrapping argument. The proof of uniqueness is presented in the Appendix.
\par{\raggedleft$\Box$\par}
The remaining part of the paper is devoted to the proof of Theorem \ref{thm-H}. The first step is to establish estimates on the linear part $B_{f^\omega}$ such that the assumptions of the theorem are satisfied. This will be the content of Section \ref{sec-linear}. The crucial idea of adopting randomized initial data is revealed in this part. In fact, although the initial data $f$ is merely in $\mathcal H^{-s}$ for $s>0$, the free evolution of the randomized data $f^\omega$ has almost surely improved $L^p$ estimates. As a consequence, we are able to establish suitable a priori estimates for $H$ in Section \ref{sec-est}. Then in Section \ref{sec-galerkin} we construct Galerkin approximating solutions for (\ref{H}) by standard arguments, for instance see \cite{CF, DG}, and pass to a limit by applying the a priori estimates.
\section{The linear equation with randomized initial data}\label{sec-linear}
We consider the linear equation with randomized initial data \begin{equation}\label{eq-lin} \begin{split} B_t+(-\Delta)^\alpha B=&\ 0,\\ B(x,0)=&\ f^\omega, \end{split} \end{equation} and establish some a priori estimates for its solution $B_{f^\omega}=e^{-t(-\Delta)^\alpha}f^\omega$.
The following lemma concerns deterministic estimates. \begin{Lemma}\label{le-lin1} Let $s\geq 0$ and $f^\omega\in \mathcal H^{-s}(\mathbb T^n)$. Then the estimate \begin{equation}\label{est-lin1}
\|\nabla^m B_{f^\omega}\|_{L^2(\mathbb T^n)}\lesssim \left(1+t^{-\frac{m+s}{2\alpha}}\right)\|f\|_{\mathcal H^{-s}(\mathbb T^n)}, \end{equation} holds for any nonnegative integer $m$ and $\alpha>0$; and \begin{equation}\label{est-lin2}
\|\nabla^m B_{f^\omega}\|_{L^\infty(\mathbb T^n)}\lesssim \left(\max\{t^{-\frac12}, t^{-\frac{2m+n+2s}{2\alpha}}\}\right)^{\frac12}\|f\|_{\mathcal H^{-s}(\mathbb T^n)} \end{equation} holds for $m\geq 0$, $\alpha>0$ and $2m+n\geq \alpha$. \end{Lemma} \textbf{Proof:\ } Note that $y^ae^{-y}\leq C$ for $a\geq 0$ and $y\geq 0$. By Plancherel's theorem we deduce \begin{equation}\notag \begin{split}
\|\nabla^m B_{f^\omega}(\cdot, t)\|_{L^2(\mathbb T^n)}\sim&\ \||\xi|^m e^{-|\xi|^{2\alpha}t}\widehat{f^\omega}(\xi)\|_{\ell^2_{\xi}}\\
\sim&\ \|t^{-\frac{m+s}{2\alpha}}(|\xi|^{2\alpha}t)^{\frac{m+s}{2\alpha}} e^{-|\xi|^{2\alpha}t}|\xi|^{-s}\widehat{f^\omega}(\xi)\|_{\ell^2_{\xi}}\\
\lesssim&\ \left(1+t^{-\frac{m+s}{2\alpha}}\right)\|f\|_{\mathcal H^{-s}(\mathbb T^n)} \end{split} \end{equation} which verifies (\ref{est-lin1}).
In order to show (\ref{est-lin2}), denote \begin{equation}\notag I= \int_0^\infty (1+\rho^2)^s\rho^{2m}e^{-2\rho^{2\alpha}t} \rho^{n-1}\, d\rho. \end{equation} Applying Fourier transform on $\mathbb T^n$ and H\"older's inequality we have \begin{equation}\label{est-i0} \begin{split}
|\nabla^m B_{f^\omega}(x,t)|\leq& \sum_{\xi} |\xi|^m e^{-|\xi|^{2\alpha}t}|\widehat{f^\omega}(\xi)|\\
\lesssim &\ \|f\|_{\mathcal H^{-s}(\mathbb T^n)}\left(\sum_{\xi}(1+|\xi|^2)^{s}|\xi|^{2m} e^{-2|\xi|^{2\alpha}t}\right)^{\frac12}\\
\lesssim &\ \|f\|_{\mathcal H^{-s}(\mathbb T^n)} I^{\frac12}. \end{split} \end{equation} The task now is to estimate the integral $I$. Changing variable $y=\rho^\alpha\sqrt t$ in the integral we can write \begin{equation}\notag \begin{split} I=&\ \int_0^\infty\frac{1}{\alpha}\left(1+\left(\frac{y^2}{t}\right)^{\frac1\alpha}\right)^s\left(\frac{y}{\sqrt t}\right)^{\frac{2m+n-\alpha}{\alpha}}\frac{1}{\sqrt t}e^{-2y^2}\, dy\\ =& \int_{0}^{\sqrt t} \cdot\cdot\cdot \, dy+\int_{\sqrt t}^\infty \cdot\cdot\cdot \, dy\\ =:&\ I_1+I_2 \end{split} \end{equation} where the integrand of $I_1$ and $I_2$ is the same as that of $I$. For $0\leq y\leq \sqrt t$, we have \begin{equation}\notag 1+\left(\frac{y^2}{t}\right)^{\frac1\alpha}\leq C, \ \ \ \left(\frac{y}{\sqrt t}\right)^{\frac{2m+n-\alpha}{\alpha}}\leq 1 \end{equation} since $2m+n\geq \alpha>0$. Thus the integral $I_1$ satisfies \begin{equation}\label{est-i1} \begin{split} I_1\lesssim & \int_0^{\sqrt t}\frac{1}{\sqrt t}e^{-2y^2}\, dy\\ \lesssim &\ t^{-\frac12} \int_0^{\sqrt t}e^{-2y^2}\, dy\\ \lesssim &\ t^{-\frac12}. \end{split} \end{equation} While for $y>\sqrt t$, it follows \begin{equation}\notag \left(1+\left(\frac{y^2}{t}\right)^{\frac1\alpha}\right)^s\lesssim y^{\frac{2s}{\alpha}}t^{-\frac{s}{\alpha}} \end{equation} and hence \begin{equation}\label{est-i2} \begin{split} I_2\lesssim & \int_{\sqrt t}^\infty y^{\frac{2s}{\alpha}}t^{-\frac{s}{\alpha}}\left(\frac{y}{\sqrt t}\right)^{\frac{2m+n-\alpha}{\alpha}}\frac{1}{\sqrt t}e^{-2y^2}\, dy\\ \lesssim &\ t^{-\frac{2m+n+2s}{2\alpha}} \int_{\sqrt t}^\infty y^{\frac{2m+n-\alpha+2s}{\alpha}}e^{-2y^2}\, dy\\ \lesssim &\ t^{-\frac{2m+n+2s}{2\alpha}}. \end{split} \end{equation} Therefore estimate (\ref{est-lin2}) follows from (\ref{est-i0}), (\ref{est-i1}) and (\ref{est-i2}).
\par{\raggedleft$\Box$\par}
Probabilistic estimates are obtained as well. Namely, \begin{Lemma}\label{le-lin2} Fix $r\geq p\geq q\geq 2$, $\sigma\geq 0$ and $\gamma\in\mathbb R$ such that $q(\frac{\sigma+s}{2\alpha}-\gamma)<1$. Then for any $T>0$ and $s\geq0$ there exists $C_T(p,q,r,\sigma,\gamma,s)>0$ such that for any $f^\omega\in \mathcal H^{-s}(\mathbb T^n)$ \begin{equation}\label{est-lin3}
\|t^\gamma (-\Delta)^{\frac{\sigma}{2}} B_{f^\omega}\|_{L^r(\Omega; L^q([0,T]; L^p(\mathbb T^n)))}\leq C_T\|f\|_{\mathcal H^{-s}(\mathbb T^n)}. \end{equation} Denote \begin{equation}\label{est-lin4}
E_{\lambda, T, f, \sigma, p}=\{\omega\in\Omega: \|t^\gamma (-\Delta)^{\frac{\sigma}{2}} B_{f^\omega}\|_{L^q([0,T]; L^p(\mathbb T^n))}\geq \lambda\}. \end{equation} Then there exists $c_1>0$ and $c_2>0$ such that \begin{equation}\label{est-lin5}
P(E_{\lambda, T, f, \sigma, p})\leq c_1 \exp\left\{-\frac{c_2\lambda^2}{C_T\|f\|_{\mathcal H^{-s}}^2}\right\} \ \ \ \forall \lambda>0 \ \ \ \forall f^\omega\in \mathcal H^{-s}(\mathbb T^n). \end{equation} \end{Lemma}
\textbf{Proof:\ } Denote $\langle-\Delta\rangle$ by the operator with Fourier symbol $\widehat{\langle-\Delta\rangle}=1+|\xi|^2$.
We express the term in Fourier representation \begin{equation}\label{est-plin1} \begin{split} &t^\gamma (-\Delta)^{\frac{\sigma}{2}} B_{f^\omega}\\ =&\ t^\gamma (-\Delta)^{\frac{\sigma}{2}} \langle-\Delta\rangle^{\frac s2} e^{-t(-\Delta)^\alpha}\langle-\Delta\rangle^{-\frac s2} f^\omega\\
=&\ t^\gamma \sum_{\xi\in \mathbb Z^n}|\xi|^\sigma (1+|\xi|^2)^{\frac{s}2} e^{-t|\xi|^{2\alpha}} (1+|\xi|^2)^{-\frac{s}2}\widehat{f^\omega}(\xi) e_{\xi}(x)\\
\leq &\ t^\gamma \sum_{\xi\in \mathbb Z^n, |\xi|\leq 2}|\xi|^\sigma (1+|\xi|^2)^{\frac{s}2} e^{-t|\xi|^{2\alpha}} (1+|\xi|^2)^{-\frac{s}2}\widehat{f^\omega}(\xi) e_{\xi}(x)\\
&+t^\gamma \sum_{\xi\in \mathbb Z^n, |\xi|> 2}|\xi|^\sigma (1+|\xi|^2)^{\frac{s}2} e^{-t|\xi|^{2\alpha}} (1+|\xi|^2)^{-\frac{s}2}\widehat{f^\omega}(\xi) e_{\xi}(x)\\ =:&\ J_1+J_2. \end{split} \end{equation} Using again the fact that $y^ae^{-y}\leq C$ for $a\geq 0$ and $y>0$, we have \begin{equation}\notag \begin{split}
J_1\lesssim&\ t^{\gamma-\frac{\sigma}{2\alpha}} \sum_{\xi\in \mathbb Z^n}(t|\xi|^{2\alpha})^{\frac{\sigma}{2\alpha}} e^{-t|\xi|^{2\alpha}} (1+|\xi|^2)^{-\frac{s}2}\widehat{f^\omega}(\xi) e_{\xi}(x)\\
\lesssim&\ t^{\gamma-\frac{\sigma}{2\alpha}} \sum_{\xi\in \mathbb Z^n}(1+|\xi|^2)^{-\frac{s}2}\widehat{f^\omega}(\xi) e_{\xi}(x) \end{split} \end{equation} and \begin{equation}\notag \begin{split}
J_2\lesssim&\ t^{\gamma-\frac{\sigma+s}{2\alpha}} \sum_{\xi\in \mathbb Z^n}(t|\xi|^{2\alpha})^{\frac{\sigma+s}{2\alpha}} e^{-t|\xi|^{2\alpha}} (1+|\xi|^2)^{-\frac{s}2}\widehat{f^\omega}(\xi) e_{\xi}(x)\\
\lesssim&\ t^{\gamma-\frac{\sigma+s}{2\alpha}} \sum_{\xi\in \mathbb Z^n}(1+|\xi|^2)^{-\frac{s}2}\widehat{f^\omega}(\xi) e_{\xi}(x). \end{split} \end{equation}
Denote $h=\langle-\Delta\rangle^{-\frac{s}{2}}f$ and hence $\widehat{h^\omega} (\xi)=(1+|\xi|^2)^{-\frac{s}2}\widehat{f^\omega}(\xi)$ in view of the randomization (\ref{R}). We estimate the norm of $J_1$ by applying Minkowski's inequality, \begin{equation}\notag \begin{split}
\|J_1\|_{L^r(\Omega; L^q([0,T]; L^p(\mathbb T^n)))}\leq &\ C \| t^{\gamma-\frac{\sigma}{2\alpha}} \sum_{\xi\in \mathbb Z^n}\widehat{h^\omega}(\xi) e_{\xi}(x)\|_{L^r(\Omega; L^q([0,T]; L^p(\mathbb T^n)))}\\
\leq &\ C_r \left\|\left(\sum_{\xi\in \mathbb Z^n}\left|t^{\gamma-\frac{\sigma}{2\alpha}}\widehat{h}(\xi) e_{\xi}(x)\right|^2\right)^{\frac12}\right\|_{L^q([0,T]; L^p(\mathbb T^n))}\\
=&\ C_r \left\|\sum_{\xi\in \mathbb Z^n}\left|t^{\gamma-\frac{\sigma}{2\alpha}}\widehat{h}(\xi) e_{\xi}(x)\right|^2\right\|^{\frac12}_{L^{\frac{q}2}([0,T]; L^{\frac{p}2}(\mathbb T^n))}\\
\leq &\ C_r\left(\int_0^T t^{\frac{q}{2}(2\gamma-\frac{\sigma}{\alpha})}\, dt\right)^{\frac{1}{q}} \left\|\sum_{\xi\in \mathbb Z^n}\left|\widehat{h}(\xi) e_{\xi}(x)\right|^2\right\|^{\frac12}_{L^{\frac{p}2}(\mathbb T^n)}.\\ \end{split} \end{equation} We further apply Lemma \ref{BT} to deduce \begin{equation}\label{est-j1} \begin{split}
\|J_1\|_{L^r(\Omega; L^q([0,T]; L^p(\mathbb T^n)))}
\leq &\ C_{r,p}\left(\int_0^T t^{\frac{q}{2}(2\gamma-\frac{\sigma}{\alpha})}\, dt\right)^{\frac{1}{q}} \left(\sum_{\xi\in \mathbb Z^n}\left|\widehat{h}(\xi)\right|^4\right)^{\frac14}\\
\leq &\ C_{r,p,q}T^{\gamma-\frac{\sigma}{2\alpha}+\frac1q} \left(\sum_{\xi\in \mathbb Z^n}\left|\widehat{h}(\xi)\right|^2\right)^{\frac12}\\
\leq &\ C_{T, r,p,q, \sigma, \gamma, \alpha}\|f\|_{\mathcal H^{-s}(\mathbb T^n)} \end{split} \end{equation} where we need to require $q(\frac{\sigma}{2\alpha}-\gamma)<1$ for the time integral to be finite. Analogously we have \begin{equation}\label{est-j2}
\|J_2\|_{L^r(\Omega; L^q([0,T]; L^p(\mathbb T^n)))}
\leq C_{T, r,p,q}\|f\|_{\mathcal H^{-s}(\mathbb T^n)} \end{equation} for $q(\frac{\sigma+s}{2\alpha}-\gamma)<1$. Thus the estimate (\ref{est-lin3}) follows from (\ref{est-plin1}), (\ref{est-j1}) and (\ref{est-j2}).
In the end, the estimate (\ref{est-lin5}) follows from Bienaym\'e-Tchebishev's inequality (see Proposition 4.4 of \cite{BT2}) and Lemma \ref{BT}.
\par{\raggedleft$\Box$\par}
The following maximal regularity result for the free evolution equation is needed to establish energy estimate for $H$ in Section \ref{sec-est}. \begin{Lemma}\label{le-heat2} Let $T>0$ and $f\in L^2((0,T); L^2(\mathbb T^n))$. Denote \[g(x,t)=\int_0^t e^{-(t-s)(-\Delta)^\alpha}(-\Delta)^\alpha f (x, s)\, ds.\] Then we have for any $\alpha>0$
\[\|g\|_{L^2((0,T); L^2(\mathbb T^n))}\lesssim \|f\|_{L^2((0,T); L^2(\mathbb T^n))}.\] \end{Lemma} \textbf{Proof:\ } The estimate for $\alpha=1$ is classical, for instance see Theorem 7.3 of \cite{LR}. We follow the lines of \cite{LR} to prove the estimate for general $\alpha>0$.
Let $G(x)$ be the kernel function of the operator $e^{-(-\Delta)^\alpha}$,
\[G(x)=(2\pi)^{-\frac{n}{2}}\int_{\mathbb T^n} e^{ix\cdot\xi} e^{-|\xi|^{2\alpha}}\, d\xi\] and $G(x, t)$ the rescaled function \[G(x, t)=t^{-\frac{n}{2\alpha}} G\left(\frac{x}{t^{1/2\alpha}}\right), \ \ \ t>0.\] We extend $G(x, t)$ to the entire time line by setting $G(x, t)=0$ for $t<0$, and extend $f$ by zero outside $(0,T)$. We then can write $g(x,t)$ as \begin{equation}\notag \begin{split} g(x,t)=& \int_{-\infty}^{\infty} \int_{\mathbb T^n}\frac{1}{t-s} G(x-y, t-s) f(y,s)\, dy ds \\ =& \left(\frac1{t} G(x,t)\right)* f(x,t) \end{split} \end{equation} where the convolution is in both $x$ and $t$. Note that the Fourier transform of $\frac1{t} G(x,t)$ in both $x$ and $t$ is given by \begin{equation}\notag
 \mathcal F\left(\frac1{t} G\right)(\xi, \tau)=-\int_0^\infty |\xi|^{2\alpha} e^{-t|\xi|^{2\alpha}} e^{-i t\tau}\, dt=-\frac{|\xi|^{2\alpha}}{|\xi|^{2\alpha}+i\tau}. \end{equation} We observe that \begin{equation}\notag
 \left|\mathcal F\left(\frac1{t} G\right)(\xi, \tau)\right|\leq 1. \end{equation} Since the convolution kernel has a bounded space-time Fourier multiplier, Plancherel's theorem applied in both $x$ and $t$ yields
\[\|g\|_{L^2((0,T); L^2(\mathbb T^n))}\lesssim \|f\|_{L^2((0,T); L^2(\mathbb T^n))}.\]
\par{\raggedleft$\Box$\par}
We also need the following estimate. \begin{Lemma}\label{le-heat0} Let $T>0$ and $f\in L^2((0,T); L^2(\mathbb T^n))$. Denote \[g(x,t)=\int_0^t e^{-(t-s)(-\Delta)^\alpha}\nabla^m f (x, s)\, ds.\] Then we have for $2\alpha> m$
\[\|g(t)\|_{L^2(\mathbb T^n)}\lesssim \|f\|_{L^2((0,T); L^2(\mathbb T^n))} \ \ \ \ \forall t>0.\] \end{Lemma} \textbf{Proof:\ } For any $\varphi\in L^2(\mathbb T^n)$, using integration by parts and H\"older's inequality we have \begin{equation}\notag \begin{split}
\left|\langle g(t), \varphi\rangle \right|=&\ \left| \int_0^t \langle f(s), e^{-(t-s)(-\Delta)^\alpha}\nabla^m \varphi \rangle \, ds\right|\\
 \lesssim& \left(\int_0^t\int_{\mathbb T^n} |f|^2\, dxds\right)^{\frac12}\left(\int_0^t\int_{\mathbb T^n} \left|e^{-(t-s)(-\Delta)^\alpha}\nabla^m \varphi \right|^2\, dxds\right)^{\frac12}\\
\lesssim& \|f\|_{L^2((0,T); L^2(\mathbb T^n))} \| e^{-t(-\Delta)^\alpha}\nabla^m \varphi \|_{L^2((0,T); L^2(\mathbb T^n))}. \end{split} \end{equation} In view of Plancherel's theorem, we deduce \begin{equation}\notag \begin{split}
 \| e^{-t(-\Delta)^\alpha}\nabla^m \varphi \|^2_{ L^2(\mathbb T^n)}=&\ (2\pi)^{-n} \int_{\mathbb T^n} e^{-2t|\xi|^{2\alpha}}|\xi|^{2m}\left|\widehat\varphi(\xi)\right|^2\, d\xi\\
 \lesssim &\ \frac{1}{t^{\frac{m}{2\alpha}}} \|\varphi\|^2_{L^2(\mathbb T^n)}
\| e^{-t(-\Delta)^\alpha}\nabla^m \varphi \|^2_{L^2((0,T); L^2(\mathbb T^n))}
 \lesssim &\ \|\varphi\|^2_{L^2(\mathbb T^n)}\int_0^T\frac{1}{s^{\frac{m}{2\alpha}}} \, ds \lesssim \|\varphi\|^2_{L^2(\mathbb T^n)}.
\left|\langle g(t), \varphi\rangle \right|
\lesssim \|f\|_{L^2((0,T); L^2(\mathbb T^n))} \|\varphi\|_{L^2(\mathbb T^n)} \ \ \forall \varphi\in L^2(\mathbb T^n) \end{equation} which concludes the proof of the lemma.
\par{\raggedleft$\Box$\par}
We introduce one more probabilistic estimate for the free evolution $B_{f^\omega}$ in each case of 2D and 3D.
\begin{Lemma}\label{le-heat3} Let $n=2$, $\alpha\in [\frac43, \frac32]$ and $\beta=3-2\alpha$. Let $0<s<2\alpha-\frac52+2\alpha\gamma$ for some $\gamma<0$ such that $2\alpha-\frac52+2\alpha\gamma>0$. Let $ B_{f^\omega}(\alpha, \beta, s, \gamma, T)$ be the sum of the norms defined in (\ref{B-norms}). There exists a set $\Sigma\subset \Omega$ with $P(\Sigma)=1$ such that for any $\omega\in\Sigma$ we can find a constant $\lambda>0$ such that \begin{equation}\notag
B_{f^\omega}(\alpha, \beta, s, \gamma, T)\leq \lambda. \end{equation} \end{Lemma} \textbf{Proof:\ } For any $\lambda>0$ denote
\begin{equation}\notag E(\lambda):=E(\lambda, s, \alpha, f, \gamma, T)=\left\{\omega\in\Omega | B_{f^\omega}(\alpha, \beta, s, \gamma, T)>\lambda \right\}. \end{equation} For any $j\geq 0$ we also denote $\lambda_j=2^j$ and $E_j=E(\lambda_j)$. Note that $E_{j+1}\subset E_j$. Take \[\Sigma=\cup_{j\geq 0} E_j^{c}\subset \Omega.\] One can check that the parameters satisfy the condition of Lemma \ref{le-lin2}. Hence it follows from Lemma \ref{le-lin2} that
\begin{equation}\notag P(E_j)\leq c_1 \exp\left\{-\frac{c_2\lambda_j^2}{C_T\|f\|_{\mathcal H^{-s}}^2}\right\} \ \ \ \forall j\geq0 \ \ \ \forall f^\omega\in \mathcal H^{-s}(\mathbb T^2). \end{equation} Therefore we deduce \begin{equation}\notag \begin{split} 1\geq &\ P(\Sigma)=1-P(\Sigma^c)=1-P(\cap E_j)=1-P(\lim_{j\to\infty} E_j)\\
 \geq &\ 1-\lim_{j\to\infty} c_1 \exp\left\{-\frac{c_2\lambda_j^2}{C_T\|f\|_{\mathcal H^{-s}}^2}\right\}=1 \end{split} \end{equation} which immediately gives $P(\Sigma)=1$. By definition of $\Sigma$, we see that for any $\omega\in\Sigma$ there exists $j\geq 0$ such that $\omega\in E_j^c$, i.e. \[B_{f^\omega}(\alpha, \beta, s, \gamma, T)\leq \lambda_j.\]
\par{\raggedleft$\Box$\par}
\begin{Lemma}\label{le-heat4} Let $n=3$, $\alpha\in (\frac{11}{8}, \frac{7}{4}]$ and $\beta=\frac72-2\alpha$. Let $0<s<2\alpha-\frac{11}{4}+2\alpha\gamma$ for some $\gamma<0$ such that $2\alpha-\frac{11}{4}+2\alpha\gamma>0$.
There exists a set $\Sigma\subset \Omega$ with $P(\Sigma)=1$ such that for any $\omega\in\Sigma$ we can find a constant $\lambda>0$ such that \begin{equation}\notag B_{f^\omega}(\alpha, \beta, s, \gamma, T)\leq \lambda. \end{equation} \end{Lemma}
\textbf{Proof:\ } We observe that the parameters specified in the lemma satisfy the assumptions of Lemma \ref{le-lin2}. The proof follows from an analogous argument as that of Lemma \ref{le-heat3}.
\par{\raggedleft$\Box$\par}
\section{A priori estimates for $H$}\label{sec-est}
In this section we establish a priori estimates for the nonlinear part $H$ which solves the Cauchy problem (\ref{H}). Notice that $B_{f^\omega}$ appears in the quadratic nonlinear terms of (\ref{H}) and the estimates of $B_{f^\omega}$ in (\ref{est-lin1}) and (\ref{est-lin2}) exhibit a singularity at $t=0$. To avoid this singularity, we choose to perform the estimates near time zero by working with the integral form of (\ref{H}). Away from time zero, the estimates can be obtained from (\ref{H}). Therefore, before starting the estimates we introduce the mild formulation of (\ref{H}) and show that the two formulations are equivalent under appropriate assumptions.
Denote \begin{equation}\notag \begin{split} \tilde Q(x,t)=&\ \nabla\times\left[\mathcal B(H+B_{f^\omega}, H+B_{f^\omega})\right],\\
Q(x,t)=&\ (H+B_{f^\omega})\otimes (H+B_{f^\omega}) (x, t). \end{split} \end{equation} Since $\nabla \cdot H=0$ and $\nabla\cdot B_{f^\omega}=0$, we have, by standard vector identities, \[\tilde Q(x,t)= \nabla\times\nabla\cdot Q(x,t).\] Thus we can write \begin{equation}\label{H2} \begin{split} H(x,t)=&-\int_0^t e^{-(t-s)(-\Delta)^\alpha} \tilde Q(x, s)\, ds\\ =&-\int_0^t e^{-(t-s)(-\Delta)^\alpha} \nabla\times\nabla\cdot Q(x, s)\, ds. \end{split} \end{equation}
\begin{Lemma}\label{le-equiv} Assume $B_{f^\omega}$ satisfies the assumptions (\ref{ass-bf1}) and (\ref{ass-bf2}). Then $H$ is a weak solution to (\ref{H}) if and only if $H\in L^\infty((0,T); \mathcal H(\mathbb T^n))\cap L^2((0,T); \mathcal V_\alpha (\mathbb T^n))$ is a solution to (\ref{H2}). \end{Lemma} \textbf{Proof:\ } We follow the lines of \cite{LR}. First we assume $H\in L^\infty((0,T); \mathcal H(\mathbb T^n))\cap L^2((0,T); \mathcal V_\alpha (\mathbb T^n))$ is a solution to (\ref{H2}). Denote \begin{equation}\label{eq-M} \mathcal M (H)(x,t)=-\int_0^t e^{-(t-s)(-\Delta)^\alpha} \nabla\times\nabla\cdot Q(x, s)\, ds. \end{equation} Thanks to the assumptions (\ref{ass-bf1}) and (\ref{ass-bf2}) and the fact $H\in L^\infty((0,T); \mathcal H(\mathbb T^n))\cap L^2((0,T); \mathcal V_\alpha (\mathbb T^n))$ we have $Q\in L^1((0,T); L^1(\mathbb T^n))$ and hence \[\nabla\times\nabla\cdot Q \in L^1((0,T); \mathcal D').\] It then follows \begin{equation}\notag e^{-(t-s)(-\Delta)^\alpha} \nabla\times\nabla\cdot Q\in L^1((0,T); C^\infty(\mathbb T^n)). \end{equation} Thus by Leibniz rule we have \begin{equation}\notag \partial_t \mathcal M (H)(x,t)=-(-\Delta)^\alpha \mathcal M (H)(x,t)- \nabla\times\nabla\cdot Q \end{equation} in the distributional sense. On the other hand, we see \[\lim_{t\to 0^+} H(x,t)=0. \] Therefore $H=\mathcal M(H)$ is a weak solution of (\ref{H}).
Conversely, we assume $H$ is a weak solution of (\ref{H}). Define $\mathcal M(H)(x,t)$ as in (\ref{eq-M}). Applying the estimates from Proposition \ref{prop} below near time zero we obtain \[\mathcal M\in L^\infty((0,T); \mathcal H(\mathbb T^n))\cap L^2((0,T); \mathcal V_\alpha(\mathbb T^n)),\] \[\frac{d\mathcal M}{dt}\in L^1((0,T); \mathcal V'_\alpha(\mathbb T^n)).\] Hence we deduce by Leibniz rule again \begin{equation}\notag \begin{split} \partial_t(\mathcal M-H)=&-(-\Delta)^\alpha \mathcal M (H)- \nabla\times\nabla\cdot Q+(-\Delta)^\alpha H+ \nabla\times\nabla\cdot Q\\ =&-(-\Delta)^\alpha (\mathcal M (H)-H) \end{split} \end{equation} which is satisfied in the distributional sense. Note that \[\lim_{t\to 0^+}(\mathcal M (H)(t)-H(t))=0.\] It then follows from the uniqueness of the generalized heat flow that $\mathcal M=H$ and hence $H$ is a weak solution of (\ref{H2}).
\par{\raggedleft$\Box$\par}
Denote the basic energy functional \begin{equation}\notag
\mathcal E(H)(t)=\|H(t)\|_{L^2(\mathbb T^n)}^2+2\int_0^t \int_{\mathbb T^n} |\nabla^\alpha H(s)|^2\, dx\, ds, \end{equation} and the higher order energy functional for some $\beta$ to be determined \begin{equation}\label{E2} \begin{split} \mathcal E_{\alpha}(H)(t)=&\ \mathcal E(H)(t)+\mathcal E((-\Delta)^{\frac{\beta}{2}}H)(t)\\
=&\ \|H(t)\|_{L^2(\mathbb T^n)}^2+2\int_0^t \int_{\mathbb T^n} |\nabla^\alpha H(s)|^2\, dx\, ds\\
&+\|H(t)\|_{\mathcal H^{\beta}(\mathbb T^n)}^2+2\int_0^t \int_{\mathbb T^n} |\nabla^{\alpha+\beta} H(s)|^2\, dx\, ds. \end{split} \end{equation}
\begin{Proposition}\label{prop} Assume $B_{f^\omega}$ satisfies the conditions (\ref{ass-bf1}) and (\ref{ass-bf2}). Let $H\in L^\infty((0,T); \mathcal H(\mathbb T^n))\cap L^2((0,T); \mathcal V_\alpha(\mathbb T^n))$ be a solution to (\ref{H}). Then there exists a constant $C(T, \lambda, s)$ such that \begin{equation}\label{ap-est1} \mathcal E(H)(t)\leq C(T, \lambda, s) \ \ \ \mbox{for all} \ \ t\in[0,T], \end{equation} and \begin{equation}\label{ap-est2} \begin{split}
\left\|\frac{d}{dt}H\right\|_{L^2((0,T); \mathcal H^{-2\alpha}(\mathbb T^2))}\leq &\ C(T, \lambda, s), \\
\left\|\frac{d}{dt}H\right\|_{L^{\frac{4\alpha}3}((0,T); \mathcal H^{-2\alpha}(\mathbb T^3))}\leq &\ C(T, \lambda, s). \end{split} \end{equation} \end{Proposition} \textbf{Proof:\ } As discussed earlier, in order to obtain the estimate (\ref{ap-est1}) we split the time interval into two regimes $[0, t_0]$ and $[t_0, T]$ for a small time $t_0>0$ to be determined later. On $[0,t_0]$ we work with the integral form (\ref{H2}) and take advantage of the fact that $H(x, 0)=0$; while on $[t_0, T]$ we work with the differential form (\ref{H}) since no time singularity is present on this interval. Achieving the estimates on $[0,t_0]$ turns out to be more challenging. We apply the higher order energy method to overcome the obstruction by estimating the energy functional $\mathcal E_{\alpha}$ instead of $\mathcal E$. We choose to treat the 2D and 3D cases separately. Thus the proof consists of four parts: (i) estimate of $\mathcal E_\alpha$ on $[0,t_0]$ in 2D; (ii) estimate of $\mathcal E_\alpha$ on $[0,t_0]$ in 3D; (iii) estimate of $\mathcal E$ on $[t_0, T]$ for any spatial dimension; (iv) estimate of $\frac{d}{dt}H$.
{\textbf{(i) Estimates on $[0,t_0]$ in 2D.}} By Lemma \ref{le-heat0} we have for $\alpha>1$ and any $0<t\leq t_0$ \begin{equation}\label{L22} \begin{split}
\|H(t)\|_{L^2(\mathbb T^2)}\lesssim &\ \|Q\|_{L^2((0, t_0); L^{2}(\mathbb T^2))}, \\
\|H(t)\|_{\mathcal H^{\beta}(\mathbb T^2)}\lesssim &\ \|Q\|_{L^2((0, t_0); \mathcal H^{\beta}(\mathbb T^2))}. \\
\end{split} \end{equation} In view of the second line of (\ref{H2}) we have \begin{equation}\notag \begin{split} (-\Delta)^{\frac{\alpha}{2}}H(x,t)=&-\int_0^t e^{-(t-s)(-\Delta)^\alpha}(-\Delta)^{\alpha} \nabla\times\nabla\cdot (-\Delta)^{-\frac{\alpha}{2}}Q(x, s)\, ds\\
(-\Delta)^{\frac{\alpha+\beta}{2}}H(x,t)=&-\int_0^t e^{-(t-s)(-\Delta)^\alpha}(-\Delta)^{\alpha} \nabla\times\nabla\cdot (-\Delta)^{\frac{\beta-\alpha}{2}}Q(x, s)\, ds\\ \end{split} \end{equation} and hence we have from Lemma \ref{le-heat2} \begin{equation}\label{Lpq2} \begin{split}
\|H\|_{L^2((0,t_0); \mathcal H^\alpha(\mathbb T^2))} \lesssim&\ \|Q\|_{L^2((0, t_0); \mathcal H^{2-\alpha}(\mathbb T^2))},\\
\|H\|_{L^2((0, t_0); \mathcal H^{\alpha+\beta}(\mathbb T^2))} \lesssim&\ \|Q\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^2))}. \end{split} \end{equation}
In view of (\ref{L22}) and (\ref{Lpq2}), we need to estimate $\|Q\|_{L^2((0, t_0); L^{2}(\mathbb T^2))}$, $\|Q\|_{L^2((0, t_0); \mathcal H^{2-\alpha}(\mathbb T^2))}$, $\|Q\|_{L^2((0, t_0); \mathcal H^{\beta}(\mathbb T^2))}$, and $\|Q\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^2))}$. Under the restrictions $1<\alpha\leq \frac32$ and $\beta\geq0$, we have $2-\alpha\leq 2-\alpha+\beta$ and $\beta\leq 2-\alpha+\beta$. Thus it is sufficient to estimate the last one, i.e.\ $\|Q\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^2))}$.
Note that \begin{equation}\label{est-q1} \begin{split}
&\|Q\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^2))}\\
\lesssim&\ \|H\otimes H\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^2))}+\|H\otimes B_{f^\omega}\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^2))}\\
&+\|B_{f^\omega}\otimes B_{f^\omega}\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^2))}. \end{split} \end{equation} It follows from H\"older's inequality that if $\alpha-\beta\geq 1$
\begin{equation}\notag \begin{split}
&\|H\otimes H\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^2))}\\
\lesssim &\ \|H\nabla^{2-\alpha+\beta}H\|_{L^2([0,t_0];L^{2}(\mathbb T^2))} \\
\lesssim &\ \|H\|_{L^4([0,t_0];L^{\frac{4}{\alpha-\beta-1}}(\mathbb T^2))} \|\nabla^{2-\alpha+\beta} H\|_{L^4([0,t_0];L^{\frac{4}{3-\alpha+\beta}}(\mathbb T^2))}. \end{split} \end{equation} Since by Sobolev embedding
\[\|H\|_{L^4([0,t_0];L^{\frac{4}{\alpha-\beta-1}}(\mathbb T^2))}\lesssim \|\nabla^{2-\alpha+\beta} H\|_{L^4([0,t_0];L^{\frac{4}{3-\alpha+\beta}}(\mathbb T^2))},\] we only estimate the latter for $1\leq \alpha-\beta\leq 3$: \begin{equation}\notag \begin{split}
&\|\nabla^{2-\alpha+\beta} H\|_{L^4([0,t_0];L^{\frac{4}{3-\alpha+\beta}}(\mathbb T^2))}\\
= & \left(\int_0^{t_0}\|\nabla^{2-\alpha+\beta}H\|^4_{L^{\frac{4}{3-\alpha+\beta}}(\mathbb T^2)} \, dt\right)^{\frac14}\\
\lesssim & \left( \int_0^{t_0}\|\nabla^{\beta}H\|^2_{L^{2}(\mathbb T^2)}\|\nabla^{m+\beta}H\|^2_{L^{2}(\mathbb T^2)} \, dt \right)^{\frac14}\\
\lesssim &\ \left(\sup_{t\in(0, t_0)}\|\nabla^{\beta} H(t)\|^2_{L^2_x}\right)^{\frac14}\left(\int_0^{t_0} \|\nabla^{\alpha+\beta}H\|_{L^2_x}^2\, dt\right)^{\frac14}\\ \lesssim &\ \mathcal E_{\alpha}^{\frac12}(H)(t_0) \end{split} \end{equation} with $m=3-\alpha-\beta\leq \alpha$ provided $\beta\geq 3-2\alpha$. Hence, if $1\leq \alpha-\beta\leq 3$ and $\beta\geq 3-2\alpha$ we have \begin{equation}\label{est-q2}
\|H\otimes H\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^2))} \lesssim \mathcal E_{\alpha}(H)(t_0). \end{equation} The conditions $1\leq \alpha-\beta\leq 3$ and $\beta\geq 3-2\alpha$, combined with the standing restriction $\beta\geq0$, imply \[\frac43\leq \alpha\leq \frac32.\] To optimize the final result, we take the smallest admissible value $\beta=3-2\alpha$ from now on.
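Explicitly, with $\beta=3-2\alpha$ the two active constraints read \[\alpha-\beta=3\alpha-3\geq 1 \ \Leftrightarrow\ \alpha\geq\frac43, \ \ \ \beta=3-2\alpha\geq 0 \ \Leftrightarrow\ \alpha\leq\frac32,\] while $\alpha-\beta\leq 3$ only requires $\alpha\leq 2$ and is therefore automatic.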
We continue to estimate \begin{equation}\notag \begin{split}
&\|H\otimes B_{f^\omega}\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^2))}\\
\lesssim&\ \|H \nabla^{2-\alpha+\beta} B_{f^\omega}\|_{L^2((0, t_0); L^{2}(\mathbb T^2))}+\|B_{f^\omega} \nabla^{2-\alpha+\beta}H\|_{L^2((0, t_0); L^{2}(\mathbb T^2))}. \end{split} \end{equation} The first term is estimated as follows by applying H\"older's inequality and (\ref{ass-bf2}) \begin{equation}\notag \begin{split}
&\|H \nabla^{2-\alpha+\beta} B_{f^\omega}\|_{L^2((0, t_0); L^{2}(\mathbb T^2))}\\
\lesssim&\ \|H\|_{L^\infty((0, t_0); L^{p}(\mathbb T^2))}\|\nabla^{2-\alpha+\beta} B_{f^\omega}\|_{L^2((0, t_0); L^{p'}(\mathbb T^2))}\\
\lesssim&\ \|H\|_{L^\infty((0, t_0); \mathcal H^{\beta}(\mathbb T^2))}\|\nabla^{2-\alpha+\beta} B_{f^\omega}\|_{L^2((0, t_0); L^{p'}(\mathbb T^2))}\\ \lesssim&\ \lambda t_0^{-\gamma}\mathcal E_{\alpha}^{\frac12}(H)(t_0) \end{split} \end{equation} with $\frac{1}{p}+\frac{1}{p'}=\frac12$ and $p=2+\epsilon$ such that the Sobolev embedding holds. The second term is estimated as \begin{equation}\notag \begin{split}
&\|B_{f^\omega} \nabla^{2-\alpha+\beta} H\|_{L^2((0, t_0); L^{2}(\mathbb T^2))}\\
\lesssim&\ \|\nabla^{2-\alpha+\beta} H\|_{L^{p}((0, t_0); L^{q}(\mathbb T^2))}\|B_{f^\omega}\|_{L^{p'}((0, t_0); L^{q'}(\mathbb T^2))} \end{split} \end{equation} with $\frac{1}{p}+\frac{1}{p'}=\frac12$, $\frac{1}{q}+\frac{1}{q'}=\frac12$, $p'\leq q'$ and $p\geq q$. By Gagliardo-Nirenberg's inequality we know \begin{equation}\notag
\|\nabla^{2-\alpha+\beta} H\|_{L^q(\mathbb T^2)}\lesssim \|\nabla^{\alpha+\beta} H\|_{L^2(\mathbb T^2)}^{\theta}\|\nabla^{\beta} H\|_{L^2(\mathbb T^2)}^{1-\theta} \end{equation} with $q=\frac{2}{3-\alpha-\alpha\theta}$ and $p\theta=2$. Taking $q=2+\epsilon$ for some small constant $\epsilon>0$, we obtain $p'=\frac{2\alpha}{2\alpha-2-\epsilon}$ for another small constant $\epsilon>0$, and an analogous computation to the one before shows \begin{equation}\notag \begin{split}
&\|\nabla^{2-\alpha+\beta} H\|_{L^{p}((0, t_0); L^{q}(\mathbb T^2))}\\
\lesssim &\ \left(\int_0^{t_0} \|\nabla^{\alpha+\beta} H\|_{L^2(\mathbb T^2)}^2 \|\nabla^{\beta}H\|_{L^2(\mathbb T^2)}^{p-2} \, dt\right)^{\frac1p}\\
\lesssim &\ \left(\sup_{0\leq t\leq t_0}\|\nabla^{\beta}H\|_{L^2(\mathbb T^2)}^2 \right)^{\frac{p-2}{2p}}
\left(\int_0^{t_0} \|\nabla^{\alpha+\beta} H\|_{L^2(\mathbb T^2)}^2 \, dt\right)^{\frac{1}{p}}\\ \lesssim&\ \mathcal E_{\alpha}^{\frac12}(H)(t_0). \end{split} \end{equation} Hence we have \begin{equation}\notag
\|B_{f^\omega} \nabla^{2-\alpha+\beta} H\|_{L^2((0, t_0); L^{2}(\mathbb T^2))} \lesssim \lambda t_0^{-\gamma}\mathcal E_{\alpha}^{\frac12}(H)(t_0). \end{equation} Collecting the estimates above we obtain \begin{equation}\label{est-q3}
\|H\otimes B_{f^\omega}\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^2))} \lesssim \lambda t_0^{-\gamma}\mathcal E_{\alpha}^{\frac12}(H)(t_0). \end{equation}
In the end the condition (\ref{ass-bf2}) again implies \begin{equation}\label{est-q4} \begin{split}
&\|B_{f^\omega}\otimes B_{f^\omega}\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^2))}\\
\lesssim&\ \|B_{f^\omega} \nabla^{2-\alpha+\beta}B_{f^\omega}\|_{L^2((0, t_0); L^2(\mathbb T^2))}\\
\lesssim&\ \|B_{f^\omega}\|_{L^p((0, t_0); L^{p}(\mathbb T^2))}\|\nabla^{2-\alpha+\beta} B_{f^\omega}\|_{L^{p'}((0, t_0); L^{p'}(\mathbb T^2))}\\ \lesssim&\ \lambda^2t_0^{-2\gamma} \end{split} \end{equation} with $\frac{1}{p}+\frac{1}{p'}=\frac12$. This estimate needs to be optimized such that \[ p'\left(\frac{2-\alpha+\beta+s}{2\alpha}-\gamma\right)<1, \ \ p\left(\frac{s}{2\alpha}-\gamma\right)<1\] for the largest possible value of $s$ and some $\gamma<0$. The optimization results in \[p=\frac{4\alpha}{2\alpha-\beta-2}, \ \ \ p'=\frac{4\alpha}{\beta+2}, \ \ \ s<\frac{2\alpha}{p}+2\alpha\gamma=\frac12(2\alpha-\beta-2)+2\alpha\gamma.\]
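Since $\beta=3-2\alpha$, these exponents take the explicit form \[p=\frac{4\alpha}{4\alpha-5}, \ \ \ p'=\frac{4\alpha}{5-2\alpha}, \ \ \ s<\frac12(4\alpha-5)+2\alpha\gamma=2\alpha-\frac52+2\alpha\gamma,\] which is exactly the range of $s$ appearing below.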
Combining (\ref{E2}), (\ref{L22}), (\ref{Lpq2}) and the estimates (\ref{est-q1})-(\ref{est-q4}) we obtain, for any $t\in[0,t_0]$, some $\gamma<0$ and $\alpha, \beta$ and $s$ satisfying \[\frac43\leq\alpha\leq\frac32, \ \ \ \beta=3-2\alpha, \ \ \ 0<s<2\alpha-\frac52+2\alpha\gamma,\] that \begin{equation}\notag \begin{split}
\mathcal E^{\frac12}_{\alpha}(H)(t)\lesssim&\ \|H\|_{L^\infty((0,t); L^2(\mathbb T^2))}+\|H\|_{L^\infty((0,t); \mathcal H^{\beta}(\mathbb T^2))}\\
&+\|H\|_{L^2((0,t); \mathcal H^\alpha(\mathbb T^2))}+\|H\|_{L^2((0,t); \mathcal H^{\alpha+\beta}(\mathbb T^2))}\\ \lesssim&\ \mathcal E_{\alpha}(H)(t)+\lambda t_0^{-\gamma} \mathcal E^{\frac12}_{\alpha}(H)(t)+\lambda^2 t_0^{-2\gamma}\\ \leq&\ C_1\mathcal E_{\alpha}(H)(t)+C_2\lambda t_0^{-\gamma} \mathcal E^{\frac12}_{\alpha}(H)(t)+C_3\lambda^2 t_0^{-2\gamma} \end{split} \end{equation} for some constants $C_1, C_2$ and $C_3$. By a continuity argument we conclude that for small enough $t_0$ such that $C_2\lambda t_0^{-\gamma}\leq\frac14$ and $C_3\lambda^2 t_0^{-2\gamma} \ll1$, \begin{equation}\notag \mathcal E(H)(t)\leq \mathcal E_{\alpha}(H)(t)\leq C \ \ \ \forall t\in[0,t_0]. \end{equation}
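For completeness we sketch the continuity argument; it only uses that $t\mapsto\mathcal E_{\alpha}(H)(t)$ is continuous with $\mathcal E_{\alpha}(H)(0)=0$. Set $y(t)=\mathcal E^{\frac12}_{\alpha}(H)(t)$ and $D=C_3\lambda^2 t_0^{-2\gamma}$, and choose $t_0$ so small that $C_2\lambda t_0^{-\gamma}\leq\frac14$ and $2D<\frac{1}{4C_1}$. As long as $y(t)\leq \frac{1}{4C_1}$ we have $C_1 y(t)^2\leq \frac14 y(t)$, and the inequality above yields \begin{equation}\notag y(t)\leq \frac12 y(t)+D, \ \ \ \mbox{that is,} \ \ \ y(t)\leq 2D. \end{equation} Since $y(0)=0$, $y$ is continuous and $2D<\frac{1}{4C_1}$, the value $\frac{1}{4C_1}$ is never attained, and therefore $y(t)\leq 2D$ for all $t\in[0,t_0]$.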
{\textbf{(ii) Estimates on $[0,t_0]$ in 3D.}}
The estimates will be carried out analogously to the 2D case. Differences arise when we apply Gagliardo-Nirenberg's interpolation inequality and the Sobolev embedding inequality. It is again sufficient to estimate \begin{equation}\label{est-q6} \begin{split}
&\|Q\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^3))}\\
\lesssim&\ \|H\otimes H\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^3))}+\|H\otimes B_{f^\omega}\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^3))}\\
&+\|B_{f^\omega}\otimes B_{f^\omega}\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^3))}. \end{split} \end{equation} By H\"older's inequality the first term on the right hand side of (\ref{est-q6}) is estimated for $\alpha-\beta\geq 1/2$
\begin{equation}\notag \begin{split}
&\|H\otimes H\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^3))}\\
\lesssim &\ \|H\nabla^{2-\alpha+\beta}H\|_{L^2([0,t_0];L^{2}(\mathbb T^3))} \\
\lesssim &\ \|H\|_{L^4([0,t_0];L^{\frac{12}{2\alpha-2\beta-1}}(\mathbb T^3))} \|\nabla^{2-\alpha+\beta} H\|_{L^4([0,t_0];L^{\frac{12}{7-2\alpha+2\beta}}(\mathbb T^3))}. \end{split} \end{equation} In view of Sobolev embedding
\[\|H\|_{L^4([0,t_0];L^{\frac{12}{2\alpha-2\beta-1}}(\mathbb T^3))}\lesssim \|\nabla^{2-\alpha+\beta} H\|_{L^4([0,t_0];L^{\frac{12}{7-2\alpha+2\beta}}(\mathbb T^3))},\] we only need to estimate for $ \alpha-\beta\geq 1/2$ \begin{equation}\notag \begin{split}
&\|\nabla^{2-\alpha+\beta} H\|_{L^4([0,t_0];L^{\frac{12}{7-2\alpha+2\beta}}(\mathbb T^3))}\\
= & \left(\int_0^{t_0}\|\nabla^{2-\alpha+\beta}H\|^4_{L^{\frac{12}{7-2\alpha+2\beta}}(\mathbb T^3)} \, dt\right)^{\frac14}\\
\lesssim & \left( \int_0^{t_0}\|\nabla^{\beta}H\|^2_{L^{2}(\mathbb T^3)}\|\nabla^{m+\beta}H\|^2_{L^{2}(\mathbb T^3)} \, dt \right)^{\frac14}\\
\lesssim &\ \left(\sup_{t\in(0, t_0)}\|\nabla^{\beta} H(t)\|^2_{L^2_x}\right)^{\frac14}\left(\int_0^{t_0} \|\nabla^{\alpha+\beta}H\|_{L^2_x}^2\, dt\right)^{\frac14}\\ \lesssim &\ \mathcal E_{\alpha}^{\frac12}(H)(t_0) \end{split} \end{equation} with $m=\frac72-\alpha-\beta\leq \alpha$ provided $\beta\geq \frac72-2\alpha$. Hence for $ \alpha-\beta\geq 1/2$ and $\beta\geq \frac72-2\alpha$ (which imply $\alpha\geq 4/3$) we have \begin{equation}\label{est-q7}
\|H\otimes H\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^3))} \lesssim \mathcal E_{\alpha}(H)(t_0). \end{equation} As before, we choose the smallest $\beta= \frac72-2\alpha$ from now on.
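Explicitly, with $\beta=\frac72-2\alpha$ we have \[\alpha-\beta=3\alpha-\frac72\geq \frac12 \ \Leftrightarrow\ \alpha\geq\frac43, \ \ \ \beta=\frac72-2\alpha\geq 0 \ \Leftrightarrow\ \alpha\leq\frac74,\] in agreement with the range $\alpha\geq\frac43$ noted above.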
Following the inequality \begin{equation}\notag \begin{split}
&\|H\otimes B_{f^\omega}\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^3))}\\
\lesssim&\ \|H \nabla^{2-\alpha+\beta} B_{f^\omega}\|_{L^2((0, t_0); L^{2}(\mathbb T^3))}+\|B_{f^\omega} \nabla^{2-\alpha+\beta}H\|_{L^2((0, t_0); L^{2}(\mathbb T^3))}, \end{split} \end{equation} we proceed to estimate the former one on the right hand side as \begin{equation}\notag \begin{split}
&\|H \nabla^{2-\alpha+\beta} B_{f^\omega}\|_{L^2((0, t_0); L^{2}(\mathbb T^3))}\\
\lesssim&\ \|H\|_{L^\infty((0, t_0); L^{p}(\mathbb T^3))}\|\nabla^{2-\alpha+\beta} B_{f^\omega}\|_{L^2((0, t_0); L^{p'}(\mathbb T^3))}\\
\lesssim&\ \|H\|_{L^\infty((0, t_0); \mathcal H^{\beta}(\mathbb T^3))}\|\nabla^{2-\alpha+\beta} B_{f^\omega}\|_{L^2((0, t_0); L^{p'}(\mathbb T^3))}\\ \lesssim&\ \lambda t_0^{-\gamma}\mathcal E_{\alpha}^{\frac12}(H)(t_0) \end{split} \end{equation} with $\frac{1}{p}+\frac{1}{p'}=\frac12$ and $p=2+\epsilon$ such that the Sobolev embedding holds. The latter one is estimated as \begin{equation}\notag \begin{split}
&\|B_{f^\omega} \nabla^{2-\alpha+\beta} H\|_{L^2((0, t_0); L^{2}(\mathbb T^3))}\\
\lesssim&\ \|\nabla^{2-\alpha+\beta} H\|_{L^{p}((0, t_0); L^{q}(\mathbb T^3))}\|B_{f^\omega}\|_{L^{p'}((0, t_0); L^{q'}(\mathbb T^3))} \end{split} \end{equation} with $\frac{1}{p}+\frac{1}{p'}=\frac12$, $\frac{1}{q}+\frac{1}{q'}=\frac12$, $p'\leq q'$ and $p\geq q$. We use Gagliardo-Nirenberg's inequality \begin{equation}\notag
\|\nabla^{2-\alpha+\beta} H\|_{L^q(\mathbb T^3)}\lesssim \|\nabla^{\alpha+\beta} H\|_{L^2(\mathbb T^3)}^{\theta}\|\nabla^{\beta} H\|_{L^2(\mathbb T^3)}^{1-\theta} \end{equation} with $q=\frac{6}{7-2\alpha-2\alpha\theta}$ and $p\theta=2$. Taking $q=2+\epsilon$ for some small constant $\epsilon>0$, we obtain $p'=\frac{\alpha}{\alpha-1-\epsilon}$ for a different small constant $\epsilon>0$. It follows that \begin{equation}\notag \begin{split}
&\|\nabla^{2-\alpha+\beta} H\|_{L^{p}((0, t_0); L^{q}(\mathbb T^3))}\\
\lesssim &\ \left(\int_0^{t_0} \|\nabla^{\alpha+\beta} H\|_{L^2(\mathbb T^3)}^2 \|\nabla^{\beta}H\|_{L^2(\mathbb T^3)}^{p-2} \, dt\right)^{\frac1p}\\
\lesssim &\ \left(\sup_{0\leq t\leq t_0}\|\nabla^{\beta}H\|_{L^2(\mathbb T^3)}^2 \right)^{\frac{p-2}{2p}}
\left(\int_0^{t_0} \|\nabla^{\alpha+\beta} H\|_{L^2(\mathbb T^3)}^2 \, dt\right)^{\frac{1}{p}}\\ \lesssim&\ \mathcal E_{\alpha}^{\frac12}(H)(t_0). \end{split} \end{equation} Consequently it leads to \begin{equation}\notag
\|B_{f^\omega} \nabla^{2-\alpha+\beta} H\|_{L^2((0, t_0); L^{2}(\mathbb T^3))} \lesssim \lambda t_0^{-\gamma}\mathcal E_{\alpha}^{\frac12}(H)(t_0). \end{equation} In conclusion we get \begin{equation}\label{est-q8}
\|H\otimes B_{f^\omega}\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^3))} \lesssim \lambda t_0^{-\gamma}\mathcal E_{\alpha}^{\frac12}(H)(t_0). \end{equation}
Thanks to condition (\ref{ass-bf2}), the last term in (\ref{est-q6}) can be estimated \begin{equation}\label{est-q9} \begin{split}
&\|B_{f^\omega}\otimes B_{f^\omega}\|_{L^2((0, t_0); \mathcal H^{2-\alpha+\beta}(\mathbb T^3))}\\
\lesssim&\ \|B_{f^\omega} \nabla^{2-\alpha+\beta}B_{f^\omega}\|_{L^2((0, t_0); L^2(\mathbb T^3))}\\
\lesssim&\ \|B_{f^\omega}\|_{L^p((0, t_0); L^{p}(\mathbb T^3))}\|\nabla^{2-\alpha+\beta} B_{f^\omega}\|_{L^{p'}((0, t_0); L^{p'}(\mathbb T^3))}\\ \lesssim&\ \lambda^2t_0^{-2\gamma} \end{split} \end{equation} with $\frac{1}{p}+\frac{1}{p'}=\frac12$. In order to obtain the largest possible value of $s$ such that for some $\gamma<0$ \[p'\left(\frac{2-\alpha+\beta+s}{2\alpha}-\gamma\right)<1,\ \ \ p\left(\frac{s}{2\alpha}-\gamma\right)<1,\] we choose \[p=\frac{4\alpha}{2\alpha-\beta-2}, \ \ \ p'=\frac{4\alpha}{\beta+2},\] and hence \[ s<\frac{2\alpha}{p}+2\alpha\gamma=\frac12(2\alpha-\beta-2)+2\alpha\gamma.\] Recall that $\beta=\frac72-2\alpha$. Requiring $s>0$ leads to $2\alpha-\beta-2>0$ which implies $\alpha>\frac{11}{8}$.
Therefore, for $\alpha\in(\frac{11}{8}, \frac{7}{4}]$, $0<s<2\alpha-\frac{11}{4}+2\alpha\gamma$ with some $\gamma<0$, we have from (\ref{E2}) and (\ref{est-q6})-(\ref{est-q9}), for any $t\in[0,t_0]$, that \begin{equation}\notag \begin{split} \mathcal E^{\frac12}_{\alpha}(H)(t)
\lesssim&\ \mathcal E_{\alpha}(H)(t)+\lambda t_0^{-\gamma} \mathcal E^{\frac12}_{\alpha}(H)(t)+\lambda^2 t_0^{-2\gamma}. \end{split} \end{equation} Similarly a continuity argument yields that for small enough $t_0$ we have \begin{equation}\notag \mathcal E(H)(t)\leq \mathcal E_{\alpha}(H)(t)\leq C \ \ \ \forall t\in[0,t_0]. \end{equation}
{\textbf{(iii) Estimates on $[t_0, T]$ in both 2D and 3D.}} For $t\in[t_0, T]$, taking inner product of (\ref{H}) with $H$ and integrating over $\mathbb T^n$ yields \begin{equation}\notag \begin{split}
&\frac12\frac{d}{dt}\|H(t)\|^2_{L^2(\mathbb T^n)}+\int_{\mathbb T^n} |\nabla^\alpha H|^2\, dx\\ =&-\int_{\mathbb T^n} \left[\nabla\times\nabla\cdot\left((H+B_{f^\omega})\otimes(H+B_{f^\omega})\right)\right]\cdot H\, dx\\ =&-\int_{\mathbb T^n} \left[\nabla\times\nabla\cdot\left((H+B_{f^\omega})\otimes(H+B_{f^\omega})\right)\right]\cdot (H+B_{f^\omega})\, dx\\ &+\int_{\mathbb T^n} \left[\nabla\times\nabla\cdot\left((H+B_{f^\omega})\otimes(H+B_{f^\omega})\right)\right]\cdot B_{f^\omega}\, dx. \end{split} \end{equation} Note that the first integral on the right hand side above is zero due to the fact \[\int_{\mathbb T^n} \left[\nabla\times\nabla\cdot\left(u\otimes u\right)\right]\cdot u\, dx=\int_{\mathbb T^n} \left[\nabla\times((\nabla\times u)\times u)\right]\cdot u\, dx=0\] for any vector field $u$ with $\nabla\cdot u=0$. For the same reason, we have \[\int_{\mathbb T^n} \left[\nabla\times\nabla\cdot\left(B_{f^\omega}\otimes B_{f^\omega}\right)\right]\cdot B_{f^\omega}\, dx=0.\] Therefore it follows that \begin{equation}\label{est-largetime} \begin{split}
&\frac12\frac{d}{dt}\|H(t)\|^2_{L^2(\mathbb T^n)}+\int_{\mathbb T^n} |\nabla^\alpha H|^2\, dx\\ =&\int_{\mathbb T^n} \left[\nabla\times\nabla\cdot\left((H+B_{f^\omega})\otimes(H+B_{f^\omega})\right)\right]\cdot B_{f^\omega}\, dx\\ =&\int_{\mathbb T^n} \left[\nabla\times\nabla\cdot(H\otimes H)\right]\cdot B_{f^\omega}\, dx\\ &+\int_{\mathbb T^n} \left[\nabla\times\nabla\cdot\left(H\otimes B_{f^\omega}\right)\right]\cdot B_{f^\omega}\, dx\\ &+\int_{\mathbb T^n} \left[\nabla\times\nabla\cdot\left(B_{f^\omega}\otimes H\right)\right]\cdot B_{f^\omega}\, dx\\ =:& \ K_1+K_2+K_3. \end{split} \end{equation} Applying integration by parts we obtain \begin{equation}\notag K_1=-\int_{\mathbb T^n} (H\otimes H)\cdot \nabla\nabla\times B_{f^\omega}\, dx. \end{equation} It then follows from H\"older's inequality and condition (\ref{ass-bf1}) \begin{equation}\label{est-k1} \begin{split}
|K_1|\leq&\ \|H\|^2_{L^2(\mathbb T^n)}\|\nabla\nabla\times B_{f^\omega}\|_{L^\infty(\mathbb T^n)}\\
\lesssim&\ (\mathrm{max}\{t^{-\frac12}, t^{-\frac{4+n+2s}{2\alpha}}\})^{\frac12}\|H\|^2_{L^2(\mathbb T^n)}. \end{split} \end{equation} Similarly we have from (\ref{ass-bf1}) \begin{equation}\label{est-k2} \begin{split}
|K_2|+|K_3|\lesssim&\ \|H\|_{L^2(\mathbb T^n)}\|B_{f^\omega}\|_{L^2}\|\nabla\nabla\times B_{f^\omega}\|_{L^\infty(\mathbb T^n)}\\
\lesssim&\ (1+t^{-\frac{s}{2\alpha}})(\mathrm{max}\{t^{-\frac12}, t^{-\frac{4+n+2s}{2\alpha}}\})^{\frac12}\|H\|_{L^2(\mathbb T^n)}. \end{split} \end{equation} Putting (\ref{est-largetime}), (\ref{est-k1}) and (\ref{est-k2}) together we obtain \begin{equation}\label{est-large2} \begin{split} \frac{d}{dt} \mathcal E(H)(t)\lesssim &\ (\mathrm{max}\{t^{-\frac12}, t^{-\frac{4+n+2s}{2\alpha}}\})^{\frac12}\mathcal E(H)(t)\\ &+(1+t^{-\frac{s}{2\alpha}})(\mathrm{max}\{t^{-\frac12}, t^{-\frac{4+n+2s}{2\alpha}}\})^{\frac12}\mathcal E^{\frac12}(H)(t). \end{split} \end{equation} Note that \begin{equation}\label{est-time1} \begin{split} \int_{t_0}^T(\mathrm{max}\{t^{-\frac12}, t^{-\frac{4+n+2s}{2\alpha}}\})^{\frac12} \, dt=&\int_{t_0}^1t^{-\frac{4+n+2s}{4\alpha}}\,dt+\int_{1}^Tt^{-\frac14}\, dt\\ \leq &\ C(t_0, T, \alpha, n, s) \end{split} \end{equation} and similarly \begin{equation}\label{est-time2} \begin{split} \int_{t_0}^T(1+t^{-\frac{s}{2\alpha}})(\mathrm{max}\{t^{-\frac12}, t^{-\frac{4+n+2s}{2\alpha}}\})^{\frac12} \, dt
\leq &\ C(t_0, T, \alpha, n, s). \end{split} \end{equation} It follows from (\ref{est-large2}), (\ref{est-time1}) and (\ref{est-time2}) that \begin{equation}\notag \mathcal E(H)(t)\leq C(t_0, T, \alpha, n, s) \ \ \forall \ t\in[t_0, T]. \end{equation}
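For the reader's convenience we record how (\ref{est-large2}) is closed. Denote by $a(t)=(\mathrm{max}\{t^{-\frac12}, t^{-\frac{4+n+2s}{2\alpha}}\})^{\frac12}$ and $b(t)=(1+t^{-\frac{s}{2\alpha}})a(t)$ the two time factors appearing in (\ref{est-large2}). Then $y(t)=\mathcal E^{\frac12}(H)(t)$ satisfies, at least formally for $y>0$, \begin{equation}\notag \frac{dy}{dt}\lesssim a(t)\, y+b(t) \ \ \ \mbox{on} \ \ [t_0, T], \end{equation} and Gr\"onwall's inequality together with (\ref{est-time1}), (\ref{est-time2}) and the bound $y(t_0)\leq C$ from parts (i)-(ii) gives \begin{equation}\notag y(t)\leq \left(y(t_0)+\int_{t_0}^t b(\tau)\, d\tau\right)\exp\left(C\int_{t_0}^t a(\tau)\, d\tau\right)\leq C(t_0, T, \alpha, n, s). \end{equation}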
{\textbf{(iv) Estimates of $\frac{d H}{dt}$ in both 2D and 3D.}} It follows directly from (\ref{H}) that \begin{equation}\label{est-ht1} \begin{split}
&\left\|\frac{d}{dt} H\right\|_{L^p([0,T]; \mathcal H^{-2\alpha}(\mathbb T^n))}\\
\lesssim&\ \|(-\Delta)^{\alpha}H\|_{L^p([0,T]; \mathcal H^{-2\alpha}(\mathbb T^n))}+\|\nabla\times \nabla\cdot(H\otimes H)\|_{L^p([0,T]; \mathcal H^{-2\alpha}(\mathbb T^n))}\\
&+\|\nabla\times \nabla\cdot(H\otimes B_{f^\omega})\|_{L^p([0,T]; \mathcal H^{-2\alpha}(\mathbb T^n))}\\
&+\|\nabla\times \nabla\cdot(B_{f^\omega}\otimes B_{f^\omega})\|_{L^p([0,T]; \mathcal H^{-2\alpha}(\mathbb T^n))}. \end{split} \end{equation} When $n=2$, we take $p=2$. It is obvious that \begin{equation}\notag
\|(-\Delta)^{\alpha}H\|_{L^2([0,T]; \mathcal H^{-2\alpha}(\mathbb T^2))}\lesssim \|H\|_{L^2([0,T]; L^{2}(\mathbb T^2))}\lesssim T^{\frac12}\|H\|_{L^\infty([0,T]; L^{2}(\mathbb T^2))}. \end{equation} H\"older's, interpolation and Sobolev embedding inequalities yield for $\alpha>1$ \begin{equation}\notag \begin{split}
&\|\nabla\times \nabla\cdot(H\otimes H)\|_{L^2([0,T]; \mathcal H^{-2\alpha}(\mathbb T^2))}\\
\lesssim &\ \|H\otimes H\|_{L^2([0, T]; L^{2}(\mathbb T^2))}\\
\lesssim &\ \left(\int_0^{T} \|H\|_{L^2(\mathbb T^2)}^2\|H\|_{L^\infty(\mathbb T^2)}^2\, dt\right)^{\frac12}\\
\lesssim &\ \left(\int_0^{T} \|H\|_{L^2(\mathbb T^2)}^2\|H\|_{\mathcal H^\alpha(\mathbb T^2)}^2\, dt\right)^{\frac12}\\
\lesssim &\ \left(\sup_{t\in(0, T)}\|H(t)\|_{L^2(\mathbb T^2)}\right)\left(\int_0^{T} \|H\|_{\mathcal H^\alpha(\mathbb T^2)}^2\, dt\right)^{\frac12}\\ \lesssim &\ \mathcal E(H)(T). \end{split} \end{equation} It follows from H\"older's inequality and condition (\ref{ass-bf2}) that for $p$, $p'$ and $m$ satisfying \[\frac{1}{p}+\frac{1}{p'}=\frac12, \ \ p'=\frac{4\alpha}{4\alpha-5}, \ \ m=\frac{4\alpha-5}{5-2\alpha}\] we have \begin{equation}\notag \begin{split}
&\|\nabla\times \nabla\cdot(H\otimes B_{f^\omega})\|_{L^2([0,T]; \mathcal H^{-2\alpha}(\mathbb T^2))}\\
\lesssim&\ \|H\otimes B_{f^\omega}\|_{L^2([0, T]; L^{2}(\mathbb T^2))}\\
\lesssim&\ \|H\|_{L^{p}([0,T]; L^{p}(\mathbb T^2))} \|B_{f^\omega}\|_{L^{p'}([0, T]; L^{p'}(\mathbb T^2))}\\
\lesssim&\ \left(\sup_{0\leq t\leq T}\|H(t)\|^2_{L^2(\mathbb T^2)}\right)^{\frac{p-2}{2p}}\|\nabla^mH\|_{L^{2}([0,T]; L^{2}(\mathbb T^2))}^{\frac{2}{p}} \|B_{f^\omega}\|_{L^{p'}([0, T]; L^{p'}(\mathbb T^2))}\\ \lesssim&\ \lambda \mathcal E^{\frac12}(H)(T)\, T^{-\gamma}, \end{split} \end{equation} where we used the fact that $m\leq \alpha$ for $\frac43\leq \alpha\leq \frac32$. Moreover, the condition (\ref{ass-bf2}) implies \begin{equation}\notag \begin{split}
&\|\nabla\times \nabla\cdot(B_{f^\omega}\otimes B_{f^\omega})\|_{L^2([0,T]; \mathcal H^{-2\alpha}(\mathbb T^2))}\\
\lesssim &\ \|B_{f^\omega}\otimes B_{f^\omega}\|_{L^2([0, T]; L^{2}(\mathbb T^2))}\\
\lesssim&\ \|B_{f^\omega}\|^2_{L^4([0, T]; L^4(\mathbb T^2))} \\
\lesssim&\ C(T)\|B_{f^\omega}\|^2_{L^{\frac{4\alpha}{4\alpha-5}}([0, T]; L^{\frac{4\alpha}{4\alpha-5}}(\mathbb T^2))} \\ \leq &\ C(T)\lambda^2T^{-2\gamma} \end{split} \end{equation} since $4\leq \frac{4\alpha}{4\alpha-5}$ for $\frac43\leq \alpha\leq \frac32$. Combining the estimates above with (\ref{est-ht1}) we have \begin{equation}\notag
\left\|\frac{d}{dt} H\right\|_{L^2([0,T]; \mathcal H^{-2\alpha}(\mathbb T^2))}\lesssim \mathcal E(H)(T)+\lambda \mathcal E^{\frac12}(H)(T)\, T^{-\gamma}+\lambda^2T^{-2\gamma}\lesssim C(T, \lambda, s). \end{equation}
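We record how the exponents above arise. From $\frac{1}{p}+\frac{1}{p'}=\frac12$ and $p'=\frac{4\alpha}{4\alpha-5}$ one finds $p=\frac{4\alpha}{5-2\alpha}$, and Gagliardo-Nirenberg's inequality $\|H\|_{L^p(\mathbb T^2)}\lesssim \|\nabla^m H\|_{L^2(\mathbb T^2)}^{\theta}\|H\|_{L^2(\mathbb T^2)}^{1-\theta}$ with $p\theta=2$ forces \begin{equation}\notag m=\frac{p-2}{2}=\frac{4\alpha-5}{5-2\alpha}. \end{equation}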
When $n=3$, take $p=\frac{4\alpha}{3}$. First we have \begin{equation}\notag
\|(-\Delta)^{\alpha}H\|_{L^{\frac{4\alpha}{3}}([0,T]; \mathcal H^{-2\alpha}(\mathbb T^3))}\lesssim \|H\|_{L^{\frac{4\alpha}{3}}([0,T]; L^{2}(\mathbb T^3))}\lesssim T^{\frac{3}{4\alpha}}\|H\|_{L^\infty([0,T]; L^{2}(\mathbb T^3))}. \end{equation} Similarly, applying H\"older's and Gagliardo-Nirenberg's interpolation inequalities, we infer \begin{equation}\notag \begin{split}
&\|\nabla\times \nabla\cdot(H\otimes H)\|_{L^{\frac{4\alpha}{3}}([0,T]; \mathcal H^{-2\alpha}(\mathbb T^3))}\\
\lesssim &\ \|H\otimes H\|_{L^{\frac{4\alpha}{3}}([0, T]; L^{2}(\mathbb T^3))}\\
\lesssim &\ \left(\int_0^{T} \|\nabla^\alpha H\|_{L^2(\mathbb T^3)}^2\|H\|_{L^2(\mathbb T^3)}^{\frac{8\alpha}{3}-2}\, dt\right)^{\frac{3}{4\alpha}}\\
\lesssim &\ \left(\sup_{t\in(0, T)}\|H(t)\|^2_{L^2(\mathbb T^3)}\right)^{\frac{4\alpha-3}{4\alpha}}\left(\int_0^{T} \|H\|_{\mathcal H^\alpha(\mathbb T^3)}^2\, dt\right)^{\frac{3}{4\alpha}}\\ \lesssim &\ \mathcal E(H)(T). \end{split} \end{equation} For $p, q, p'$ and $m$ satisfying \[\frac{1}{p}+\frac{1}{p'}=\frac{3}{4\alpha}, \ \ \frac{1}{q}+\frac{1}{p'}=\frac{1}{2}, \ \ p'=\frac{8\alpha}{8\alpha-11}, \ \ m=\frac{3(8\alpha-11)}{2(17-8\alpha)},\] we apply H\"older's inequality and condition (\ref{ass-bf2}) to deduce \begin{equation}\notag \begin{split}
&\|\nabla\times \nabla\cdot(H\otimes B_{f^\omega})\|_{L^{\frac{4\alpha}{3}}([0,T]; \mathcal H^{-2\alpha}(\mathbb T^3))}\\
\lesssim&\ \|H\otimes B_{f^\omega}\|_{L^{\frac{4\alpha}{3}}([0, T]; L^{2}(\mathbb T^3))}\\
\lesssim&\ \|H\|_{L^{p}([0,T]; L^{q}(\mathbb T^3))} \|B_{f^\omega}\|_{L^{p'}([0, T]; L^{p'}(\mathbb T^3))}\\
\lesssim&\ \left(\sup_{0\leq t\leq T}\|H(t)\|^2_{L^2(\mathbb T^3)}\right)^{\frac{p-2}{2p}}\|\nabla^mH\|_{L^{2}([0,T]; L^{2}(\mathbb T^3))}^{\frac{2}{p}} \|B_{f^\omega}\|_{L^{p'}([0, T]; L^{p'}(\mathbb T^3))}\\ \lesssim&\ \lambda \mathcal E^{\frac12}(H)(T)\, T^{-\gamma} \end{split} \end{equation} thanks to the fact that $m\leq \alpha$ for $\frac{11}{8}<\alpha\leq \frac{7}{4}$. Finally, it follows from H\"older's inequality and (\ref{ass-bf2}) that \begin{equation}\notag \begin{split}
&\|\nabla\times \nabla\cdot(B_{f^\omega}\otimes B_{f^\omega})\|_{L^{\frac{4\alpha}{3}}([0,T]; \mathcal H^{-2\alpha}(\mathbb T^3))}\\
\lesssim &\ \|B_{f^\omega}\otimes B_{f^\omega}\|_{L^{\frac{4\alpha}{3}}([0, T]; L^{2}(\mathbb T^3))}\\
\lesssim&\ \|B_{f^\omega}\|^2_{L^{\frac{8\alpha}{3}}([0, T]; L^4(\mathbb T^3))} \\
\lesssim&\ C(T)\|B_{f^\omega}\|^2_{L^{\frac{8\alpha}{8\alpha-11}}([0, T]; L^{\frac{8\alpha}{8\alpha-11}}(\mathbb T^3))} \\ \leq &\ C(T)\lambda^2T^{-2\gamma} \end{split} \end{equation} since $\frac{8\alpha}{3}\leq \frac{8\alpha}{8\alpha-11}$ for $\alpha\leq \frac74$. Collecting the estimates above with (\ref{est-ht1}) again, we get \begin{equation}\notag
\left\|\frac{d}{dt} H\right\|_{L^{\frac{4\alpha}{3}}([0,T]; \mathcal H^{-2\alpha}(\mathbb T^3))}
\lesssim C(T, \lambda, s). \end{equation}
\par{\raggedleft$\Box$\par}
\section{Existence of weak solutions to (\ref{H})}\label{sec-galerkin}
We are ready to establish the existence of weak solutions to (\ref{H}) by using the standard Galerkin approximation approach (cf.\ \cite{CF, DG}) and the a priori estimates obtained in the previous section. Namely, we will prove Theorem \ref{thm-H} by constructing a sequence of Galerkin approximating solutions and passing to a limit.
Recall the Fourier transform and its inverse on the torus $\mathbb T^n$, \begin{equation}\notag \begin{split} \widehat f(k, t)=&\int_{\mathbb T^n} f(x,t)e^{-2\pi ik\cdot x}\, dx, \ \ \ \ k\in \mathbb Z^n\\ f(x,t)=&\sum_{k\in\mathbb Z^n} \widehat f(k, t)e^{2\pi ik\cdot x}. \end{split} \end{equation} Denote by $P_K$ the Fourier projection operator
\begin{equation}\notag P_K f=\sum_{\{k: |k_i|\leq K, 1\leq i\leq n\}} \widehat f(k, t)e^{2\pi ik\cdot x} \end{equation} and $H^K= P_K H$. For any fixed $K\in \mathbb N$ we consider the truncated system \begin{equation}\label{eq-K} \begin{split} H^K_t=&-(-\Delta)^\alpha H^K-P_K\left[\nabla\times\left(\mathcal B(H^K, H^K)+\mathcal B(H^K, B^K_{f^\omega}) \right)\right]\\ &-P_K\left[\nabla\times \left(\mathcal B(B^K_{f^\omega}, H^K)+\mathcal B(B^K_{f^\omega}, B^K_{f^\omega})\right)\right],\\ \nabla\cdot H^K=&\ 0,\\ H^K(x,0)=&\ 0. \end{split} \end{equation} Taking Fourier transform on (\ref{eq-K}) yields \begin{equation}\label{eq-KF} \begin{split}
\widehat {H^K}_t=&\ -|k|^{2\alpha}\widehat {H^K}(k,t)\\
&-ik\times \sum_{\{k'+k''=k, |k'_i|\leq K, |k''_i|\leq K \}}\widehat {H^K} (k', t)\cdot k'' \widehat{H^K}(k'', t)\\
&-ik\times \sum_{\{k'+k''=k, |k'_i|\leq K, |k''_i|\leq K \}}\widehat {H^K} (k', t)\cdot k'' \widehat{B^K_{f^\omega}}(k'', t)\\
&-ik\times \sum_{\{k'+k''=k, |k'_i|\leq K, |k''_i|\leq K \}}\widehat {B^K_{f^\omega}} (k', t)\cdot k'' \widehat{H^K}(k'', t)\\
&-ik\times \sum_{\{k'+k''=k, |k'_i|\leq K, |k''_i|\leq K \}}\widehat {B^K_{f^\omega}} (k', t)\cdot k'' \widehat{B^K_{f^\omega}}(k'', t),\\ k\cdot \widehat{H^K}(k, t)=&\ 0,\\ \widehat{H^K}(k,0)=&\ 0. \end{split} \end{equation} Note that (\ref{eq-KF}) is a finite ODE system for any fixed $K\in\mathbb N$. From the integral form of (\ref{eq-KF}) we define the map \begin{equation}\notag \begin{split}
\Phi(\widehat{H^K})(k,t):=& -\int_0^t |k|^{2\alpha}\widehat {H^K}(k,s)\, ds\\
&-\int_0^t ik\times \sum_{\{k'+k''=k, |k'_i|\leq K, |k''_i|\leq K \}}\widehat {H^K} (k', s)\cdot k'' \widehat{H^K}(k'', s)\, ds\\
&-\int_0^t ik\times \sum_{\{k'+k''=k, |k'_i|\leq K, |k''_i|\leq K \}}\widehat {H^K} (k', s)\cdot k'' \widehat{B^K_{f^\omega}}(k'', s)\, ds\\
&-\int_0^t ik\times \sum_{\{k'+k''=k, |k'_i|\leq K, |k''_i|\leq K \}}\widehat {B^K_{f^\omega}} (k', s)\cdot k'' \widehat{H^K}(k'', s)\, ds\\
&-\int_0^t ik\times \sum_{\{k'+k''=k, |k'_i|\leq K, |k''_i|\leq K \}}\widehat {B^K_{f^\omega}} (k', s)\cdot k'' \widehat{B^K_{f^\omega}}(k'', s)\, ds\\ =:&\ \Phi_1(k,t)+\Phi_2(k,t)+\Phi_3(k,t)+\Phi_4(k,t)+\Phi_5(k,t). \end{split} \end{equation} Denote the function space \begin{equation}\notag X_T= C([0,T]; \ell^2)\cap L^2([0,T]; \mathcal H^\alpha), \ \ \mbox{for} \ \ T>0. \end{equation} We first show that the map $\Phi$ has a fixed point on $X_{t_1}$ for a small time $t_1$ by showing that $\Phi$ is a contraction map on a ball of $X_{t_1}$. We then claim that this process can be iterated to reach time $T$.
For $t\in[0,t_1]$ one has \begin{equation}\notag
\|\Phi_1(t)\|_{\ell^2}\lesssim K^{2\alpha} t_1\|\widehat{H^K}\|_{L^\infty([0,t_1]; \ell^2)}. \end{equation} Applying Plancherel's theorem and the Sobolev embedding gives \begin{equation}\notag
\|\Phi_2(t)\|_{\ell^2}\lesssim K^{2+\frac{n}{2}} t_1 \|\widehat{H^K}\|^2_{L^\infty([0,t_1]; \ell^2)}. \end{equation} Using Plancherel's theorem and the Sobolev embedding again, together with the estimate (\ref{est-lin1}), we have \begin{equation}\notag
\|\Phi_3(t)\|_{\ell^2}+\|\Phi_4(t)\|_{\ell^2} \lesssim K^{2+\frac{n}{2}} t_1^{1-\frac{s}{2\alpha}}\|\widehat{H^K}\|_{L^\infty([0,t_1]; \ell^2)}. \end{equation}
It follows from Plancherel's theorem and Lemmas \ref{le-heat3} and \ref{le-heat4} that
\begin{equation}\notag
\|\Phi_5(t)\|_{\ell^2}\lesssim K^2\lambda^2 t_1^{\frac{\beta+2}{2\alpha}-2\gamma}, \end{equation} where we recall $\beta=3-2\alpha$ in 2D and $\beta=\frac72-2\alpha$ in 3D, and we observe $\frac{\beta+2}{2\alpha}>0$. Thus combining the estimates above leads to \begin{equation}\label{est-contract1} \begin{split}
\|\Phi(\widehat{H^K})(t)\|_{\ell^2}\lesssim&\ K^{2\alpha} t_1\|\widehat{H^K}\|_{L^\infty([0,t_1]; \ell^2)}+K^{2+\frac{n}{2}} t_1 \|\widehat{H^K}\|^2_{L^\infty([0,t_1]; \ell^2)}\\
&+K^{2+\frac{n}{2}} t_1^{1-\frac{s}{2\alpha}}\|\widehat{H^K}\|_{L^\infty([0,t_1]; \ell^2)}+K^2\lambda^2 t_1^{\frac{\beta+2}{2\alpha}-2\gamma}. \end{split} \end{equation} Analogously we obtain \begin{equation}\label{est-contract2} \begin{split}
&\||k|^\alpha \Phi(\widehat{H^K})(t)\|_{L^2([0,t_1];\ell^2)}\\
\lesssim&\ K^{3\alpha} t_1\|\widehat{H^K}\|_{L^\infty([0,t_1]; \ell^2)}+K^{2+\alpha+\frac{n}{2}} t_1 \|\widehat{H^K}\|^2_{L^\infty([0,t_1]; \ell^2)}\\
&+K^{2+\alpha+\frac{n}{2}} \lambda t_1^{1-\frac{s}{2\alpha}}\|\widehat{H^K}\|_{L^\infty([0,t_1]; \ell^2)}+K^{2+\alpha}\lambda^2 t_1^{\frac{\beta+2}{2\alpha}-2\gamma}. \end{split} \end{equation} Take $R=K$. Note that $\gamma<0$, $s<2\alpha$ and $\frac{\beta+2}{2\alpha}>0$; hence all the exponents of $t_1$ in (\ref{est-contract1})-(\ref{est-contract2}) are positive. Thus we can choose $t_1$ small enough such that the estimates (\ref{est-contract1}) and (\ref{est-contract2}) imply that $\Phi$ maps the ball $B_R(0)\subset X_{t_1}$ to itself continuously. An analogous analysis guarantees that the map $\Phi$ is a contraction. Hence there exists a unique solution $\widehat{H^K}$ to (\ref{eq-KF}) in $X_{t_1}$. Consequently, there exists a unique solution $H^K$ to (\ref{eq-K}) in $L^\infty([0,t_1]; L^2(\mathbb T^n))\cap L^2([0,t_1]; \mathcal H^\alpha(\mathbb T^n))$. Note that since the energy estimate (\ref{ap-est1}) holds for system (\ref{eq-K}) on $[0,T]$ as well, iterations of the previous process yield a solution of (\ref{eq-K}) up to time $T$. Automatically, the solution $H^K$ satisfies the estimates (\ref{ap-est1}) and (\ref{ap-est2}). Note that $P_K$ is a bounded operator in $L^p$ for any $1<p<\infty$ and hence $B^K_{f^\omega}$ converges strongly to $B_{f^\omega}$ in $L^p$ as $K\to\infty$. Therefore the estimates (\ref{ap-est1}) and (\ref{ap-est2}) are sufficient for us to extract a subsequence of $H^K$ which converges to a weak solution $H$ of (\ref{H}) on $[0,T]$.
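We briefly indicate the compactness step, which is standard (see e.g.\ \cite{CF, DG}). By (\ref{ap-est1}) and (\ref{ap-est2}) the family $\{H^K\}$ is bounded in $L^2((0,T); \mathcal H^\alpha(\mathbb T^n))$ and $\{\frac{d}{dt}H^K\}$ is bounded in $L^q((0,T); \mathcal H^{-2\alpha}(\mathbb T^n))$ with $q=2$ for $n=2$ and $q=\frac{4\alpha}{3}$ for $n=3$. Since the embedding $\mathcal H^{\alpha}(\mathbb T^n)\hookrightarrow L^2(\mathbb T^n)$ is compact and $L^2(\mathbb T^n)\hookrightarrow \mathcal H^{-2\alpha}(\mathbb T^n)$ is continuous, the Aubin-Lions lemma provides a subsequence of $H^K$ converging strongly in $L^2((0,T); L^2(\mathbb T^n))$, which suffices to pass to the limit in the quadratic terms.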
\section{Appendix: Proof of uniqueness}\label{sec-unique} In this section, we show the uniqueness of the weak solutions for critical and subcritical values of $\alpha$. Let $B^1=B_{f^\omega}+H^1$ and $B^2=B_{f^\omega}+H^2$ be two weak solutions obtained in Theorems \ref{thm-2d} and \ref{thm-3d} in 2D and 3D respectively, for system (\ref{emhd}) with the same initial data $f$. Thus both $H^1$ and $H^2$ satisfy (\ref{H}). In order to fully exploit cancellations in the later estimates, we write the equations of $H^1$ and $H^2$ as \begin{equation}\notag \begin{split} H^1_t+\nabla\times((\nabla\times(H^1+B_{f^\omega}))\times (H^1+B_{f^\omega}))=-(-\Delta)^\alpha H^1,\\ H^2_t+\nabla\times((\nabla\times(H^2+B_{f^\omega}))\times (H^2+B_{f^\omega}))=-(-\Delta)^\alpha H^2. \end{split} \end{equation} Denote $\widetilde H=H^1-H^2$. Subtracting the two equations gives \begin{equation}\label{eq-diff} \widetilde H_t+\nabla\times((\nabla\times\widetilde H)\times (H^1+B_{f^\omega}))+\nabla\times((\nabla\times(H^2+B_{f^\omega}))\times \widetilde H)=-(-\Delta)^\alpha \widetilde H. \end{equation} Taking the inner product of (\ref{eq-diff}) with $\widetilde H$, integrating over $\mathbb T^n$ and integrating by parts, we obtain \begin{equation}\label{energy-diff1} \begin{split}
&\frac{1}{2}\frac{d}{dt} \|\widetilde H(t)\|_{L^2(\mathbb T^n)}^2+\int_{\mathbb T^n}|\nabla^\alpha \widetilde H(t)|^2\, dx\\ =& -\int_{\mathbb T^n} (\nabla\times(H^2(t)+B_{f^\omega}))\times \widetilde H(t)\cdot \nabla\times \widetilde H(t)\, dx \end{split} \end{equation} where we used the cancellation \begin{equation}\notag \begin{split} \int_{\mathbb T^n}\nabla\times((\nabla\times\widetilde H)\times (H^1+B_{f^\omega}))\cdot \widetilde H\, dx =&\int_{\mathbb T^n}((\nabla\times\widetilde H)\times (H^1+B_{f^\omega}))\cdot \nabla\times \widetilde H\, dx\\ =&\ 0. \end{split} \end{equation} In 2D, i.e. $n=2$, we estimate the integral on the right hand side of (\ref{energy-diff1}) by using H\"older's inequality, Sobolev embedding and Young's inequality for some $p$ and $q$ satisfying $\frac{1}{p}+\frac{1}{q}=\frac12$ \begin{equation}\notag \begin{split}
&\left| \int_{\mathbb T^2} (\nabla\times H^2(t))\times \widetilde H(t)\cdot \nabla\times \widetilde H(t)\, dx\right|\\
\leq &\ C\|\widetilde H\|_{L^2(\mathbb T^2)} \|\nabla H^2\|_{L^p(\mathbb T^2)} \|\nabla \widetilde H\|_{L^q(\mathbb T^2)} \\
\leq &\ C\|\widetilde H\|_{L^2(\mathbb T^2)} \|\nabla^{2-\frac{2}{p}} H^2\|_{L^2(\mathbb T^2)} \|\nabla^{2-\frac{2}{q}} \widetilde H\|_{L^2(\mathbb T^2)} \\
\leq &\ C\|\widetilde H\|_{L^2(\mathbb T^2)} \|\nabla^{\alpha} H^2\|_{L^2(\mathbb T^2)} \|\nabla^{\alpha} \widetilde H\|_{L^2(\mathbb T^2)} \\
\leq &\ C\|\widetilde H\|^2_{L^2(\mathbb T^2)} \|\nabla^{\alpha} H^2\|^2_{L^2(\mathbb T^2)} +\frac12 \|\nabla^{\alpha} \widetilde H\|^2_{L^2(\mathbb T^2)} \\ \end{split} \end{equation} where we require $2-\frac{2}{p}\leq \alpha$ and $2-\frac{2}{q}\leq \alpha$. When $\alpha\geq \frac32$, we are able to find proper $p$ and $q$ satisfying the conditions. Analogously, in 3D we have for $\alpha\geq \frac74$ \begin{equation}\notag \begin{split}
&\left| \int_{\mathbb T^3} (\nabla\times H^2(t))\times \widetilde H(t)\cdot \nabla\times \widetilde H(t)\, dx\right|\\
\leq &\ C\|\widetilde H\|_{L^2(\mathbb T^3)} \|\nabla H^2\|_{L^p(\mathbb T^3)} \|\nabla \widetilde H\|_{L^q(\mathbb T^3)} \\
\leq &\ C\|\widetilde H\|_{L^2(\mathbb T^3)} \|\nabla^{\frac52-\frac{3}{p}} H^2\|_{L^2(\mathbb T^3)} \|\nabla^{\frac52-\frac{3}{q}} \widetilde H\|_{L^2(\mathbb T^3)} \\
\leq &\ C\|\widetilde H\|^2_{L^2(\mathbb T^3)} \|\nabla^{\alpha} H^2\|^2_{L^2(\mathbb T^3)} +\frac12 \|\nabla^{\alpha} \widetilde H\|^2_{L^2(\mathbb T^3)} \\ \end{split} \end{equation} provided $\frac{1}{p}+\frac{1}{q}=\frac12$, $\frac52-\frac{3}{p}\leq \alpha$ and $\frac52-\frac{3}{q}\leq \alpha$.
On the other hand, we have in 2D \begin{equation}\notag \begin{split}
&\left| \int_{\mathbb T^2} (\nabla\times B_{f^\omega})\times \widetilde H(t)\cdot \nabla\times \widetilde H(t)\, dx\right|\\
\leq &\ C\|\widetilde H\|_{L^2(\mathbb T^2)} \|\nabla B_{f^\omega}\|_{L^p(\mathbb T^2)} \|\nabla \widetilde H\|_{L^q(\mathbb T^2)} \\
\leq &\ C\|\widetilde H\|_{L^2(\mathbb T^2)} \|\nabla^{2-\frac{2}{p}} B_{f^\omega}\|_{L^2(\mathbb T^2)} \|\nabla^{2-\frac{2}{q}} \widetilde H\|_{L^2(\mathbb T^2)} \\
\leq &\ C\|\widetilde H\|_{L^2(\mathbb T^2)} \|\nabla^{\alpha} B_{f^\omega}\|_{L^2(\mathbb T^2)} \|\nabla^{\alpha} \widetilde H\|_{L^2(\mathbb T^2)} \\
\leq &\ C\|\widetilde H\|^2_{L^2(\mathbb T^2)} \|\nabla^{\alpha} B_{f^\omega}\|^2_{L^2(\mathbb T^2)} +\frac12 \|\nabla^{\alpha} \widetilde H\|^2_{L^2(\mathbb T^2)} \end{split} \end{equation} and in 3D \begin{equation}\notag \begin{split}
&\left| \int_{\mathbb T^3} (\nabla\times B_{f^\omega}(t))\times \widetilde H(t)\cdot \nabla\times \widetilde H(t)\, dx\right|\\
\leq &\ C\|\widetilde H\|_{L^2(\mathbb T^3)} \|\nabla B_{f^\omega}\|_{L^p(\mathbb T^3)} \|\nabla \widetilde H\|_{L^q(\mathbb T^3)} \\
\leq &\ C\|\widetilde H\|_{L^2(\mathbb T^3)} \|\nabla^{\frac52-\frac{3}{p}} B_{f^\omega}\|_{L^2(\mathbb T^3)} \|\nabla^{\frac52-\frac{3}{q}} \widetilde H\|_{L^2(\mathbb T^3)} \\
\leq &\ C\|\widetilde H\|^2_{L^2(\mathbb T^3)} \|\nabla^{\alpha} B_{f^\omega}\|^2_{L^2(\mathbb T^3)} +\frac12 \|\nabla^{\alpha} \widetilde H\|^2_{L^2(\mathbb T^3)}. \end{split} \end{equation}
Therefore, it follows from (\ref{energy-diff1}) that \begin{equation}\label{energy-diff2} \begin{split}
&\frac{d}{dt} \|\widetilde H(t)\|_{L^2(\mathbb T^n)}^2\\
\leq &\ C\|\widetilde H\|^2_{L^2(\mathbb T^n)} \left(\|\nabla^{\alpha} H^2\|^2_{L^2(\mathbb T^n)}+\|\nabla^{\alpha} B_{f^\omega}\|^2_{L^2(\mathbb T^n)} \right). \end{split} \end{equation} Applying Gr\"onwall's inequality to (\ref{energy-diff2}) we obtain \begin{equation}\label{energy-diff3} \begin{split}
&\|\widetilde H(t)\|_{L^2(\mathbb T^n)}^2\\
\leq &\ \|\widetilde H(0)\|_{L^2(\mathbb T^n)}^2 \exp\left\{C\int_0^t\left(\|\nabla^{\alpha} H^2\|^2_{L^2(\mathbb T^n)}+\|\nabla^{\alpha} B_{f^\omega}\|^2_{L^2(\mathbb T^n)} \right) \, d\tau \right\}. \end{split} \end{equation} Note that $H^2\in L^2([0,T]; \mathcal H^\alpha (\mathbb T^n))$ and from (\ref{est-lin1}) \begin{equation}\notag
\int_0^t\|\nabla^{\alpha} B_{f^\omega}\|^2_{L^2(\mathbb T^n)} \, d\tau\lesssim \int_0^t \tau^{-\frac{\alpha+s}{2\alpha}}\, d\tau\lesssim t^{1-\frac{\alpha+s}{2\alpha}}. \end{equation} Recall that $\alpha\in [\frac43, \frac32]$ and $s\in(0, 2\alpha-\frac52)$ in 2D, and $\alpha\in (\frac{11}{8}, \frac74]$ and $s\in (0, 2\alpha-\frac{11}{4})$ in 3D. One can check that $\frac12<\frac{\alpha+s}{2\alpha}<1$ in both cases: indeed, $\frac{\alpha+s}{2\alpha}>\frac12$ if and only if $s>0$, while $\frac{\alpha+s}{2\alpha}<1$ if and only if $s<\alpha$, and $s<2\alpha-\frac52\leq\frac12<\alpha$ in 2D, $s<2\alpha-\frac{11}{4}\leq\frac34<\alpha$ in 3D. Combining this with the fact that $\widetilde H(0)=0$, (\ref{energy-diff3}) implies $\widetilde H(t)\equiv 0$ and hence $H^1(t)\equiv H^2(t)$. It follows that $B^1(t)\equiv B^2(t)$.
\end{document} | arXiv |
\begin{document}
{\footnotesize {Mathematical Problems of Computer Science {\small 23, 2004, 127--129.}}}
\begin{center} {\Large \textbf{On Lower Bound for $W(K_{2n})$}}
{\normalsize Rafael R. Kamalian and Petros A. Petrosyan}
{\small Institute for Informatics and Automation Problems of NAS of RA}
{\small e-mails [email protected], pet\[email protected]}
\textbf{Abstract} \end{center}
The lower bound $W(K_{2n})\geq 3n-2$\ is proved for the greatest possible number of colors in an interval edge coloring of the complete graph $K_{2n}$.
Let $G=(V(G),E(G))$ be an undirected graph without loops and multiple edges [1], and let $V(G)$ and $E(G)$ be the sets of vertices and edges of $G$, respectively. The degree of a vertex $x\in V(G)$ is denoted by $d_{G}(x)$, the maximum degree of a vertex of $G$ by $\Delta (G)$, and the chromatic index of $G$ by $\chi ^{\prime }(G)$. A graph is regular if all its vertices have the same degree. If $\alpha $ is a proper edge coloring of the graph $G$, then $\alpha (e)$ denotes the color of an edge $e\in E(G)$ in the coloring $\alpha $.
Let $\alpha $ be a proper coloring of the edges of $G$ with colors $1,2,\ldots ,t$. The coloring $\alpha $ is interval [2] if for each color $i,1\leq i\leq t,$ there exists at least one edge $e_{i}\in E(G)$ with $\alpha (e_{i})=i$, and the edges incident with each vertex $x\in V(G)$ are colored by $d_{G}(x)$ consecutive colors.
A graph $G$ is interval-colorable if there is $t\geq 1$ for which $G$ has an interval edge coloring $\alpha $ with colors $1,2,\ldots ,t$. The set of all interval-colorable graphs is denoted by $\mathcal{N}$ [2].
For $G\in \mathcal{N}$ we denote by $W(G)$ the greatest value of $\ t$, for which $G$ has an interval edge coloring $\alpha $ with colors $1,2,\ldots ,t$ .
It was proved in [3] that for bipartite graphs the problem of deciding whether $G\in \mathcal{N}$ is $NP$-complete [4,5].
It was proved in [2] that if $G$ has no triangle and $G\in \mathcal{N}$ then $W(G)\leq \left\vert V(G)\right\vert -1$. It follows that if $G$ is bipartite and $G\in \mathcal{N}$ then $W(G)\leq \left\vert V(G)\right\vert -1$ [2].
For graphs which can contain a triangle the following results hold:
\textbf{Theorem 1 }[6]. If $G\in \mathcal{N}$ is a graph with nonempty set of edges then $W(G)\leq 2\left\vert V(G)\right\vert -3$.
\textbf{Theorem 2 }[7]. If $G\in \mathcal{N}$ and $\left\vert V(G)\right\vert \geq 3$ then $W(G)\leq 2\left\vert V(G)\right\vert -4$.
Concepts and notation not defined here can be found in [1,6,8,9].
The following was proved in [2].
\textbf{Theorem 3}. If $G\in \mathcal{N}$ then $\chi ^{\prime }(G)=\Delta (G) $.
\textbf{Corollary 1 }[2]. For a regular graph $G$ \ $G\in \mathcal{N}$ \ iff \ $\chi ^{\prime }(G)=\Delta (G)$.
The well-known inequality $\Delta (G)\leq \chi ^{\prime }(G)\leq \Delta (G)+1$ for the chromatic index of an arbitrary graph $G$ was proved in [10]. It was proved in [11] that for regular graphs the problem of deciding whether $\chi ^{\prime }(G)=\Delta (G)$ is $NP$-complete. Therefore we can conclude from \textbf{Corollary 1} that the problem
\textquotedblleft Whether a given regular graph belongs to the set $\mathcal{ N}$ or not?\textquotedblright\ is also $NP$-complete.
From the results of [10] and \textbf{Corollary 1} it follows that for any odd $p$ $\ \ K_{p}\notin \mathcal{N}$. It is not difficult to see that $\chi ^{\prime }(K_{2n})=\Delta (K_{2n})$ (it can be obtained from Theorem 9.1 of [1]). Consequently, from \textbf{Corollary 1} we obtain
\textbf{Theorem 4}. For any $\ n\in N$ $\ \ \ K_{2n}\in \mathcal{N}$.
It is not difficult, using the results of [6], to obtain the following
\textbf{Theorem 5}. For any $n\in N$ \ \ $W(K_{2n})\geq 2n-1+\left\lfloor \log _{2}\left( 2n-1\right) \right\rfloor $.
The main result of this paper consists of the following
\textbf{Theorem 6}. For any \ $n\in N$ \ \ $W(K_{2n})\geq 3n-2$.
\textbf{Proof}. Obviously, for $n\leq 3$ the estimate of \textbf{Theorem 6} coincides with that of \textbf{Theorem 5}.
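Note also that for $n\geq 4$ the bound of \textbf{Theorem 6} is strictly stronger than that of \textbf{Theorem 5}: since $2n-1<2^{n-1}$ for $n\geq 4$, we have $\left\lfloor \log _{2}\left( 2n-1\right) \right\rfloor \leq n-2$, and therefore \[(3n-2)-\left( 2n-1+\left\lfloor \log _{2}\left( 2n-1\right) \right\rfloor \right) =n-1-\left\lfloor \log _{2}\left( 2n-1\right) \right\rfloor \geq 1.\]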
Now assume that $n\geq 4$.
Consider a graph $K_{2n}$ with $V(K_{2n})=\left\{ u_{1},u_{2},...,u_{2n}\right\} $\ and$\ $
$E(K_{2n})=\left\{ \left( u_{i},u_{j}\right) |\text{ \ }u_{i}\in V(K_{2n}), \text{ }u_{j}\in V(K_{2n}),\text{ }i<j\right\} $.
Define a coloring $\ \alpha $ of the edges of the graph $K_{2n}$ in the following way:
for $\ i=1,...,\left\lfloor \frac{{\large n}}{{\large 2}}\right\rfloor ,$ $ j=2,...,n,$ $i<j,$\ $i+j\leq n+1$ \ \ \ \ \
\begin{center} $\alpha \left( \left( u_{i},u_{j}\right) \right) =i+j-2;$\ \ \ \ \ \ \end{center}
for \ $i=2,...,n-1,$ $j=\left\lfloor \frac{{\large n}}{{\large 2}} \right\rfloor +2,...,n,$ $i<j,$\ $i+j\geq n+2$ \ \ \ \
\begin{center} $\alpha \left( \left( u_{i},u_{j}\right) \right) =i+j+n-3;$\ \ \ \ \ \ \end{center}
for \ $i=3,...,n,$ $j=n+1,...,2n-2,$ $j-i\leq n-2$\ \ \ \ \
\begin{center} $\alpha \left( \left( u_{i},u_{j}\right) \right) =n+j-i;$\ \end{center}
for \ $i=1,...,n,$ $j=n+1,...,2n,$ $j-i\geq n$\ \
\begin{center} $\alpha \left( \left( u_{i},u_{j}\right) \right) =j-i;$ \end{center}
for \ $i=2,...,1+\left\lfloor \frac{{\large n-1}}{{\large 2}}\right\rfloor ,$ $j=n+1,...,n+\left\lfloor \frac{{\large n-1}}{{\large 2}}\right\rfloor ,$ $\ j-i=n-1$ \ \
\begin{center} $\alpha \left( \left( u_{i},u_{j}\right) \right) =2(i-1);$\ \ \end{center}
for \ $i=$ $\left\lfloor \frac{{\large n-1}}{{\large 2}}\right\rfloor +2,...,n,$ $\ j=n+1+\left\lfloor \frac{{\large n-1}}{{\large 2}} \right\rfloor ,...,2n-1,$ $\ j-i=n-1$ \
\begin{center} $\alpha \left( \left( u_{i},u_{j}\right) \right) =i+j-2;$\ \end{center}
for \ $i=n+1,...,n+\left\lfloor \frac{{\large n}}{{\large 2}}\right\rfloor -1,$ $j=n+2,...,2n-2,$ $i<j,$\ $i+j\leq 3n-1$ \ \ \
\begin{center} $\alpha \left( \left( u_{i},u_{j}\right) \right) =i+j-2n;$ \end{center}
for \ $i=n+1,...,2n-1,$ $j=n+\left\lfloor \frac{{\large n}}{{\large 2}} \right\rfloor +1,...,2n,$ $i<j,$\ $i+j\geq 3n$\
\begin{center} $\alpha \left( \left( u_{i},u_{j}\right) \right) =i+j-n-1.$ \end{center}
It is not difficult to see that $\ \alpha $ \ is an interval edge coloring of the graph $K_{2n}$ with colors $1,2,...,3n-2.$
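As an illustration (it is not needed for the proof), consider the vertex $u_{1}$. The edges $\left( u_{1},u_{j}\right) $ with $2\leq j\leq n$ are colored by the first rule with the colors $j-1\in \left\{ 1,...,n-1\right\} $, and the edges $\left( u_{1},u_{j}\right) $ with $n+1\leq j\leq 2n$ are colored by the fourth rule with the colors $j-1\in \left\{ n,...,2n-1\right\} $; hence the $2n-1$ edges incident with $u_{1}$ receive the consecutive colors $1,2,...,2n-1$. The largest color $3n-2$ is attained on the edge $\left( u_{2n-1},u_{2n}\right) $ by the last rule, since $(2n-1)+2n-n-1=3n-2$.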
The proof is complete.
\end{document} | arXiv |
\begin{document}
\title{The Kakimizu complex of a connected sum of links} \author{Jessica E. Banks} \date{} \maketitle
\begin{abstract}
We show that $|\ms(L_1\# L_2)|=|\ms(L_1)|\times|\ms(L_2)|\times\mathbb{R}$ when $L_1$ and $L_2$ are any non-split and non-fibred links. Here $\ms(L)$ denotes the Kakimizu complex of a link $L$, which records the taut Seifert surfaces for $L$. We also show that the analogous result holds if we study incompressible Seifert surfaces instead of taut ones. \end{abstract}
\section{Introduction}
The Kakimizu complex of a link $L$, denoted $\ms(L)$, is a simplicial complex that records the taut Seifert surfaces for $L$. It is analogous to the curve complex of a compact, orientable surface. We will give the definition in Section \ref{msdefnsection}.
Kakimizu proved the following, building on work of Eisner in \cite{MR0440528}.
\begin{theorem}[\cite{MR1177053} Theorem B]
Let $L_1,L_2$ be knots, each not fibred but with a unique incompressible Seifert surface. Then $|\ms(L_1\#L_2)|$ is homeomorphic to $\mathbb{R}$. \end{theorem}
In this paper we prove the following more general result, as well as the analogous result for incompressible Seifert surfaces.
\begin{restatable}{theorem}{connectsumthm}\label{connectsumthm} Let $L_1,L_2$ be non-split, non-fibred, links in $\mathbb{S}^3$,
and let $L=L_1\#L_2$. Then $|\ms(L)|$ is homeomorphic to $|\ms(L_1)|\times|\ms(L_2)|\times\mathbb{R}$. \end{restatable}
For a distant union of two links $L_1$ and $L_2$, the taut Seifert surfaces for the two links do not interact. We may therefore consider the two links separately. For this reason, the `non-split' hypothesis in Theorem \ref{connectsumthm} is not a significant restriction. For the remainder of this paper we will only consider non-split links.
The case that one of the $L_i$ is fibred was dealt with by Kakimizu.
\begin{proposition}[\cite{MR2131376} Proposition 2.4] If $L_1$ is fibred then $\ms(L)\cong\ms(L_2)$. \end{proposition}
The idea of the proof of Theorem \ref{connectsumthm} is to first define a triangulation of $|\ms(L_1)|\times|\ms(L_2)|\times\mathbb{R}$, then choose a representative of each element of $\V(\ms(L_i))$, which we use to define a map between the vertices of the two simplicial complexes, and finally show that this map is an isomorphism.
Sections \ref{msdefnsection}--\ref{orderingsection} cover definitions and results we will need. Sections \ref{repssection}--\ref{edgessection} each constitute a step in the proof of Theorem \ref{connectsumthm}. In Section \ref{proofsection} these ideas are drawn together to complete the proof. Finally, in Section \ref{incompressiblesection} we discuss incompressible Seifert surfaces.
I am grateful to Marc Lackenby for suggesting that this proof might be possible. This result has been proved independently by Bassem Saad.
\section{The definition of the Kakimizu complex}\label{msdefnsection}
\begin{definition} A \textit{Seifert surface} for a link $L$ is a compact, orientable surface $R$ with no closed components such that $\partial R=L$ as an oriented link. The surface $R$ can also be seen as properly embedded in the link complement $\mathbb{S}^3\setminus\nhd(L)$. Say $R$ is \textit{taut} if it has maximal Euler characteristic among all Seifert surfaces for $L$. \end{definition}
The Kakimizu complex was first defined as follows.
\begin{definition}[\cite{MR1177053} p225]\label{firstmsdefn} For a link $L$, the \textit{Kakimizu complex} $\ms(L)$ of $L$ is a simplicial complex, the vertices of which are the ambient isotopy classes of taut Seifert surfaces for $L$. Vertices $R_0,\ldots,R_n$ span an $n$--simplex exactly when they can be realised disjointly. \end{definition}
\begin{definition} A metric is defined on the vertices of $\ms(L)$. The distance between two vertices is the distance in the 1--skeleton of $\ms(L)$ when every edge has length 1. \end{definition}
In \cite{MR1177053}, Kakimizu showed that the distance between two vertices of $\ms(L)$ can be calculated by considering lifts of the two Seifert surfaces to the infinite cyclic cover of the link complement.
\begin{proposition}[\cite{MR1177053} Proposition 3.1]\label{msdistanceprop} Let $L$ be a link, and let $M=\mathbb{S}^3\setminus\nhd(L)$. Consider the infinite cyclic cover $\tilde{M}$ of $M$ (that is, the cover corresponding to the linking number $\lk\colon\pi_1(M)\to\mathbb{Z}$), and let $\tau$ be a generator for the group of covering transformations.
Let $R,R'$ be taut Seifert surfaces for $L$ that represent distinct vertices of $\ms(L)$. Choose a lift $V_0$ of $M\setminus R$ to $\tilde{M}$. For $n\in\mathbb{Z}$ let $V_n=\tau^n(V_0)$.
Take a lift $V_{R'}$ of $M\setminus R'$. Isotope the Seifert surface $R'$ in $M$ so as to minimise the value of $\max\{n:V_{R'}\cap V_n\neq\emptyset\}-\min\{n:V_{R'}\cap V_n\neq\emptyset\}$. Then $\dist_{\ms(L)}(R,R')=\max\{n:V_{R'}\cap V_n\neq\emptyset\}-\min\{n:V_{R'}\cap V_n\neq\emptyset\}$. \end{proposition}
Przytycki and Schultens pointed out in \cite{przytycki-2010} that this result is not in fact true. It may fail in the case of a link that bounds a disconnected taut Seifert surface. Such links are called \textit{boundary links}. Note that this is a fairly unusual property for a link to have. In particular, all boundary links have Alexander polynomial 0. However, we will need to allow for this case in the proof of Theorem \ref{connectsumthm}, which makes the proof more complicated than it would be otherwise. In particular, it is difficult to control the situation where pairs of Seifert surfaces have components that can be made to coincide.
A reader who wishes to avoid this difficulty should take Definition \ref{firstmsdefn} as their working definition of $\ms(L)$, and may ignore the remainder of this section, Section \ref{mfldssection} from Lemma \ref{arcintersectionlemma} onwards and Lemma \ref{threearcslemma}, as well as some of the detail in the proofs of Corollary \ref{edgescor} and Proposition \ref{mainedgesprop}.
Figure \ref{msdefnpic1} gives an instructive example of how Proposition \ref{msdistanceprop} can fail. The link $L_{\alpha}$ shown consists of two linked copies of the knot $7_4$. This knot, which is the $(1,3,3)$ pretzel knot, has two taut Seifert surfaces. One is given by applying Seifert's algorithm to an alternating diagram. The other is given by performing a flype on the diagram that moves the single crossing across one of the lines of three crossings, and then applying Seifert's algorithm. Combining two copies of each of these surfaces gives the two disjoint taut Seifert surfaces $R_a,R_b$ for $L_{\alpha}$ shown in Figure \ref{msdefnpic1}. The arc $\rho$ shown in Figure \ref{msdefnpic2} runs from the top side of $R_a$ to the bottom side, and in doing so passes through $R_b$ in the positive direction twice. This means that if we first take lifts $V_n$ of $M\setminus R_b$ and then a lift $V$ of $M\setminus R_a$ as in Proposition \ref{msdistanceprop} we find that, for example, $V$ intersects $V_0,V_1,V_2$. The surfaces $R_a,R_b$ should therefore be distance 2 apart, instead of adjacent.
\begin{figure}\label{msdefnpic1}
\end{figure}
\begin{figure}\label{msdefnpic2}
\end{figure}
With Proposition \ref{msdistanceprop} and their own work in mind, Przytycki and Schultens redefine the Kakimizu complex as follows.
\begin{definition}[see \cite{przytycki-2010}]\label{fullmsdefn} Let $L$ be a link, and let $M=\mathbb{S}^3\setminus\nhd(L)$. Define the \textit{Kakimizu complex} $\ms(L)$ of $L$ to be the following flag simplicial complex. Its vertices are ambient isotopy classes of taut Seifert surfaces for $L$. Two vertices span an edge if they have representatives $R,R'$ such that a lift of $M\setminus R'$ to the infinite cyclic cover of $M$ intersects exactly two lifts of $M\setminus R$. \end{definition}
\section{Products of simplicial complexes}\label{complexessection}
\begin{definition}[\cite{MR0050886} Chapter II Definition 8.7] A simplicial complex $\mathcal{X}$ is \textit{ordered} if there is a binary relation $\leq$ on the vertices of $\mathcal{X}$ with the following properties. \begin{itemize}
\item[(P1)] $(u \leq v \textrm{ and } v\leq u)\Rightarrow u=v$.
\item[(P2)] If $u,v$ are distinct, $(u \leq v \textrm{ or } v\leq u) \Leftrightarrow \textrm{ $u$ and $v$ are adjacent}$.
\item[(P3)] If $u,v,w$ are vertices of a 2--simplex then $(u \leq v \textrm{ and } v\leq w)\Rightarrow u\leq w$. \end{itemize} \end{definition}
\begin{remark} It is clear that in searching for such a relation we may use the following weaker version of (P2). \begin{itemize}
\item[(P2)$'$] If $u,v$ are adjacent then $(u\leq v \textrm{ or } v\leq u)$. \end{itemize} To see this, note that we can remove all relationships between non-adjacent vertices. \end{remark}
\begin{definition}[\cite{MR0050886} Chapter II Definition 8.8] Let $\mathcal{X}_1,\mathcal{X}_2$ be ordered simplicial complexes. We define the simplicial complex $\mathcal{X}_1\times\mathcal{X}_2$. Its vertices are given by the set $\V(\mathcal{X}_1)\times\V(\mathcal{X}_2)$. Vertices $(u_0,v_0),\ldots,(u_n,v_n)$ span an $n$--simplex if the following hold. \begin{itemize}
\item $\{u_0,\cdots,u_n\}$ is an $m$--simplex of $\mathcal{X}_1$ for some $m\leq n$.
\item $\{v_0,\cdots,v_n\}$ is an $m$--simplex of $\mathcal{X}_2$ for some $m\leq n$.
\item The relation defined by $(u,v)\leq (u',v')\Leftrightarrow(u\leq u' \textrm{ and } v\leq v')$ gives a total linear order on $(u_0,v_0),\ldots,(u_n,v_n)$. \end{itemize} \end{definition}
\begin{remark} The projection maps on the vertices extend to simplicial maps of the complexes. \end{remark}
\begin{theorem}[\cite{MR0050886} Chapter II Lemma 8.9]\label{productcomplextheorem}
The map $|\mathcal{X}_1\times\mathcal{X}_2|\to|\mathcal{X}_1|\times|\mathcal{X}_2|$ induced by projection is a homeomorphism. \end{theorem}
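\begin{remark} As a small example, suppose $\mathcal{X}_1,\mathcal{X}_2$ are each a single ordered edge, with vertices $u_0\leq u_1$ and $v_0\leq v_1$ respectively. Then $\mathcal{X}_1\times\mathcal{X}_2$ has four vertices $(u_i,v_j)$ and exactly two 2--simplices, namely \[\{(u_0,v_0),(u_0,v_1),(u_1,v_1)\}\quad\textrm{and}\quad\{(u_0,v_0),(u_1,v_0),(u_1,v_1)\},\] since $(u_0,v_1)$ and $(u_1,v_0)$ are not comparable. This triangulates the square $|\mathcal{X}_1|\times|\mathcal{X}_2|$ along a diagonal, as Theorem \ref{productcomplextheorem} requires. \end{remark}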
\begin{definition}
Denote by $\mathcal{Z}$ the simplicial complex with a vertex at each integer and an edge joining $n-1$ to $n$ for each $n\in\mathbb{Z}$, so that $|\mathcal{Z}|\cong\mathbb{R}$. Order the vertices of $\mathcal{Z}$ using the usual order $\leq$ on $\mathbb{Z}$. \end{definition}
\section{Product regions and detecting adjacency}\label{mfldssection}
In this section we recall some results from 3--manifold theory that we will need. We give full proofs of several closely related results, as later arguments are sensitive to the details. The concepts covered in this section are generally well-known, although some of the details are specific to the case at hand.
\begin{definition}[\cite{przytycki-2010} Section 3] Let $M$ be a connected 3--manifold, and let $S,S'$ be (possibly disconnected) surfaces properly embedded in $M$.
Call $S$ and $S'$ \textit{almost transverse} if, given a component $S_0$ of $S$ and a component $S'_0$ of $S'$, they either coincide or intersect transversely.
Call the surfaces \textit{almost disjoint} if, given a component $S_0$ of $S$ and a component $S'_0$ of $S'$, they either coincide or are disjoint. Say they are \textit{$\partial$--almost disjoint} if $\partial S=\partial S'$ and, given a component $S_0$ of $S$ and a component $S'_0$ of $S'$, they either coincide or have disjoint interiors.
Say $S$ and $S'$ \textit{bound a product region} if the following holds. There is a compact surface $T$, a finite collection $\rho_T\subseteq \partial T$ of arcs and simple closed curves and a map of $N=(T\times\mathrm{I})/\!\sim$ into $M$ that is an embedding on the interior of $N$ and has the following properties. \begin{itemize}
\item $T\times\{0\}=S\cap N$ and $T\times\{1\}=S'\cap N$.
\item $\partial N\setminus (T\times\partial\mathrm{I})\subseteq\partial M$. \end{itemize} Here $\sim$ collapses $\{x\}\times\mathrm{I}$ to a point for each $x\in\rho_T$. The \textit{horizontal boundary} of $N$ is $(T\times\partial\mathrm{I})/\!\sim$. Say $S$ and $S'$ have \textit{simplified intersection} if they do not bound a product region. \end{definition}
\begin{proposition}[\cite{MR1315011} Proposition 4.8; see also \cite{MR0224099} Proposition 5.4 and Corollary 3.2]\label{productregionsprop} Let $M$ be a $\partial$--irreducible Haken manifold. Let $S,S'$ be incompressible, $\partial$--incompressible surfaces properly embedded in $M$ in general position. \begin{enumerate}
\item If $S$ and $S'$ are isotopic then there is a product region between them.
\item Suppose $S\cap S'\neq\emptyset$, but $S$ can be isotoped to be disjoint from $S'$. Then there is a product region between $S$ and $S'$. \end{enumerate} \end{proposition}
\begin{remark} We will usually apply this proposition with $M=\mathbb{S}^3\setminus\nhd(L)$ for a link $L$. If $L$ is neither a split link nor the unknot then $M$ is Haken and $\partial$--irreducible. Furthermore, if $S,S'$ are taut Seifert surfaces for $L$ then they are properly embedded, incompressible and $\partial$--incompressible. This remains true if we only consider some components of such surfaces. \end{remark}
\begin{remark}
If $\partial S$ and $\partial S'$ are transverse, the product region $N$ given by Proposition \ref{productregionsprop} is always embedded in $M$. However, we will want to apply the proposition when Seifert surfaces $S$ and $S'$ may have components that either coincide or have boundaries that coincide. It continues to hold in this situation, but may result in a product region that is not embedded. There are two ways that this can occur.
One option is that the components $S_0$ of $S$ and $S'_0$ of $S'$ that bound $N$ coincide. Then $N$ is the whole of $M$, the surfaces $S$ and $S'$ are connected, and the link $L$ is fibred with fibre $S$. As we are only interested in non-fibred links, this case will not arise.
The second possibility is that $S_0$ and $S'_0$ do not coincide but their boundaries do. Then $\partial N$ covers an entire component of $\partial\!\nhd(L)$, and $T\times\{0\}$ meets $T\times\{1\}$ along at least one component of $\partial T$. See Figure \ref{edgespic1}. \end{remark}
\begin{corollary}\label{isotopyrelbdycor} Suppose $L$ is not fibred. If $S,S'$ are isotopic by an isotopy fixing their boundaries then there is a product region $N=(T\times\mathrm{I})/\!\sim$ between them with $\rho_T=\partial T$. \end{corollary} \begin{proof} Suppose no such product region exists. By Proposition \ref{productregionsprop} there is a product region $N$ between $S$ and $S'$. This product region $N$ meets one component of each of $S$ and $S'$, and the boundaries of these components coincide. By deleting other components if necessary, we may assume $S,S'$ are connected.
Let $\tilde{S}$ be a lift of $S$ to the infinite cyclic cover. The isotopy from $S$ to $S'$ lifts to an isotopy from $\tilde{S}$ to a lift $\tilde{S}'$ of $S'$. Note that $\partial\tilde{S}=\partial\tilde{S}'$. By hypothesis, the isotopy from $S'$ to $S$ defined by the product region $N$ moves the boundary of $S'$. As the boundaries of $S$ and $S'$ coincide, this isotopy therefore takes each component of $\partial S'$ once around the torus component of $\partial M$ on which it lies. Hence the isotopy defined by $N$ lifts to an isotopy from $\tilde{S}'$ to either $\tau(\tilde{S})$ or $\tau^{-1}(\tilde{S})$. Thus $\tilde{S}$ is isotopic to $\tau(\tilde{S})$. Again by Proposition \ref{productregionsprop} there is a product region between $\tilde{S}$ and $\tau(\tilde{S})$. This contradicts that $L$ is not fibred. \end{proof}
\begin{remark} The condition that $L$ is not fibred is not actually necessary for this result, only for our proof. \end{remark}
\begin{proposition}[\cite{MR0224099} Corollary 3.2]\label{surfaceinproductprop} Suppose surfaces $S_0,S_1$ bound a product region $N$. Let $S'$ be an incompressible surface that is transverse to $S_0,S_1$. Suppose $S'\cap \Int(N)\neq\emptyset$ but $S'\cap (S_1\cap N)=\emptyset$. Then each component of $S'\cap\Int(N)$ bounds a product region with a subsurface of $S_0$. In particular, if additionally $S'\cap (S_0\cap N)=\emptyset$ then this component of $S'$ is parallel to those of $S_0,S_1$ that bound $N$. \end{proposition}
\begin{proposition}[\cite{przytycki-2010} Corollary 3.4]\label{disjointinteriorsprop} Let $M_a,M_b$ be proper 3--submanifolds of $\tilde{M}$ such that $\partial M_a$ and $\partial M_b$ are unions of lifts of minimal genus Seifert surfaces that are almost transverse with simplified intersection. If $M_a$ can be isotoped to have interior disjoint from $M_b$ then the interior of $M_a$ is disjoint from $M_b$. \end{proposition}
\begin{lemma}[\cite{przytycki-2010} Lemma 3.5]\label{maketransverselemma} Let $R_1,R_2,R_3$ be minimal genus Seifert surfaces. Then they can be isotoped to be pairwise almost transverse and have pairwise simplified intersection. \end{lemma}
\begin{proposition}[\cite{przytycki-2010} Proposition 3.2]\label{realisedistanceprop} In the notation of Proposition \ref{msdistanceprop}, if $R'$ is almost transverse to and has simplified intersection with $R$ then it minimises $\max\{n:V_{R'}\cap V_n\neq\emptyset\}-\min\{n:V_{R'}\cap V_n\neq\emptyset\}$. \end{proposition}
The following criterion allows us to test for adjacency under Definition \ref{fullmsdefn}.
\begin{lemma}\label{arcintersectionlemma} Let $R_1, R_2$ be fixed, almost disjoint, taut Seifert surfaces for a link $L$. Then $R_1,R_2$ demonstrate that their isotopy classes are at most distance 1 apart in $\ms(L)$ if and only if the following holds for all pairs $(x,y)$ of points of $R_1\setminus R_2$.
Choose a product neighbourhood $\nhd(R_1)=R_1\times[-1,1]$ in $M$ for $R_1$ with $R_1=R_1\times\{0\}$, such that $R_1\times\{1\}$ lies on the positive side of $R_1$. Let $\rho\colon \mathrm{I}\to M$ be any path with $\rho(0)=(x,1)$ and $\rho(1)=(y,-1)$ that is disjoint from $R_1$ and transverse to $R_2$. Then the algebraic intersection number of $\rho$ and $R_2$ is 1. \end{lemma} \begin{proof} Suppose the condition holds for all pairs $(x,y)$. If $R_1,R_2$ coincide everywhere then they are distance 0 apart. Assume otherwise, and let $x_0$ be a point of $R_1\setminus R_2$. Choose a lift $V_{R_2}$ of $M\setminus R_2$, and let $\tilde{x}_0$ be the lift of $x_0$ that lies in $V_{R_2}$. Let $V_{R_1}$ be the lift of $M\setminus R_1$ that lies above $\tilde{x}_0$, and let $\tilde{R}_1$ be the lift of $R_1$ that lies between $V_{R_1}$ and $\tau^{-1}(V_{R_1})$. Then $\tilde{x}_0$ lies on $\tilde{R}_1$, and $V_{R_1}$ meets $V_{R_2}$ and $\tau(V_{R_2})$. We wish to show that these are the only lifts of $M\setminus R_2$ that $V_{R_1}$ meets. Suppose otherwise.
First suppose that $V_{R_1}$ meets $\tau^{-1}(V_{R_2})$. Then there is a path $\rho$ in $M$ that runs from just above $x_0$ to the projection of a point in $V_{R_1}\cap\tau^{-1}(V_{R_2})$, that is disjoint from $R_1$ and that has algebraic intersection $-1$ with $R_2$. There is also a path $\rho'$ from this point back to above $x_0$ that is disjoint from $R_2$. We may assume both paths are transverse to $R_2$. This forms a closed curve that has algebraic intersection $-1$ with $R_2$. It therefore has algebraic intersection $-1$ with $R_1$.
Consider the first point $x_1$ at which $\rho'$ crosses $R_1$. Then $x_1\in R_1\setminus R_2$. If $\rho'$ passes through $R_1$ in the positive direction at $x_1$ then the section of the path $\rho\cup\rho'$ from $(x_0,1)$ to $(x_1,-1)$ contradicts the hypothesis as it has algebraic intersection $-1$ with $R_2$ instead of $1$. Thus $\rho'$ passes through $R_1$ in the negative direction at $x_1$. Stop the path just above $x_1$, at $(x_1,1)$, and add in a path that runs to $(x_1,-1)$ in $M\setminus R_1$. This final section of path has algebraic intersection $1$ with $R_2$ by the hypothesis. Then the complete path gives a contradiction with the hypothesis, as it has algebraic intersection 0 with $R_2$. Hence $V_{R_1}$ lies entirely above $\tau^{-1}(V_{R_2})$.
Now suppose $V_{R_1}$ meets $\tau^2(V_{R_2})$. Then there is a path $\rho$, disjoint from $R_1$, from $(x_0,1)$ to the projection of a point in $V_{R_1}\cap\tau^2(V_{R_2})$ that has algebraic intersection $2$ with $R_2$. Again, there is a second path $\rho'$ from there to $(x_0,1)$ that is disjoint from $R_2$. The closed curve has algebraic intersection 2 with $R_2$ and $R_1$. Consider the first point $x_1$ at which $\rho'$ crosses $R_1$. Then $x_1\in R_1\setminus R_2$. If $\rho'$ passes through $R_1$ in the positive direction at $x_1$ then the path up to this point contradicts the hypothesis, as it has intersection 2 with $R_2$ instead of 1. Thus it passes through $R_1$ in the negative direction. Stop the path just above $x_1$, at $(x_1,1)$, and add in a path that runs to $(x_1,-1)$ in $M\setminus R_1$. This final section of path has algebraic intersection $1$ with $R_2$ by the hypothesis. Thus the complete path gives a contradiction with the hypothesis, as it has intersection 3 with $R_2$. Hence we have the required result.
Conversely, suppose any lift of $M\setminus R_1$ meets at most two lifts of $M\setminus R_2$. Choose $x,y,\rho$ as in the condition to be checked. Take the lift $\tilde{\rho}$ of $\rho$ with $\tilde{\rho}(0)$ in a fixed lift $V_{R_2}$ of $M\setminus R_2$. We may use $\tilde{\rho}$ in defining the lift $V_{R_1}$ of $M\setminus R_1$. Therefore $\tilde{\rho}$ is contained in two lifts of $M\setminus R_2$ and the lift of $R_2$ between them. One of these two lifts is $V_{R_2}$, since this contains the lift $\tilde{\rho}(0)$ of $(x,1)$. In addition, the lift of $(x,-1)$ lies in $\tau(V_{R_2})$. Thus the lift $\tilde{\rho}(1)$ of $(y,-1)$ is in $V_{R_2}$ or in $\tau(V_{R_2})$. We must show that it is in $\tau(V_{R_2})$. If not then it lies in $V_{R_2}$. Then the lift of $(y,1)$ lies in $\tau^{-1}(V_{R_2})$, which is a contradiction. \end{proof}
\begin{lemma}\label{almostdisjointlemma} Let $R_a$ be a taut Seifert surface for $L$. Suppose $R_{b,0}^{}, R_{b,0}'$ are two copies of a component of a taut Seifert surface for $L$ that are disjoint from $R_a$ and are not isotopic to any component of it. Then $R_{b,0}^{},R_{b,0}'$ are isotopic by an isotopy that does not meet $R_a$. \end{lemma} \begin{proof} By a small isotopy disjoint from $R_a$, we may ensure that $R_{b,0}^{},R_{b,0}'$ are transverse. Since they are isotopic, there is a product region $N$ between them.
If $N$ meets $R_a$, it contains a whole component of $R_a$, which is then isotopic to each of the horizontal boundary components of $N$. This contradicts that $R_{b,0}^{}$ and $R_{b,0}'$ are not isotopic to any component of $R_a$. Thus $N$ is disjoint from $R_a$. If $R_{b,0}^{}\cap R_{b,0}'\neq \emptyset$ then the isotopy defined by $N$ reduces $|R_{b,0}^{}\cap R_{b,0}'|$. If $R_{b,0}^{}\cap R_{b,0}'=\emptyset$ then the isotopy makes $R_{b,0}^{}$ and $R_{b,0}'$ coincide. \end{proof}
\begin{lemma}\label{makedisjointlemma} Let $R_a,R_b$ be adjacent vertices of $\ms(L)$. Then $R_a,R_b$ can be isotoped so they are disjoint and realise their adjacency.
Suppose there are components $R_{a,0}$ of $R_a$ and $R_{b,0}$ of $R_b$ that can be made to coincide, so there is a product region between these components. The side of $R_a$ on which this product region lies is determined by the choice of $R_a,R_b$. \end{lemma} \begin{proof} We regard $R_b$ as fixed, and isotope $R_a$. Isotope $R_a$ to realise the adjacency. Suppose they are not disjoint, and pick a component $R_{a,0}$ of $R_a$ that is not disjoint from $R_b$. Because the surfaces realise their adjacency, $R_{a,0}$ cannot cross $R_b$. Therefore $R_{a,0}$ can be pushed off $R_b$ by a small isotopy. If the two components do not coincide, there is no choice as to which direction to push $R_{a,0}$, and it is clear that the condition in Lemma \ref{arcintersectionlemma} continues to hold.
If $R_{a,0}$ coincides with a component of $R_b$, it is possible to push it off in either direction, creating a product region. We will see that the choice of direction is forced upon us by wanting the condition in Lemma \ref{arcintersectionlemma} to continue to hold.
As $R_a\neq R_b$, at least one component $R_{a,1}$ of $R_a$ does not coincide with any component of $R_b$. Fix a point $x_0$ of $R_{a,1}$, and choose a product neighbourhood $\nhd(R_a)=R_a\times[-1,1]$ such that $R_a=R_a\times\{0\}$. For each point $y$ of $R_{a,0}$, choose a path $\rho$ from $(x_0,1)$ to $(y,-1)$ that is disjoint from $R_a$ and transverse to $R_b$. Since $\rho$ is contained in $M\setminus R_a$, it has algebraic intersection 0 or 1 with $R_b$. Suppose that this number is not both well-defined and constant on $R_{a,0}$. That is, suppose there are such points $y_0,y_1$ and paths $\rho_0,\rho_1$ such that one gives the value 0 while the other gives the value 1. Let $\rho$ be a path from $y_0$ to $y_1$ in $R_{a,0}\times\{-1\}$. Then $\rho\cup\rho_0\cup\rho_1$ forms a closed curve that has intersection 0 with $R_a$ but intersection 1 with $R_b$. This is not possible. We therefore have a well-defined value for the algebraic intersection of $R_b$ and a path as described. If this value is 1, push $R_{a,0}$ downwards, and otherwise push it upwards. Then, for $y\in R_{a,0}$, any path from $(x_0,1)$ to $(y,-1)$ that is disjoint from $R_a$ has intersection 1 with $R_b$.
Now pick points $x,y$ and path $\rho$ as in the condition in Lemma \ref{arcintersectionlemma}. Isotope $\rho$ so that it decomposes into three paths disjoint from $R_a$, one from $(x,1)$ to $(x_0,-1)$, one from $(x_0,-1)$ to $(x_0,1)$ and one from $(x_0,1)$ to $(y,-1)$. The outer two paths have intersection $1$ with $R_b$ while the middle one has intersection $-1$ with $R_b$. Thus $\rho$ has intersection $1$ with $R_b$. \end{proof}
\section{An ordering on the vertices of $\ms(L_i)$}\label{orderingsection}
In \cite{przytycki-2010}, a partial ordering $<_{S}$ is defined on the vertices of $\ms(L)$ for a link $L$, relative to a fixed vertex $S$. This ordering only compares adjacent vertices.
\begin{definition}[\cite{przytycki-2010} Section 5]\label{orderingdefn} Let $R,R',S$ be vertices of $\ms(L)$ with $R,R'$ adjacent. Isotope the surfaces so that $R,R'$ are almost transverse to and have simplified intersection with $S$, and so that $R,R'$ are almost disjoint with simplified intersection. Set $M=\mathbb{S}^3\setminus\nhd(L)$. Let $\tilde{M}$ denote the infinite cyclic cover of $M$, and let $\tau$ be the generating covering transformation (in the positive direction). Let $V_{S}$ be a lift of $M\setminus S$.
Let $V_{R}$ be the lift of $M\setminus R$ such that $V_{R}\cap V_{S}\neq\emptyset$ but $V_{R}\cap \tau(V_{S})=\emptyset$. Finally, let $V_{R'}$ be the lift of $M\setminus R'$ such that $V_{R'}\cap V_{R}\neq\emptyset$ but $V_{R'}\cap\tau(V_{R})=\emptyset$. See Figure \ref{orderingpic1}.
Then $R' <_{S} R$ if $V_{R'}\cap V_S\neq\emptyset$.
\begin{figure}\label{orderingpic1}
\end{figure}
\end{definition}
\begin{lemma}[\cite{przytycki-2010} Lemma 5.3]\label{orderinglemma1} Let $R, R'$ be adjacent vertices, and $S$ any vertex. Then $R' <_{S}R$ or $R <_{S}R'$. \end{lemma}
\begin{lemma}[\cite{przytycki-2010} Lemma 5.4]\label{orderdistancelemma} If $\dist_{\ms(L)}(R',S)<\dist_{\ms(L)}(R,S)$ then ${R<_S R'}$. \end{lemma}
\begin{lemma}[\cite{przytycki-2010} Lemma 5.5]\label{orderinglemma2} There are no $R_1,\cdots,R_k$, for $k\geq 2$, with $R_1 <_{S} R_2 <_{S}\cdots<_{S}R_k <_{S}R_1$. \end{lemma}
Now choose $L,L_1,L_2$ as in the statement of Theorem \ref{connectsumthm}. These will remain fixed for the remainder of this paper.
\begin{definition} For $i=1,2$, let $K_i$ be the link component of $L_i$ along which the connected sum is performed. Let $T_0$ be a fixed copy of $\mathbb{S}^2$ that divides $L$ into $L_1$ and $L_2$, and choose a product neighbourhood $T_0\times[1,2]$ such that $T_0=T_0\times\left\{\frac{3}{2}\right\}$ and $T_0\times\{i\}$ lies on the same side of $T_0$ as $L_i$ for $i=1,2$. We further require that both arcs of $L\cap(T_0\times[1,2])$ are of the form $\{x\}\times[1,2]$ for some $x\in T_0$.
Next, choose a regular neighbourhood $\nhd(L)$ of $L$, and let $M=\mathbb{S}^3\setminus\nhd(L)$. In addition, let $M_0$ denote $M\cap (T_0\times[1,2])$ and for $i=1,2$ denote by $M_i$ the component of $M\setminus(T_0\times(1,2))$ that meets $L_i$. See Figure \ref{orderingpic2}a. \end{definition}
\begin{figure}\label{orderingpic2}
\end{figure}
\begin{definition} Choose a taut Seifert surface $S_0$ for $L$. We will use $S_0$ as a basepoint for $\ms(L)$. Isotope $S_0$ to have minimal intersection with $T_0$. Then $S_0\cap T_0$ is a single arc $\sigma^*$. Further ensure that $S_0\cap M_0=\sigma^*\times[1,2]$. Let $S_0^*$ denote this copy of $S_0$, considered as a fixed surface rather than up to isotopy. Again, see Figure \ref{orderingpic2}a.
The link made up of the part of $L$ on the $M_1$ side of $T_0$ together with the arc $\sigma^*$ is $L_1$, and $M_1$ is homeomorphic to $\mathbb{S}^3\setminus\nhd(L_1)$. The same is true for $L_2$.
The sphere $T_0$ divides $S_0^*$ into Seifert surfaces $S_1,S_2$ for $L_1,L_2$ respectively. Since $S_0$ is taut, so are $S_1,S_2$. Let $S_1^*, S_2^*$ be these surfaces, again considered as fixed. Define curves $\lambda^*,\lambda_1^*,\lambda_2^*$ on $\partial M,\partial M_1,\partial M_2$ respectively, also seen as fixed, by $\lambda^*=\partial S_0^*$ and $\lambda_i^*=\partial S_i^*$ for $i=1,2$. By an appropriate choice of $\nhd(L_i)$ for $i=1,2$ we may ensure that $(\sigma^*\times\{i\})\cap M\subset\lambda_i^*$. See Figure \ref{orderingpic2}b. \end{definition}
\begin{definition} Define $\leq$ on $\V(\ms(L_1))$ by ${R\leq R'} \Leftrightarrow {(R<_{S_1}R'} \textrm{ or }{R=R')}$. Similarly, define $\leq$ on $\V(\ms(L_2))$ by ${R\leq R'} \Leftrightarrow {(R<_{S_2}R'} \textrm{ or } {R=R')}$. \end{definition}
\begin{corollary} The pairs $(\ms(L_1),\leq),(\ms(L_2),\leq)$ are ordered simplicial complexes. \end{corollary} \begin{proof} Lemma \ref{orderinglemma1} gives (P2)$'$, and Lemma \ref{orderinglemma2} gives (P1). Together they give (P3). \end{proof}
These two orderings, together with that on $\V(\mathcal{Z})$, allow us to apply Theorem \ref{productcomplextheorem}. We will use this to give a triangulation of $|\ms(L_1)|\times|\ms(L_2)|\times\mathbb{R}$ that agrees with the triangulation of $\ms(L)$ in a natural way.
\section{Choosing representatives of isotopy classes}\label{repssection}
Given taut Seifert surfaces $R_i$ of $L_i$ for $i=1,2$, we can isotope the surfaces so that $\partial R_i=\lambda_i^*$. Having done so, we can glue each of the two surfaces to the rectangle $\sigma^*\times[1,2]$ to form a taut Seifert surface $R$ of $L$ with $\partial R=\lambda^*$.
An isotopy of any Seifert surface $R$ for $L$ can be split into an isotopy that fixes $\partial R$ and an isotopy supported in a neighbourhood of $\partial M$. Thus an isotopy class relative to the boundary corresponds to a fixed element of the isotopy class together with a winding number for each boundary component. To measure the winding numbers, we need to decide what it means to have winding number 0. We use $S^*_i$ as a basepoint for $\ms(L_i)$, setting this to have winding number 0 at each boundary component. We want to define what it means for another surface to have winding number 0. In practice we will only be concerned with the winding number at $K_i$, but it is convenient to choose surfaces fixed at every boundary component.
Thus our present aim is to find a fixed representative $R^*$ for each vertex $R$ of $\ms(L_i)$. We choose these such that $\partial R^*=\lambda_i^*$. We also want these representatives to interact well with regard to the ordering $\leq$.
\begin{definition} Let $R,R'$ be $\partial$--almost disjoint taut Seifert surfaces for $L_i$. Pick a component $K'$ of $L_i$, and consider $R,R'$ near $K'$. It may be that the components that meet $K'$ coincide. If not, one of the two surfaces lies `above' the other, where this is measured in the positive direction around $K'$. Write $R\leq_{K'} R'$ if either $R'$ lies above $R$ or the two coincide. \end{definition}
\begin{definition} Define a relation $\leq_{\partial}$ on isotopy classes of taut Seifert surfaces relative to the boundary by $R\leq_{\partial} R'$ if there are representatives $R_b$ of $R$ and $R'_a$ of $R'$ such that $R_b,R'_a$ are $\partial$--almost disjoint and $R_b\leq_{K'}R'_a$ for each component $K'$ of $L_i$. \end{definition}
\begin{lemma} The relation $\leq_{\partial}$ is antisymmetric. \end{lemma} \begin{proof} Suppose otherwise. Choose $R,R'$ with $R\leq_{\partial}R'\leq_{\partial}R$ and $R\neq R'$. Consider $R'$ as fixed, and choose representatives $R_a,R_b$ of $R$ such that $R_a$ shows that $R'\leq_{\partial} R$ and $R_b$ shows that $R\leq_{\partial}R'$. Note that $\partial R'=\partial R_a=\partial R_b=\lambda^*_i$.
The surface $R_a$ might be disconnected. However, at least one component of $R_a$ is not isotopic to any component of $R'$. Remove all other components of $R_a$, and the corresponding ones of $R_b$. Also remove all components of $R'$ that are disjoint from the new $R_a$.
As $R_a,R_b$ only meet a neighbourhood of $R'$ along their boundaries, where we know how they are positioned, we see that $R_a,R_b$ can be put into general position by an isotopy away from a neighbourhood of $R'$. Choose this isotopy to minimise $|R_a\cap R_b|$.
Consider the product region $N$ between $R_a$ and $R_b$ given by Corollary \ref{isotopyrelbdycor}. The isotopy of $R_a$ defined by $N$ does not move $\partial R_a$, so our positioning of $R_a,R_b$ means that $N$ must meet $R'$. Let $R'_0$ be a component of $R'$ that meets $N$. Then $R'_0\subset N$. Since $R'_0$ is $\partial$--almost disjoint from $R_a$ and from $R_b$, Proposition \ref{surfaceinproductprop} gives that $R'_0$ is isotopic to $R_a$, which is a contradiction. \end{proof}
\begin{proposition}\label{representativesprop} It is possible to choose a representative $R^*$ for each vertex $R$ of $\ms(L_i)$ such that the following conditions hold for any pair $(R_a,R_b)$ of adjacent vertices of $\ms(L_i)$. \begin{itemize}
\item $\partial R_a^*=\partial R_b^*=\lambda_i^*$.
\item There are $\partial$--almost disjoint copies $R'_j$ of $R_j$, for $j=a,b$, that are isotopic to $R_j^*$ via isotopies fixing $\partial R'_j$ and that demonstrate their adjacency.
\item $R_b^*\leq_{\partial} R_a^*$ (equivalently, $R'_b\leq_{\partial} R'_a$) if and only if $R_b\leq R_a$. \end{itemize} \end{proposition} \begin{proof} Without loss of generality, $i=1$. Let $\tilde{M}_1$ be the infinite cyclic cover of $M_1$, with covering transformation $\tau_1$. As in Proposition \ref{msdistanceprop}, construct a lift $V_n$ of $M_1\setminus S_1^*$ for $n\in\mathbb{Z}$. Let $\tilde{S}_1$ be the lift of $S_1^*$ that lies between $V_0$ and $V_1$.
We have already chosen the representative $S_1^*$ for $S_1$. Let $R$ be a vertex of $\ms(L_1)$ other than $S_1$. Isotope $R$ to be almost transverse to and have simplified intersection with $S_1^*$. Let $V_R$ be the lift of $M_1\setminus R$ such that $V_R\cap V_0\neq\emptyset$ but $V_R\cap V_1=\emptyset$, and let $\tilde{R}$ be the lift of $R$ that lies between $V_R$ and $\tau_1(V_R)$. From Proposition \ref{realisedistanceprop} we see that $R$ minimises $\max\{n:V_{R}\cap V_n\neq\emptyset\}-\min\{n:V_{R}\cap V_n\neq\emptyset\}$. By Proposition \ref{msdistanceprop} this means that $\{n:V_R\cap V_n\neq\emptyset\}=\{0,-1,\ldots,-\dist_{\ms(L_1)}(S_1,R)\}$.
By an isotopy close to the boundary, move $\partial R$ around $\partial\!\nhd(L_1)$ until $\partial \tilde{R}$ coincides with $\partial \tilde{S}_1$. Note that this does not change $\{n:V_R\cap{} V_n\neq\emptyset\}$. Take the resulting surface to be $R^*$.
Our first aim is to show that the choice of $R^*$ is well-defined up to isotopy relative to the boundary. That is, we wish to check that the choice of winding number is independent of the choice of isotopy made when constructing $R^*$. Let $R'$ be any other copy of $R$. Construct $(R')^*$ as described. Then $\partial \tilde{R}=\partial \tilde{R}'=\partial \tilde{S}_1$. Note that we do not know how $R$ and $R'$ intersect. However, $R$ is isotopic to $R'$ in $M_1$. This isotopy lifts to an isotopy in $\tilde{M}_1$ from $\tilde{R}$ to a lift of $R'$. This lift is $\tau_1^m(\tilde{R}')$ for some $m\in\mathbb{Z}$. The isotopy also takes $V_{R}$ to $\tau_1^m(V_{R'})$.
Suppose $m\neq 0$. Without loss of generality, $m>0$. Then the submanifold $\tau_1^m(V_{R'}\cup\tilde{R}'\cup\tau_1^{-1}(\tilde{R}'))$ of $\tilde{M}_1$ has interior disjoint from the submanifold $\tau_1^{-k}(V_0\cup \tilde{S}_1\cup\tau_1^{-1}(\tilde{S}_1))$, where $k=\dist_{\ms(L_1)}(S_1,R)$. Thus by Proposition \ref{disjointinteriorsprop} we see that $V_R$ is disjoint from $V_{-k}$. This contradicts that $\{n:V_R\cap V_n\neq\emptyset\}=\{0,-1,\ldots,-k\}$. Hence $m=0$. We can therefore modify the isotopy from $R$ to $R'$ near the boundary to keep $\partial R$ fixed throughout. Thus $R^*$ and $(R')^*$ are isotopic relative to the boundary, as required.
It remains to show that our chosen representatives have the required properties. Let $R_a,R_b$ be adjacent vertices of $\ms(L_1)$ with $R_b\leq R_a$. Position them relative to $S_1^*$ and each other as in Lemma \ref{maketransverselemma}. Then by Proposition \ref{realisedistanceprop} we know that $R_a,R_b$ realise their adjacency, and so are almost disjoint.
Now consider the lifts used to demonstrate that $R_b<_{S_1} R_a$, as in Definition \ref{orderingdefn}. Taking $V_0$ as the lift of $M_1\setminus S_1^*$ we see that $V_{R_a}$ is the required lift of $M_1\setminus R_a$. Let $V_{R_b}'$ be the required lift of $M_1\setminus R_b$. Then $V_{R_b}'$ meets $V_0$ but does not meet $V_1$. Thus $V_{R_b}'=V_{R_b}$. This means that $\tilde{R}_b$ lies below $\tilde{R}_a$ in $\tilde{M}_1$. Now isotope $R_a$ and $R_b$ near the boundary to form $R_a^*$ and $R_b^*$. Then it is clear that $\partial R_a^*=\partial R_b^*=\lambda_1^*$ and $R_b^*<_{\partial} R_a^*$. In addition, they continue to realise their adjacency and are $\partial$--almost disjoint. \end{proof}
\begin{remark}\label{pushdownremark} Suppose $R_a$ and $R_b$ are positioned as required, and that $R_b\leq R_a$. Suppose a component of $R_b$ coincides with a component of $R_a$. By Lemma \ref{makedisjointlemma} there is a unique direction we can push the component of $R_b$ off that of $R_a$ so that they continue to realise their adjacency. By examining the construction above we see that this direction is downwards. To see this, note that the components must coincide when we lift them to the infinite cyclic cover, as the boundaries of the lifts coincide. \end{remark}
\section{Mapping the vertices}\label{verticessection}
\begin{definition}\label{psidefn} Define a map $\Psi\colon\V(\ms(L_1))\times\V(\ms(L_2))\times\mathbb{Z}\to\V(\ms(L))$ as follows. Let $(R_1,R_2,n)\in\V(\ms(L_1))\times\V(\ms(L_2))\times\mathbb{Z}$. Take the copy of $R_1^*$ in $M_1$, and the copy of $R_2^*$ in $M_2$. Join these together by a rectangle in $M_0$ that winds $n$ times around $L$. Here we measure the winding number of the rectangle around the arc of $L$ that runs through $M_0$ and is oriented from $L_1$ to $L_2$, with respect to where $\lambda^*$ lies on the boundary of the neighbourhood of this arc. Let $R$ be the resulting surface. Note that $R$ is a taut Seifert surface for $L$. If $n\neq 0$ then $\partial R\neq\lambda^*$, but $[\partial R]=[\lambda^*]$. We set $\Psi(R_1,R_2,n)$ to be the isotopy class of $R$. \end{definition}
\begin{lemma}\label{surjectivelemma} $\Psi$ is surjective. \end{lemma} \begin{proof} Let $R$ be a taut Seifert surface for $L$. Isotope $R$ to have minimal intersection with $T_0$, and so that $R\cap M_0=\rho\times[1,2]$ for some arc $\rho\subset T_0$. Then $T_0$ cuts $R$ into taut Seifert surfaces $R_i$ of $L_i$ for $i=1,2$. Isotope $R_i$ to $R_i^*$. This isotopy may move $\partial R_i$ in $M_i$. However, the isotopy of $M_i$ can be extended to an isotopy of $M$ by also isotoping the rectangle $R\cap M_0$ in $M_0$ near $T_0\times \{i\}$. After these isotopies, we can read off the winding number $n$ of the rectangle. Then $\Psi(R_1,R_2,n)=R$. \end{proof}
\pagebreak
\begin{lemma}\label{injectivelemma} $\Psi$ is injective. \end{lemma} \begin{proof} Suppose $\Psi(R_{a,1},R_{a,2},n_a)=\Psi(R_{b,1},R_{b,2},n_b)$. We first aim to show that $R_{a,1}=R_{b,1}$ and $R_{a,2}=R_{b,2}$. Let $R_a,R_b$ be the fixed surfaces constructed by $\Psi$. Note that, by construction, $R_a$ and $R_b$ each meet $T_0$ in a single arc, and these arcs either coincide or are disjoint. We can ensure that the arcs are disjoint using an isotopy of $R_a$ that breaks into an isotopy of $R_{a,1}$ and an isotopy of $R_{a,2}$. By an isotopy of $R_a$ keeping $T_0$ fixed pointwise, $R_a$ and $R_b$ can be made transverse.
As the surfaces are isotopic, there is a product region $N$ between them. If $\Int(N)$ meets $T_0$ then $T_0\cap N$ is a product disc in the sutured manifold $N$, and divides $N$ into product regions $N_1,N_2$.
The isotopy of $R_a$ defined by $N$ therefore breaks into an isotopy of $R_{a,1}$ and an isotopy of $R_{a,2}$. It either reduces $|R_a\cap R_b|$ or makes components of $R_a$ and $R_b$ coincide. Inductively we find that $R_{a,1}=R_{b,1}$ and $R_{a,2}=R_{b,2}$.
It remains to show that $\Psi(R_{a,1},R_{a,2},n_a)\neq\Psi(R_{a,1},R_{a,2},n_b)$ for $n_a> n_b$. Suppose otherwise. Without loss of generality, $n_b=0$. Consider the fixed surfaces $R_a=\Psi(R_{a,1},R_{a,2},n_a),R_0=\Psi(R_{a,1},R_{a,2},0)$. Push the component of the copy of $R_{a,1}$ in $R_a$ that meets $K_1$ upwards off $R_0$, and the copy of $R_{a,2}$ downwards. Delete all other components of each surface. By the assumption that $\Psi(R_{a,1},R_{a,2},n_a)=\Psi(R_{a,1},R_{a,2},0)$, the two remaining surfaces are isotopic, so there is a product region $N$ between them. Note that $N$ meets $\partial\!\nhd(K_1\# K_2)$. By considering the boundary curves of the surfaces on $\partial\!\nhd(K_1\# K_2)$, we can restrict the possibilities for the location of $N$ relative to $R_a,R_0$. To see this, note that $N$ only meets one side of each of the orientable surfaces $R_a,R_0$. Figure \ref{verticespic1} shows the boundary patterns in the cases $n_a\in\{1,2,3,4\}$.
\begin{figure}\label{verticespic1}
\end{figure}
In general we see that $N$ must meet the complement of one of the $L_i$ in the complement of the surface $R_{a,i}$. Suppose this is $L_1$. Then $T_0\times\{1\}$ is a product disc in $N$. This means $\mathbb{S}^3\setminus\nhd(R_{a,1})$ is a product region, showing that $L_1$ is fibred, which is a contradiction. \end{proof}
\section{Mapping the edges}\label{edgessection}
\begin{lemma}\label{threearcslemma} Let $R_{a,i},R_{b,i}$ be fixed almost disjoint taut Seifert surfaces that demonstrate that their isotopy classes are adjacent in $\ms(L_i)$ for $i=1,2$. Suppose that there are arcs $\rho_j\subset T_0$ for $j=a,b$ such that $R_{j,i}\cap(T_0\times\{i\})=\rho_j\times\{i\}$ for $i=1,2$. Suppose further that $\rho_a$ and $\rho_b$ are disjoint. Let $R_j=R_{j,1}\cup R_{j,2}\cup(\rho_j\times[1,2])$ for $j=a,b$. Then $R_a,R_b$ demonstrate that their isotopy classes are adjacent in $\ms(L)$. \end{lemma} \begin{proof} The pairs $R_{a,i},R_{b,i}$ satisfy the condition given in Lemma \ref{arcintersectionlemma}. We wish to show that $R_a,R_b$ also satisfy this condition. First note that $R_a$ and $R_b$ are almost disjoint.
Choose points $x,y\in R_a\setminus R_b$, an appropriate product neighbourhood of $R_a$ and a path $\rho$ from $(x,1)$ to $(y,-1)$ that is disjoint from $R_a$ and transverse to $R_b$. By a small isotopy, $\rho$ also can be made transverse to $T_0$. If $\rho$ is disjoint from $T_0$, it has algebraic intersection 1 with $R_b$, as required.
Suppose otherwise. Let $x_0$ be the first point at which $\rho$ meets $T_0$. Find an arc $\rho_0$ from $R_a\times\{1\}$ to $R_a\times\{-1\}$ such that $\rho_0$ lies entirely in $T_0$ and passes through $x_0$. We can then use $\rho_0$ to split $\rho$ into three paths as in the condition in Lemma \ref{arcintersectionlemma}, each of which (up to isotopy) has strictly fewer points of intersection with $T_0$. The first path runs along $\rho$ as far as $x_0$, and then follows $\rho_0$ up to $R_a\times\{-1\}$. The second is $\rho_0$ traversed backwards. The third runs along $\rho_0$ to $x_0$, and then runs along the rest of $\rho$. The first two paths can be made disjoint from $T_0$, and run in opposite directions. Hence removing them does not change the algebraic intersection number with $R_b$. In this way we can remove all points of $\rho\cap T_0$. \end{proof}
\begin{corollary}\label{edgescor} Let $(R_{a,1},R_{a,2},n_a)$ and $(R_{b,1},R_{b,2},n_b)$ be in $\V(\ms(L_1))\times\V(\ms(L_2))\times\mathbb{Z}$ with $R_{b,2}\leq R_{a,2}$. Suppose these triples are distinct and one of the following conditions holds. \begin{itemize}
\item $R_{a,1}\geq R_{b,1}$ and $n_a=n_b$.
\item $R_{a,1}\leq R_{b,1}$ and $n_a=n_b-1$. \end{itemize} Then $\Psi(R_{a,1},R_{a,2},n_a)$ and $\Psi(R_{b,1},R_{b,2},n_b)$ are adjacent in $\ms(L)$. \end{corollary} \begin{proof} Viewing $\Psi(R_{a,1},R_{a,2},n_a)^*$ as fixed, we build a copy of $\Psi(R_{b,1},R_{b,2},n_b)$ satisfying the hypotheses of Lemma \ref{threearcslemma}. Without loss of generality, $n_a=0$.
Isotope $R_{b,2}^*$ as in Proposition \ref{representativesprop} to realise its adjacency with $R_{a,2}^*$. For any components of $R_{b,2}$ that do not coincide with those of $R_{a,2}^*$, perform a small isotopy near the boundary to move the boundary of $R_{b,2}$ to below that of $R_{a,2}^*$, making the components disjoint. Consider the pairs of components that meet $K_2$. If these components of $R_{a,2}^*,R_{b,2}$ coincide then push that in $R_{b,2}$ downwards off $R_{a,2}^*$. By Remark \ref{pushdownremark}, $R_{a,2}$ and $R_{b,2}$ still realise their adjacency.
Case 1: $R_{a,1}\geq R_{b,1}$ and $n_b=n_a=0$. Then $R_{b,1}\leq_{\partial}R_{a,1}$. Perform isotopies on $R_{b,1}^*$ analogous to those performed on $R_{b,2}^*$. A flat rectangle can then be inserted connecting the boundaries of the surfaces $R_{b,1},R_{b,2}$.
Case 2: $R_{a,1}\leq R_{b,1}$ and $n_b=n_a+1=1$. Then $R_{a,1}\leq_{\partial}R_{b,1}$. This time isotope $R_{b,1}^*$ upwards instead of downwards, so the boundary of $R_{b,1}$ lies above that of $R_{a,1}^*$ wherever they do not coincide. Add in a rectangle that wraps nearly once around $K_1\# K_2$, to again join up the boundaries of $R_{b,1},R_{b,2}$.
It is clear that, in either case, $\Psi(R_{b,1},R_{b,2},n_b)^*$ is isotopic to the surface $R_b$ we have constructed. We may now apply Lemma \ref{threearcslemma} to complete the proof. \end{proof}
\begin{proposition}\label{mainedgesprop} Let $(R_{a,1},R_{a,2},n_a)$ and $(R_{b,1},R_{b,2},n_b)$ be in $\V(\ms(L_1))\times\V(\ms(L_2))\times\mathbb{Z}$. Suppose $\Psi(R_{a,1},R_{a,2},n_a)$ and $\Psi(R_{b,1},R_{b,2},n_b)$ are adjacent in $\ms(L)$. Then one of the conditions in Corollary \ref{edgescor} holds. \end{proposition} \begin{proof} Without loss of generality, $n_a=0$. Fix $\Psi(R_{a,1},R_{a,2},n_a)^*$ and isotope $\Psi(R_{b,1},R_{b,2},n_b)$ to be disjoint from it, realising the adjacency in $\ms(L)$. Since $T_0\cap (M_0\setminus \Psi(R_{a,1},R_{a,2},n_a)^*)$ is a disc, by standard methods we can also ensure that this copy of $\Psi(R_{b,1},R_{b,2},n_b)$ meets $T_0$ in a single arc. As in the proof of Lemma \ref{surjectivelemma}, dividing the surface along $T_0$ gives (fixed) Seifert surfaces $R_{b,i}'$ in $\ms(L_i)$ for $i=1,2$ and there is an integer $n_b'$ such that $\Psi(R_{b,1}',R_{b,2}',n_b')$ is isotopic to $\Psi(R_{b,1},R_{b,2},n_b)$. See Figure \ref{edgespic2}. By Lemma \ref{injectivelemma} we have in particular that $R_{b,i}'$ is isotopic to $R_{b,i}$ in $M_i$ for $i=1,2$. From this we see that $R_{a,i}^*,R_{b,i}'$ demonstrate that $R_{a,i},R_{b,i}$ are at most distance 1 apart in $\ms(L_i)$. We may now assume that $R_{b,2}\leq R_{a,2}$.
It now remains to verify that $n_b$ takes the required value. Our approach is to position $\Psi(R_{b,1},R_{b,2},n_b)$ `close to' $R_{b,i}^*$ for $i=1,2$ without affecting the relative positions of $\Psi(R_{b,1},R_{b,2},n_b)$ and $\Psi(R_{a,1},R_{a,2},n_a)^*$. Having done so, we will be able to read off the value of $n_b$ from $\Psi(R_{b,1},R_{b,2},n_b)$.
For $i=1,2$, consider $R_{a,i}^*$ and $R_{b,i}^*$, which satisfy the conclusions of Proposition \ref{representativesprop}. As an isotopy of $R_{b,i}^*$ that fixes its boundary will not affect the winding number $n_b$, we may assume that no such isotopy is needed in this case. That is, we assume that $R_{a,i}^*$ and $R_{b,i}^*$ are $\partial$--almost disjoint and realise their adjacency. We further assume that components of $R_{a,i}^*$ and $R_{b,i}^*$ coincide whenever this is possible without moving the boundary of either surface.
Suppose that a component of $R_{b,i}^*$ and a component of $R_{a,i}^*$ bound a product region $N$ in $M_i$, but that any isotopy from one to the other moves the boundary of the surface. Since the boundaries of $R_{a,i}^*$ and $R_{b,i}^*$ coincide, the component of $R_{b,i}^*$ is given by taking a parallel copy of that of $R_{a,i}^*$ and moving its boundary once around $L_i$, as shown in Figure \ref{edgespic1}.
\begin{figure}\label{edgespic1}
\end{figure}
Note that $N$ lies below $R_{a,i}^*$ if $R_{a,i}^*\leq_{\partial}R_{b,i}^*$ and $N$ lies above $R_{a,i}^*$ if $R_{b,i}^*\leq_{\partial}R_{a,i}^*$. In addition, a component of $R_{b,i}'$ must be contained in the product region $N$ and be isotopic to the component of $R_{a,i}^*$. In particular, from Proposition \ref{representativesprop} we see that if $R_{a,i}\leq R_{b,i}$ then the component of $R_{b,i}'$ lies above that of $R_{b,i}^*$ and its boundary can be seen as lying above that of $R_{a,i}^*$, whereas if $R_{b,i}\leq R_{a,i}$ then the component of $R_{b,i}'$ lies below that of $R_{b,i}^*$ and its boundary can be seen as lying below that of $R_{a,i}^*$.
For $i=1,2$, temporarily ignore the components of each surface that meet $K_i$. If a component of $R_{b,i}^*$ coincides with a component of $R_{a,i}^*$, then there is a product region between it and the corresponding component of $R_{b,i}'$. Such components will play no part in the rest of the proof, so we may assume that there are none.
For each other component of $R_{b,i}^*$, there are two possibilities. If it is isotopic to a component of $R_{a,i}^*$, we have already seen how it relates to $R_{b,i}'$. Otherwise, take a copy of this component of $R_{b,i}$ that lies parallel to $R_{b,i}^*$ and is disjoint from $R_{a,i}^*$. That is, there is a product region between $R_{b,i}^*$ and this copy of $R_{b,i}$ that is contained within a product neighbourhood of $R_{b,i}^*$. Furthermore, this product region lies above $R_{b,i}^*$ if $R_{a,i}^*\leq_{\partial} R_{b,i}^*$ and lies below $R_{b,i}^*$ if $R_{b,i}^*\leq_{\partial} R_{a,i}^*$. By Lemma \ref{almostdisjointlemma}, the corresponding component of $R_{b,i}'$ is now isotopic to that of $R_{b,i}$ by an isotopy disjoint from $R_{a,i}^*$. We may therefore assume that these components of $R_{b,i}^{},R_{b,i}'$ now coincide. Hence for each component of $R_{b,i}'$ away from $K_i$ we have the following. If $R_{a,i}\leq R_{b,i}$ then the boundary of $R_{b,i}'$ lies above that of $R_{a,i}^*$. If $R_{b,i}\leq R_{a,i}$ then the boundary of $R_{b,i}'$ lies below that of $R_{a,i}^*$. Recall that we have assumed that no such components exist if $R_{a,i}=R_{b,i}$, and that $R_{b,2}\leq R_{a,2}$.
We now turn our attention to the components that meet $K_i$.
Case 1: This component of $R_{b,i}^*$ does not coincide with $R_{a,i}^*$ for $i=1,2$. Note that this means $R_{a,i}\neq R_{b,i}$, because $R_{a,i}^*$ and $R_{b,i}^*$ are not isotopic by an isotopy fixing their boundaries. In this case, the corresponding component of $R_{b,i}'$ can also be isotoped as just described. There is now an essentially unique way to join the two surfaces by a rectangle in $M_0$ that is disjoint from $\Psi(R_{a,1},R_{a,2},n_a)^*$. If $R_{b,1}\leq R_{a,1}$ then the rectangle must be flat, so $n_b=0=n_a$. If $R_{a,1}\leq R_{b,1}$ then the rectangle must twist nearly once around $\partial\!\nhd(K_1\# K_2)$ from above $R_{a,1}^*$ to below $R_{a,2}^*$ (that is, twisting in the positive direction around $K_1\# K_2$), so $n_b=1=n_a+1$. See Figure \ref{edgespic2}.
\begin{figure}\label{edgespic2}
\end{figure}
Case 2: The component of $R_{b,2}^*$ coincides with that of $R_{a,2}^*$, but those of $R_{a,1}^*,R_{b,1}^*$ do not coincide. Then $R_{a,1}\neq R_{b,1}$. Again isotope $R_{b,1}'$ as above. The component of $R_{b,2}'$ that meets $K_2$ is isotopic in $M_2$ to that of $R_{a,2}^*$, meaning there is a product region $N$ between them in $M_2$. Note that the interior of $N$ is disjoint from the three surfaces $R_{a,2}^*,R_{b,2}^*,R_{b,2}'$. Once we know which side of $R_{a,2}^*$ the product region $N$ lies (this is well-defined, by Lemma \ref{makedisjointlemma}), there is only one possibility for the rectangle joining the surfaces $R_{b,1}',R_{b,2}'$, as in Case 1.
If $R_{a,2}=R_{b,2}$ then we may further assume that $R_{b,1}\leq R_{a,1}$, and in particular that the boundary of $R_{b,1}'$ lies below that of $R_{a,1}^*$. If $N$ lies below $R_{a,2}^*$ then the rectangle is flat, so $n_b=0=n_a$. If $N$ lies above $R_{a,2}^*$ then the rectangle runs in the negative direction around $K_1\# K_2$, from below $R_{a,1}^*$ to above $R_{a,2}^*$. Hence $n_b=-1=n_a-1$. Note that this satisfies the condition in Corollary \ref{edgescor} with the two surfaces switched.
Suppose instead that $R_{a,2}\neq R_{b,2}$. If $N$ lies below $R_{a,2}^*$ then there are two possible situations. One is that $R_{b,1}\leq R_{a,1}$ and $n_b=n_a$. The other is that $R_{a,1}\leq R_{b,1}$ and $n_b=n_a+1$. It therefore remains to rule out the possibility that $N$ lies above $R_{a,2}^*$. For this we must return to the definitions of $R_{a,2}^*,R_{b,2}^*$. Originally we chose these to be $\partial$--almost disjoint, with $R_{b,2}^*\leq_{\partial} R_{a,2}^*$. As they now coincide, the components that meet $K_2$ were isotopic by an isotopy keeping their boundaries fixed. Corollary \ref{isotopyrelbdycor} therefore shows there was a product region $N'$ between these components that lay below $R_{a,2}^*$ and above $R_{b,2}^*$. However, Lemma \ref{makedisjointlemma} says that the side of $R_{a,2}^*$ on which $N$ lies is determined by the choice of surfaces $R_{a,2},R_{b,2}$. Therefore $N$ cannot lie above $R_{a,2}^*$.
Case 3: The component of $R_{b,1}^*$ coincides with that of $R_{a,1}^*$, but those of $R_{a,2}^*,R_{b,2}^*$ do not coincide. This is similar to case 2.
Case 4: Both pairs of components coincide. This case again uses the same ideas as cases 1 and 2. The only situation that is very different is when $R_{a,1}=R_{b,1}$ and $R_{a,2}=R_{b,2}$, when we must show that $n_a\neq n_b$. However this is true since $\Psi(R_{a,1},R_{a,2},n_a)\neq\Psi(R_{b,1},R_{b,2},n_b)$. \end{proof}
Figure \ref{edgespic3} is a schematic picture of the local structure of $\ms(L)$. Figure \ref{edgespic3}a focuses on the edges radiating from a single vertex. Figure \ref{edgespic3}b shows one of the smaller cubes in Figure \ref{edgespic3}a, with all edges included.
\begin{figure}\label{edgespic3}
\end{figure}
\section{Completing and interpreting the proof}\label{proofsection}
\connectsumthm* \begin{proof} Define an ordering $\leq_1$ on $\V(\ms(L_1))\times\V(\mathcal{Z})$ by \[(R_b,n_b)\leq_1 (R_a,n_a)\iff \left(R_b\leq R_a\textrm{ and }n_b\leq n_a\right).\] By Theorem \ref{productcomplextheorem},
\[{\big|(\ms(L_1)\times\mathcal{Z},\leq_1)\big|}\cong{\big|(\ms(L_1),\leq)\big|\times\big|(\mathcal{Z},\leq)\big|}.\] Now define a second ordering $\leq_2$ on $\V(\ms(L_1))\times\V(\mathcal{Z})$ by \[ \begin{split} (R_b,n_b)\leq_2 &(R_a,n_a)\iff\\ &\big((R_b\leq R_a\textrm{ and }n_b=n_a)\textrm{ or }(R_a\leq R_b\textrm{ and }n_a=n_b-1)\big). \end{split} \] Note that this corresponds to the conditions in Corollary \ref{edgescor}. It can be checked that $\leq_2$ has properties (P1), (P2)$'$, (P3).
Define $\leq_3$ on $\V(\ms(L_1)\times\mathcal{Z})\times\V(\ms(L_2))$ by \[ \begin{split} \big((R_{a,1},n_a),R_{a,2}\big)\leq_3&\big((R_{b,1},n_b),R_{b,2}\big)\iff\\ &\big((R_{a,1},n_a)\leq_2 (R_{b,1},n_b)\textrm{ and }R_{a,2}\leq R_{b,2}\big). \end{split} \] Applying Theorem \ref{productcomplextheorem} again gives that
\[{\big|\big((\ms(L_1)\!\times\mathcal{Z})\!\times\ms(L_2),\leq_3\!\!\hspace{0.05em}\big)\big|}\cong{\big|(\ms(L_1)\times\mathcal{Z},\leq_2)\big|\times\big|(\ms(L_2),\leq)\big|}.\] Thus
\[|(\ms(L_1)\!\times\mathcal{Z})\!\times\ms(L_2)|\cong|\ms(L_1)|\times|\ms(L_2)|\times\mathbb{R}.\]
By Lemmas \ref{surjectivelemma} and \ref{injectivelemma}, the map $\Psi\colon\V(\ms(L_1))\times\V(\ms(L_2))\times\V(\mathcal{Z})\to\V(\ms(L))$ defined in Definition \ref{psidefn} is a bijection. Recall that \[\V\!\big((\ms(L_1)\!\times\mathcal{Z})\!\times\ms(L_2)\big)=\V(\ms(L_1))\times\V(\ms(L_2))\times\V(\mathcal{Z}).\] From Corollary \ref{edgescor} and Proposition \ref{mainedgesprop} we see that $\Psi$ extends to an isomorphism between the 1--skeleta of the complexes $(\ms(L_1)\!\times\mathcal{Z})\!\times\ms(L_2)$ and $\ms(L)$. It remains only to note that both of these complexes are flag. For $\ms(L)$ this is the case by definition. For $(\ms(L_1)\!\times\mathcal{Z})\!\times\ms(L_2)$ it follows from the fact that the three complexes $\ms(L_1)$, $\ms(L_2)$ and $\mathcal{Z}$ are flag. \end{proof}
By examining the proof of Theorem \ref{productcomplextheorem} in \cite{MR0050886} we can give the following geometric description of the extension of $\Psi$ to $(\ms(L_1)\!\times\mathcal{Z})\!\times\ms(L_2)$.
\begin{remark}
Let $x\in|\ms(L_1)|\times|\ms(L_2)|\times\mathbb{R}$. Without loss of generality, $\pi_{\mathcal{Z}}(x)\in[0,1)$. Let $\pi_{\ms(L_1)}(x)=a_0 A_0 +\cdots+a_m A_m$ where $A_i\in\V(\ms(L_1))$ and $a_i>0$ for $0\leq i\leq m$, with $\sum_{i=0}^m{a_i}=1$ and $A_0\leq A_1\leq\cdots\leq A_m$. Similarly let $\pi_{\ms(L_2)}(x)=b_0 B_0 +\cdots+b_n B_n$.
Consider the surfaces $A_0,\ldots,A_m$. As in Lemma \ref{maketransverselemma}, they can be positioned in $M_1$ so they are pairwise almost disjoint with simplified intersection. By Proposition \ref{realisedistanceprop}, they then realise their adjacencies. As in the proof of Lemma \ref{makedisjointlemma}, the surfaces can be made disjoint while still realising their adjacencies, and it can be shown that the boundaries of the surfaces occur in order around $\partial M_1$. For $i=0,\ldots,m$, thicken the Seifert surface $A_i$ to a product region $A_i\times [0,a_i]$, and view this as a `continuum of surfaces'.
Do the same for the Seifert surfaces $B_0,\ldots,B_n$ in $M_2$. Glue the thickened surfaces to give thickened Seifert surfaces for $L$ in $M$. In doing so, instead of aligning $A_0\times\{0\}$ with $B_0\times\{0\}$, introduce a shift of length $\pi_{\mathcal{Z}}(x)$. This creates a finite set of vertices of $\ms(L)$, each with a weight given by its thickness. Applying Lemma \ref{threearcslemma} shows that these vertices span a simplex. \end{remark}
\section{Incompressible surfaces}\label{incompressiblesection}
In addition to $\ms(L)$, Kakimizu defined a larger complex $\is(L)$, which records all incompressible Seifert surfaces for $L$ rather than just taut ones.
\begin{definition}[see \cite{MR1177053}, \cite{przytycki-2010}] Let $L$ be a link, and let $M=\mathbb{S}^3\setminus\nhd(L)$. Define the complex $\is(L)$ of $L$ to be the following flag simplicial complex. Its vertices are ambient isotopy classes of incompressible Seifert surfaces for $L$. Two vertices span an edge if they have representatives $R,R'$ such that a lift of $M\setminus R'$ to the infinite cyclic cover of $M$ intersects exactly two lifts of $M\setminus R$. \end{definition}
Note that $\ms(L)$ is a subcomplex of $\is(L)$. Proposition \ref{msdistanceprop} holds for $\is(L)$ as well as for $\ms(L)$, and so do Propositions \ref{disjointinteriorsprop} and \ref{realisedistanceprop}, and Lemma \ref{maketransverselemma}. The same is true of Lemmas \ref{orderinglemma1}, \ref{orderdistancelemma} and \ref{orderinglemma2}.
Let $R$ be an incompressible Seifert surface for $L$. Isotope $R$ to have minimal intersection with $T_0$. Then $R\cap T_0$ is a single arc. Splitting $R$ along $T_0$ gives incompressible Seifert surfaces $R_1,R_2$ for $L_1,L_2$ respectively.
Now consider the converse situation. That is, take incompressible Seifert surfaces $R_1,R_2$ for $L_1,L_2$ respectively, and join them along an arc in $T_0$ to form a Seifert surface $R$ for $L$.
\begin{lemma} $R$ is incompressible. \end{lemma} \begin{proof}
Suppose otherwise. Choose a compressing disc $S$ for $R$ that minimises its intersection with $T_0$ over all compressing discs for $R$. Then $S\cap T_0$ does not include any simple closed curves. In addition, $S$ is not disjoint from $T_0$, as otherwise it would be a compressing disc for either $R_1$ or $R_2$. Let $\rho$ be an arc of $S\cap T_0$ that is outermost in the disc $T_0\cap (M_0\setminus R)$. Then $\rho$ cuts off a subdisc $S_T$ of $T_0\cap (M_0\setminus R)$ that is disjoint on its interior from $S$. It also divides $S$ into two discs $S_1$ and $S_2$. Since $|S\cap T_0|$ cannot be reduced, each of the discs $S_1\cup S_T$ and $S_2\cup S_T$ is a compressing disc for $R$. Furthermore, each can be isotoped to have a smaller intersection with $T_0$ than $S$ does, which is a contradiction. \end{proof}
Now replacing taut Seifert surfaces with incompressible Seifert surfaces in the proof of Theorem \ref{connectsumthm} gives the following.
\begin{theorem} Let $L_1,L_2$ be non-split, non-fibred links in $\mathbb{S}^3$,
and let $L=L_1\#L_2$. Then $|\is(L)|$ is homeomorphic to $|\is(L_1)|\times|\is(L_2)|\times\mathbb{R}$. \end{theorem}
\noindent Mathematical Institute
\noindent University of Oxford
\noindent 24--29 St Giles'
\noindent Oxford OX1 3LB
\noindent England
\noindent \textit{jessica.banks[at]lmh.oxon.org}
\end{document}
Special right triangle
A special right triangle is a right triangle with some regular feature that makes calculations on the triangle easier, or for which simple formulas exist. For example, a right triangle may have angles that form simple relationships, such as 45°–45°–90°. This is called an "angle-based" right triangle. A "side-based" right triangle is one in which the lengths of the sides form ratios of whole numbers, such as 3 : 4 : 5, or of other special numbers such as the golden ratio. Knowing the relationships of the angles or ratios of sides of these special right triangles allows one to quickly calculate various lengths in geometric problems without resorting to more advanced methods.
Angle-based
Angle-based special right triangles are specified by the relationships of the angles of which the triangle is composed. The angles of these triangles are such that the larger (right) angle, which is 90 degrees or π/2 radians, is equal to the sum of the other two angles.
The side lengths are generally deduced from the basis of the unit circle or other geometric methods. This approach may be used to rapidly reproduce the values of trigonometric functions for the angles 30°, 45°, and 60°.
Special triangles are used to aid in calculating common trigonometric functions, as below:
| degrees | radians | gons | turns | sin | cos | tan | cotan |
|---------|---------|------|-------|-----|-----|-----|-------|
| 0° | 0 | 0g | 0 | √0/2 = 0 | √4/2 = 1 | 0 | undefined |
| 30° | π/6 | 33+1/3g | 1/12 | √1/2 = 1/2 | √3/2 | 1/√3 | √3 |
| 45° | π/4 | 50g | 1/8 | √2/2 = 1/√2 | √2/2 = 1/√2 | 1 | 1 |
| 60° | π/3 | 66+2/3g | 1/6 | √3/2 | √1/2 = 1/2 | √3 | 1/√3 |
| 90° | π/2 | 100g | 1/4 | √4/2 = 1 | √0/2 = 0 | undefined | 0 |
The 45°–45°–90° triangle, the 30°–60°–90° triangle, and the equilateral/equiangular (60°–60°–60°) triangle are the three Möbius triangles in the plane, meaning that they tessellate the plane via reflections in their sides; see Triangle group.
45°–45°–90° triangle
In plane geometry, constructing the diagonal of a square results in a triangle whose three angles are in the ratio 1 : 1 : 2, adding up to 180° or π radians. Hence, the angles respectively measure 45° (π/4), 45° (π/4), and 90° (π/2). The sides in this triangle are in the ratio 1 : 1 : √2, which follows immediately from the Pythagorean theorem.
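As a quick check of this ratio, take both legs to have length 1; the Pythagorean theorem then gives the hypotenuse directly:

$c = \sqrt{1^2 + 1^2} = \sqrt{2}.$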
Of all right triangles, the 45°–45°–90° triangle has the smallest ratio of the hypotenuse to the sum of the legs, namely √2/2,[1]: p.282, p.358 and the greatest ratio of the altitude from the hypotenuse to the sum of the legs, namely √2/4.[1]: p.282
Triangles with these angles are the only possible right triangles that are also isosceles triangles in Euclidean geometry. However, in spherical geometry and hyperbolic geometry, there are infinitely many different shapes of right isosceles triangles.
30°–60°–90° triangle
This is a triangle whose three angles are in the ratio 1 : 2 : 3 and respectively measure 30° (π/6), 60° (π/3), and 90° (π/2). The sides are in the ratio 1 : √3 : 2.
The proof of this fact is clear using trigonometry. The geometric proof is:
Draw an equilateral triangle ABC with side length 2 and with point D as the midpoint of segment BC. Draw an altitude line from A to D. Then ABD is a 30°–60°–90° triangle with hypotenuse of length 2, and base BD of length 1.
The fact that the remaining leg AD has length √3 follows immediately from the Pythagorean theorem.
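In symbols, this last step is the computation

$AD = \sqrt{AB^2 - BD^2} = \sqrt{2^2 - 1^2} = \sqrt{3}.$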
The 30°–60°–90° triangle is the only right triangle whose angles are in an arithmetic progression. The proof of this fact is simple and follows from the fact that if α, α + δ, α + 2δ are the angles in the progression then the sum of the angles is 3α + 3δ = 180°. After dividing by 3, the angle α + δ must be 60°. The right angle is 90°, leaving the remaining angle to be 30°.
Side-based
Right triangles whose sides are of integer lengths, with the sides collectively known as Pythagorean triples, possess angles that cannot all be rational numbers of degrees.[2] (This follows from Niven's theorem.) They are most useful in that they may be easily remembered and any multiple of the sides produces the same relationship. Using Euclid's formula for generating Pythagorean triples, the sides must be in the ratio
m² − n² : 2mn : m² + n²
where m and n are any positive integers such that m > n.
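For illustration, a short Python sketch (added here, not part of the original article) that enumerates primitive triples via Euclid's formula; the gcd and opposite-parity conditions on m and n are the standard extra requirements for the generated triple to be in lowest form:

```python
from math import gcd

def primitive_triples(limit=256):
    """Primitive Pythagorean triples (a, b, c) with both legs < limit,
    generated by Euclid's formula: m^2 - n^2, 2mn, m^2 + n^2 with m > n,
    gcd(m, n) = 1, and m, n of opposite parity."""
    out = []
    for m in range(2, limit):
        for n in range(1, m):
            if gcd(m, n) == 1 and (m - n) % 2 == 1:
                a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
                if a < limit and b < limit:
                    out.append(tuple(sorted((a, b))) + (c,))
    return sorted(out)

print(primitive_triples()[:5])
# [(3, 4, 5), (5, 12, 13), (7, 24, 25), (8, 15, 17), (9, 40, 41)]
```

With the default limit of 256, the full output reproduces the triples in the two lists below.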
Common Pythagorean triples
Main article: Pythagorean triple
There are several Pythagorean triples which are well-known, including those with sides in the ratios:
3:4:5
5:12:13
8:15:17
7:24:25
9:40:41
The 3 : 4 : 5 triangles are the only right triangles with edges in arithmetic progression. Triangles based on Pythagorean triples are Heronian, meaning they have integer area as well as integer sides.
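To see why, write the sides of such a triangle as a − d, a, a + d (a worked step added here for clarity): the Pythagorean relation (a − d)² + a² = (a + d)² expands to a² = 4ad, so a = 4d and the sides are 3d : 4d : 5d.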
The possible use of the 3 : 4 : 5 triangle in Ancient Egypt, with the supposed use of a knotted rope to lay out such a triangle, and the question whether Pythagoras' theorem was known at that time, have been much debated.[3] It was first conjectured by the historian Moritz Cantor in 1882.[3] It is known that right angles were laid out accurately in Ancient Egypt; that their surveyors did use ropes for measurement;[3] that Plutarch recorded in Isis and Osiris (around 100 AD) that the Egyptians admired the 3 : 4 : 5 triangle;[3] and that the Berlin Papyrus 6619 from the Middle Kingdom of Egypt (before 1700 BC) stated that "the area of a square of 100 is equal to that of two smaller squares. The side of one is ½ + ¼ the side of the other."[4] The historian of mathematics Roger L. Cooke observes that "It is hard to imagine anyone being interested in such conditions without knowing the Pythagorean theorem."[3] Against this, Cooke notes that no Egyptian text before 300 BC actually mentions the use of the theorem to find the length of a triangle's sides, and that there are simpler ways to construct a right angle. Cooke concludes that Cantor's conjecture remains uncertain: he guesses that the Ancient Egyptians probably did know the Pythagorean theorem, but that "there is no evidence that they used it to construct right angles".[3]
The following are all the Pythagorean triple ratios expressed in lowest form (beyond the five smallest ones in lowest form in the list above) with both non-hypotenuse sides less than 256:
11:60:61
12:35:37
13:84:85
15:112:113
16:63:65
17:144:145
19:180:181
20:21:29
20:99:101
21:220:221
24:143:145
28:45:53
28:195:197
32:255:257
33:56:65
36:77:85
39:80:89
44:117:125
48:55:73
51:140:149
52:165:173
57:176:185
60:91:109
60:221:229
65:72:97
84:187:205
85:132:157
88:105:137
95:168:193
96:247:265
104:153:185
105:208:233
115:252:277
119:120:169
120:209:241
133:156:205
140:171:221
160:231:281
161:240:289
204:253:325
207:224:305
Almost-isosceles Pythagorean triples
Isosceles right-angled triangles cannot have sides with integer values, because the ratio of the hypotenuse to either other side is √2 and √2 cannot be expressed as a ratio of two integers. However, infinitely many almost-isosceles right triangles do exist. These are right-angled triangles with integer sides for which the lengths of the non-hypotenuse edges differ by one.[5][6] Such almost-isosceles right-angled triangles can be obtained recursively,
$a_0 = 1$, $b_0 = 2$
$a_n = 2b_{n-1} + a_{n-1}$
$b_n = 2a_n + b_{n-1}$
where $a_n$ is the length of the hypotenuse, n = 1, 2, 3, .... Equivalently,
$({\tfrac {x-1}{2}})^{2}+({\tfrac {x+1}{2}})^{2}=y^{2}$
where {x, y} are solutions to the Pell equation x² − 2y² = −1, with the hypotenuse y being the odd terms of the Pell numbers 1, 2, 5, 12, 29, 70, 169, 408, 985, 2378, ... (sequence A000129 in the OEIS). The smallest Pythagorean triples resulting are:[7]
3 : 4 : 5
20 : 21 : 29
119 : 120 : 169
696 : 697 : 985
4,059 : 4,060 : 5,741
23,660 : 23,661 : 33,461
137,903 : 137,904 : 195,025
803,760 : 803,761 : 1,136,689
4,684,659 : 4,684,660 : 6,625,109
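A minimal Python sketch (an illustration added here, not from the original article) that reproduces this list directly from the Pell-equation formulation above; the step (u, z) → (3u + 4z, 2u + 3z) is the standard map between consecutive solutions of u² − 2z² = −1:

```python
def almost_isosceles_triples(count):
    """Yield almost-isosceles Pythagorean triples (x, x+1, z), where
    u^2 - 2*z^2 = -1 and the legs are (u - 1)/2 and (u + 1)/2."""
    u, z = 1, 1  # the trivial Pell solution; the first step yields (3, 4, 5)
    for _ in range(count):
        u, z = 3 * u + 4 * z, 2 * u + 3 * z
        yield (u - 1) // 2, (u + 1) // 2, z

for triple in almost_isosceles_triples(4):
    print(triple)  # (3, 4, 5), (20, 21, 29), (119, 120, 169), (696, 697, 985)
```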
Alternatively, the same triangles can be derived from the square triangular numbers.[8]
Arithmetic and geometric progressions
Main article: Kepler triangle
The Kepler triangle is a right triangle whose sides are in geometric progression. If the sides are formed from the geometric progression a, ar, ar², then its common ratio r is given by r = √φ where φ is the golden ratio. Its sides are therefore in the ratio 1 : √φ : φ. Thus, the shape of the Kepler triangle is uniquely determined (up to a scale factor) by the requirement that its sides be in geometric progression.
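Indeed, since the golden ratio satisfies φ² = φ + 1, the sides 1, √φ, φ obey 1² + (√φ)² = 1 + φ = φ², so these lengths do form a right triangle (a one-line check added here for clarity).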
The 3–4–5 triangle is the unique right triangle (up to scaling) whose sides are in arithmetic progression.[9]
Sides of regular polygons
Let
$a=2\sin {\frac {\pi }{10}}={\frac {-1+{\sqrt {5}}}{2}}={\frac {1}{\varphi }}\approx 0.618$
be the side length of a regular decagon inscribed in the unit circle, where $\varphi $ is the golden ratio. Let
$b=2\sin {\frac {\pi }{6}}=1$
be the side length of a regular hexagon in the unit circle, and let
$c=2\sin {\frac {\pi }{5}}={\sqrt {\frac {5-{\sqrt {5}}}{2}}}\approx 1.176$
be the side length of a regular pentagon in the unit circle. Then $a^{2}+b^{2}=c^{2}$, so these three lengths form the sides of a right triangle.[10] The same triangle forms half of a golden rectangle. It may also be found within a regular icosahedron of side length $c$: the shortest line segment from any vertex $V$ to the plane of its five neighbors has length $a$, and the endpoints of this line segment together with any of the neighbors of $V$ form the vertices of a right triangle with sides $a$, $b$, and $c$.[11]
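A quick numerical check of this identity (a sketch added here, not part of the original article):

```python
import math

phi = (1 + math.sqrt(5)) / 2
a = 2 * math.sin(math.pi / 10)   # regular decagon side = 1/phi ~ 0.618
b = 2 * math.sin(math.pi / 6)    # regular hexagon side = 1
c = 2 * math.sin(math.pi / 5)    # regular pentagon side ~ 1.176
assert math.isclose(a, 1 / phi)
assert math.isclose(a * a + b * b, c * c)
print("decagon, hexagon and pentagon sides form a right triangle")
```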
See also
• Integer triangle
• Spiral of Theodorus
References
1. Posamentier, Alfred S., and Lehman, Ingmar. The Secrets of Triangles. Prometheus Books, 2012.
2. Weisstein, Eric W. "Rational Triangle". MathWorld.
3. Cooke, Roger L. (2011). The History of Mathematics: A Brief Course (2nd ed.). John Wiley & Sons. pp. 237–238. ISBN 978-1-118-03024-0.
4. Gillings, Richard J. (1982). Mathematics in the Time of the Pharaohs. Dover. p. 161.
5. Forget, T. W.; Larkin, T. A. (1968), "Pythagorean triads of the form x, x + 1, z described by recurrence sequences" (PDF), Fibonacci Quarterly, 6 (3): 94–104.
6. Chen, C. C.; Peng, T. A. (1995), "Almost-isosceles right-angled triangles" (PDF), The Australasian Journal of Combinatorics, 11: 263–267, MR 1327342.
7. (sequence A001652 in the OEIS)
8. Nyblom, M. A. (1998), "A note on the set of almost-isosceles right-angled triangles" (PDF), The Fibonacci Quarterly, 36 (4): 319–322, MR 1640364.
9. Beauregard, Raymond A.; Suryanarayan, E. R. (1997), "Arithmetic triangles", Mathematics Magazine, 70 (2): 105–115, doi:10.2307/2691431, JSTOR 2691431, MR 1448883.
10. Euclid's Elements, Book XIII, Proposition 10.
11. nLab: pentagon decagon hexagon identity.
External links
• 3 : 4 : 5 triangle
• 30–60–90 triangle
• 45–45–90 triangle – with interactive animations
| Wikipedia |
Cancer dormancy and criticality from a game theory perspective
Amy Wu1,
David Liao2,
Vlamimir Kirilin3,
Ke-Chih Lin4,
Gonzalo Torga5,
Junle Qu6,
Liyu Liu7,
James C. Sturm4,
Kenneth Pienta5 &
Robert Austin ORCID: orcid.org/0000-0002-4269-6793
Cancer Convergence volume 2, Article number: 1 (2018)
The physics of cancer dormancy, the time between initial cancer treatment and re-emergence after a protracted period, is a puzzle. Cancer cells interact with host cells via complex, non-linear population dynamics, which can lead to very non-intuitive but perhaps deterministic and understandable progression dynamics of cancer and dormancy.
We explore here the dynamics of host-cancer cell populations in the presence of (1) payoffs gradients and (2) perturbations due to cell migration.
We determine to what extent the time-dependence of the populations can be quantitatively understood in spite of the underlying complexity of the individual agents, and model the phenomena of dormancy.
Dormancy is the relatively long period between treatment for cancer and the progression (return) and spreading of the cancer. After initial surgery and/or chemotherapy, the cancer apparently ceases to grow and is said to be in remission, or in dormancy if the period is substantially longer than typical progression times for that cancer and treatment. Unfortunately, the cancer that emerges after this dormant period is often resistant to the initial therapy that was used. We do not address the emergence of resistance here but rather the dynamics of dormancy and progression, although the emergence of resistance is a critical part of cancer progression (Han et al. 2016).
The main focus of this work in connecting cancer emergence and dormancy is the proposed phenomenon of criticality in interacting cancer cell dynamics. Criticality has been used to describe many slowly driven, interaction-dominated, threshold dynamical systems (Jensen 1998) including evolution (Raup 1994) and morphogenesis (Krotov et al. 2014). Near the threshold of criticality, strong amplification of fluctuations emerges in response to external perturbations (Sornette 2000). In a finite system exhibiting noncritical behavior, the distribution of the system's response to external perturbation can be characterized by its mean and variance. However, in critical systems, probability distributions of response follow power-law decays, P(s) ∼ s^−b. If the distribution has "thick tails", that is, power-law exponents b < 3, then the variance (and, for b ≤ 2, even the mean) does not exist. In that case, an external perturbation can lead to a response of any size (Yang 2004). We propose that dormancy and recurrence is a criticality problem, and use a game theoretical approach to analytically describe the phenomena.
Simulations of game theory population dynamics were run on a Mac Pro with a 3.7 GHz quad-core Intel Xeon E5 processor. The coding was done in MATLAB 2016b.
Results: population dynamics in interacting Cancer/Host cell populations
In order to characterize mixed population dynamics some sort of simple model is necessary; we have chosen game theory (Axelrod et al. 2006). Although game theory may ignore many critical details (Adami et al. 2016), it is a beginning step towards addressing criticality in cancer. A simple evolutionary game model which includes the influence of different cell types on each other involves coupled ordinary non-linear differential equations (Maynard Smith 1982; Durrett and Levin 1994). First, we assume that we can break a heterogeneous tumor up into N small subpopulations, each subpopulation j being locally homogeneous in 2 different cell types. The local population of cancer cells (γ_j) and stromal cells (η_j) within the jth subpopulation can be described by the ordinary non-linear differential equations:
$$ \frac{d \gamma_{j}}{dt}=(A_{j}p_{\gamma j}+B_{j}p_{\eta j})\gamma_{j} $$
$$ \frac{d \eta_{j}}{dt}=(C_{j}p_{\gamma j}+D_{j}p_{\eta j})\eta_{j} $$
where \(p_{\gamma j} = \frac {\gamma _{j}}{\gamma _{j} + \eta _{j}}\) and \(p_{\eta j} = \frac {\eta _{j}}{\eta _{j} + \gamma _{j}}\). The payoff coefficients \(A_j, B_j, C_j\) and \(D_j\) have very transparent physical interpretations: they represent the result of pairwise interactions between cells in lattice j. Since \(p_{\gamma j} + p_{\eta j} = 1\), the dynamics of the cancer cell fractional population \(p_{\gamma j}\) can be written as:
$$ \frac{dp_{\gamma j}}{dt} = p_{\gamma j}(1 - p_{\gamma j}) \left [ (A_{j}-C_{j}) p_{\gamma j} + (B_{j}-D_{j}) (1 - p_{\gamma j}) \right ] $$
There are two obvious fixed points in the flow of the fraction of γ cells versus time, \(p^{*}_{\gamma } = 1\) and \(p^{*}_{\gamma } = 0\); these two fixed points simply represent an initially pure γ or η population, which cannot change in composition. However, in general there are four more principal end points for the progression of the tumor. Two of them are straightforward: (1) If (A_j − C_j) < 0 and (B_j − D_j) < 0, host cells η win over cancer cells γ (this is called prisoner's dilemma in Game Theory jargon); in our case the cancer cells are out-competed by the host cells, perhaps by immunosurveillance or impaired vascularization amongst other reasons. (2) If (A_j − C_j) > 0 and (B_j − D_j) > 0, cancer cells γ win over host cells η (this is called harmony in Game Theory jargon, but alas here the "harmony" means that cancer cells out-compete the host cells and then recurrence emerges). In both the prisoner's dilemma and harmony outcomes, at infinite time only one cell type remains.
There are two other fixed points with non-zero numbers of both γ cells and η cells which give rise to stationary values. The fraction of cancer cells γ at this fixed point is:
$$ \tilde{p}^{*}_{\gamma j}=\frac{1}{1-\frac{A_{j}-C_{j}}{B_{j}-D_{j}}} $$
However, if (A_j − C_j) > 0 and (B_j − D_j) < 0, this is an unstable fixed point and sensitive to perturbations. Since this point is unstable there is no residence time of the system at this point (this is known as a stag hunt in Game Theory jargon). If (A_j − C_j) < 0 and (B_j − D_j) > 0, the fixed point is stable. This case is called the hawk-dove game in Game Theory jargon; it is the only one allowing for stable coexistence of the two populations. In terms of cancer population dynamics one would like to have coefficients such that optimally (A_j − C_j) < 0 and (B_j − D_j) < 0, or at least (A_j − C_j) < 0 and (B_j − D_j) > 0, so that the γ/η ratio does not diverge. Figure 1 presents graphically these population stability landscapes as a function of the pay-off matrix values.
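A minimal Python sketch of the dynamics of Eq. (3), added here for illustration; the authors' simulations were in MATLAB, and this reimplementation is ours, using the position-1 payoff values of Fig. 2 and the interior fixed point of Eq. (4):

```python
import numpy as np

def replicator(p0, A, B, C, D, dt=0.01, steps=4000):
    """Euler integration of Eq. (3): dp/dt = p(1-p)[(A-C)p + (B-D)(1-p)]."""
    p = np.empty(steps + 1)
    p[0] = p0
    for t in range(steps):
        p[t + 1] = p[t] + dt * p[t] * (1 - p[t]) * (
            (A - C) * p[t] + (B - D) * (1 - p[t]))
    return p

# Hawk-dove regime (A - C < 0, B - D > 0): stable interior fixed point (Eq. 4)
A, B, C, D = -0.1, 0.4, 0.1, -0.3
p = replicator(0.02, A, B, C, D)
p_star = 1 / (1 - (A - C) / (B - D))
print(p[-1], p_star)  # both ~0.778: the trajectory settles at the fixed point
```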
A 3D stability plot. The vertical axis encodes the initial fraction of γ cells. The two planar axes are the A−C and B−D parameters. The planar part shows the division into four quadrants which give rise to different scenarios for the fixed points. The blue surface represents the surfaces of unstable fixed points, and the brown surface represents the surfaces of stable fixed points
It is a reasonable assumption that payoffs at neighboring subpopulations (j vs. j+1) change incrementally. Experimental evidence of payoff gradients has been demonstrated in a co-culture system of multiple myeloma and stromal cells within a linear drug gradient landscape (Wu et al. 2014). Here we will discuss two game transition scenarios across the landscapes of payoffs: (1) from cancer wins to stable coexistence, (2) from host wins through unstable bifurcation to cancer wins. First, as shown in Fig. 2a, the payoffs A,B,C,D are equal to 0.3,0.2,−0.3,−0.2 at position 0, and the payoffs change linearly to −0.1,0.4,0.1,−0.3 at position 1. Based on the payoff coefficients at each position j, we can calculate which quadrant (the type of game) in Fig. 1 represents the lattice j. The phase plane of cancer cell density vs. host cell density in Fig. 2b and the dynamics of the cancer fraction in Fig. 2c show that cancer cells win at positions 0 and 0.5 and coexist with host cells at position 1, independently of initial population densities. Secondly, in Fig. 3, the payoffs A,B,C,D are equal to 0.1,−0.3,0.4,0.1 at position 0, and the payoffs change linearly to 0.3,0.2,−0.3,−0.1 at position 1. Since we sweep through the unstable bifurcation zone in this case, whether cancer will win becomes sensitive to the initial population fraction, as shown by the black lines in Fig. 3b and c.
Payoffs: cancer wins (CW) to stable coexistence (SC). Blue: cancer wins (case 1), light blue: cancer wins (case 2), orange: stable coexistence. a Payoffs vs. position. b The phase plane of cancer cell density vs. host cell density. The arrows indicate the fitness at given populations and payoffs. c Cancer fraction vs. time. Solid line: initial cancer fraction is 0.02. Dotted line: initial cancer fraction is 0.85
Unstable bifurcation (UB) to cancer wins (CW). Red: host wins, black: unstable bifurcation, blue: cancer wins. a Payoffs vs. position. b The phase plane of cancer cell density vs. host cell density. The arrows indicate the fitness at given populations and payoffs. c Cancer fraction vs. time. Solid line: initial cancer fraction is 0.02. Dotted line: initial cancer fraction is 0.85
Integration of Eq. (3) yields the equilibration time τ it takes for these fixed points to be approached:
$$ \tau = \int^{p_{fin}}_{p_{in}} \frac{dp}{p(1-p)\left[(A-C)p + (B-D)(1-p)\right]} $$
Near the stable and unstable fixed points τ diverges, the system slows down, and criticality may occur. In the case of the unstable fixed point (the stag hunt), we can identify the dormancy period as the time spent in the vicinity of the unstable fixed point, and the recurrence of the cancer as the population moving away from the unstable fixed point. On the other hand, if the system has matrix elements such that we are in a hawk-dove quadrant, the cancer, while not "cured" (which is the prisoner's dilemma end-point), has its cancer cells in a stable equilibrium: chronically present but not life threatening, a region of permanent dormancy.
In the basic model presented above, we assumed each lattice site j of the tumor is a closed and homogeneous region; no exchange of cells is involved. To gain more physiological relevance, we introduce cancer cell migration between lattice sites as a perturbation to the system. Such a perturbation could also take the form of temporally varying payoffs, which are not discussed in this work.
At each time point, we assume cancer cells migrate with probabilities m+ (to the right neighboring lattice site) and m− (to the left neighboring lattice site), and that the migration of host cells is negligible. The equation for the cancer cell density γ becomes:
$$ \gamma_{j}(t+1)=\gamma_{j}(t)+dt\left[A_{j}p_{\gamma j}(t)+B_{j}p_{\eta j}(t)\right]\gamma_{j}(t)+M_{j}(t) $$
where the migration term is:
$$ M_{j}(t)=-\left[m^{-}_{j}(t)+m^{+}_{j}(t)\right]\gamma_{j}(t)+\left[m^{-}_{j+1}(t)\gamma_{j+1}(t)+m^{+}_{j-1}(t)\gamma_{j-1}(t)\right] $$
We assume here weak migration: that is, we assume m+ and m− are normally distributed with mean 0 and standard deviation 0.03. That means 99.7% of simulated migration rates (the percentage of cells migrating to neighboring lattice sites) are less than 9%. The effect of migration on the spatio-temporal dynamics of cancer is shown in Figs. 4 and 5.
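For concreteness, a minimal Python sketch of Eqs. (6)–(7) on a one-dimensional lattice with linearly interpolated payoffs; the migration statistics follow the text, but the implementation details (lattice size, time step, clipping sampled rates to nonnegative values) are our own choices, not the authors':

```python
import numpy as np

rng = np.random.default_rng(0)

def migration(g, sigma=0.03):
    """Migration term of Eq. (7); rates drawn from N(0, sigma) and
    clipped to nonnegative values so densities stay positive."""
    m_minus = np.clip(rng.normal(0.0, sigma, g.size), 0.0, None)
    m_plus = np.clip(rng.normal(0.0, sigma, g.size), 0.0, None)
    M = -(m_minus + m_plus) * g
    M[:-1] += m_minus[1:] * g[1:]   # inflow from the right neighbor
    M[1:] += m_plus[:-1] * g[:-1]   # inflow from the left neighbor
    return M

def simulate(pay0, pay1, p0=0.02, n=31, dt=0.1, steps=400):
    """Eq. (6) on a lattice; payoffs interpolated from pay0 (x=0) to pay1 (x=1)."""
    x = np.linspace(0.0, 1.0, n)[:, None]
    A, B, C, D = ((1 - x) * np.array(pay0) + x * np.array(pay1)).T
    g, e = np.full(n, p0), np.full(n, 1.0 - p0)
    for _ in range(steps):
        pg = g / (g + e)
        g, e = (g + dt * (A * pg + B * (1 - pg)) * g + migration(g),
                e + dt * (C * pg + D * (1 - pg)) * e)
    return g / (g + e)  # final cancer fraction along the lattice

# payoff gradient of Fig. 2: cancer wins at x=0, stable coexistence at x=1
print(simulate((0.3, 0.2, -0.3, -0.2), (-0.1, 0.4, 0.1, -0.3)).round(2))
```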
Cancer fraction vs. space and time: a and c no migration, b and d with migration. Initial cancer fraction: 0.02. a and b Transitions from "host wins" to "unstable bifurcation" to "cancer wins." c and d Transitions from "cancer wins" to "stable coexistence"
Cancer fraction vs. space and time in the vicinity of the unstable fixed point (unstable bifurcation): a and c no migration, b and d with migration. a and b Initial cancer fraction: 0.51. c and d Initial cancer fraction: 0.49
In Fig. 4a and b, the payoffs A,B,C,D are equal to 0.22,−0.1,−0.22,0.06, respectively, at x=0.6 (where host wins, cancer fraction p→0), and the payoffs change linearly to 0.28,0.15,−0.23,−0.06 at x=0.9 (where cancer wins, cancer fraction p→1). As the position approaches the vicinity of the bifurcation regime (x=0.65 in Fig. 4a), the equilibration time to reach the stationary state (τ in Eq. 5) increases. For example, τ=15 at x=0.9 as the cancer fraction reaches the stationary state p∗=1, while τ=30 at x=0.7. The migration of cancer cells is simulated in Fig. 4b, resulting in a noisy pattern which is similar to Fig. 4a. In Fig. 4c and d, the payoffs A,B,C,D are equal to 0.08,0.32,−0.06,−0.22, respectively, at x=0.6 (where cancer wins, cancer fraction p→1), and the payoffs change linearly to −0.08,0.38,0.06,−0.28 at x=0.9 (where host and cancer stably coexist, cancer fraction p→0.825). Likewise, the same perturbation due to cancer cell migration, shown in Fig. 4d, results in a noisy pattern similar to Fig. 4c.
Figure 5 demonstrates the effect of migration on the spatio-temporal dynamics of cancer in a critical state near the vicinity of the unstable equilibrium (p≈0.5). The payoffs A,B,C,D are equal to 0.14,−0.11,−0.01,0.04 at x=0.6, and the payoffs change linearly to 0.26,0.01,0.11,0.16 at x=0.9. First of all, the system slows down near the equilibrium. After 40 generations, the cancer fraction changes only slightly, from 0.51 to 0.67 in Fig. 5a and from 0.49 to 0.33 in Fig. 5c. After perturbation is introduced in Fig. 5b and d, we observe the amplification of fluctuations near the equilibrium. Within 40 generations, neighboring lattice sites can be dominated by either cancer (p→1) or host cells (p→0), exhibiting the critical behavior: a response of any size to perturbation.
Cancer dormancy is a slowly driven, interaction-dominated threshold system. The frequency of breast cancer recurrence indicates that while non-metastatic recurrence follows an exponential decay, metastatic recurrence may reflect a critical system that follows a power law. We modeled cancer dormancy inspired by evolutionary game theory, and found that the payoffs modulated by microenvironmental factors (such as drug, oxygen, nutrients) dictate the dynamics of cancer cells vs. host cells (including stromal and immune cells). Perturbation (due to cancer cell migration) in the vicinity of equilibrium is associated with the loss of global stability and may lead to recurrence of metastatic cancer.
Much work remains to be done to map the landscape of the interaction coefficients and to classify stable versus unstable regimes; here we provide a first step towards identifying the dynamical signatures that could be used for prediction of emergence from dormancy.
We hope that this work will inspire more measurements, improve the predictive power for cancer recurrence, and assist the control of cancer progression.
Adami, C, Schossau J, Hintze A. Evolutionary game theory using agent-based methods. Phys Life Rev. 2016; 19:1–26.
Axelrod, R, Axelrod DE, Pienta KJ. Evolution of cooperation among tumor cells. Proc Natl Acad Sci U S A. 2006; 103(36):13474–13479.
Durrett, R, Levin S. The Importance of Being Discrete (and Spatial). Theor Popul Biol. 1994; 46(3):363–94. https://doi.org/10.1006/tpbi.1994.1032. Accessed 25 Feb 2016.
Han, J, Jun Y, Kim SH, Hoang HH, Jung Y, Kim S, Kim J, Austin RH, Lee S, Park S. Rapid emergence and mechanisms of resistance by u87 glioblastoma cells to doxorubicin in an in vitro tumor microfluidic ecology. Proc Natl Acad Sci U S A. 2016; 113(50):14283–8.
Jensen, HJ, Vol. 10. Self-organized Criticality: Emergent Complex Behavior in Physical and Biological Systems. Cambridge lecture notes in physics. Cambridge: Cambridge University Press; 1998.
Krotov, D, Dubuis JO, Gregor T, Bialek W. Morphogenesis at criticality. Proc Natl Acad Sci. 2014; 111(10):3683–688. https://doi.org/10.1073/pnas.1324186111. Accessed 25 Feb 2016.
Maynard Smith, J. Evolution and the Theory of Games. Cambridge: Cambridge University Press; 1982.
Raup, DM. The role of extinction in evolution. Proc Natl Acad Sci. 1994; 91(15):6758–763. https://doi.org/10.1073/pnas.91.15.6758. Accessed 25 Feb 2016.
Sornette, D. Critical Phenomena in Natural Sciences: Chaos, Fractals, Selforganization, and Disorder: Concepts and Tools. Springer series in synergetics. Berlin: Springer; 2000.
Wu, A, Liao D, Tlsty TD, Sturm JC, Austin RH. Game theory in the death galaxy: interaction of cancer and stromal cells in tumour microenvironment. Interface Focus. 2014; 4(4):20140028. https://doi.org/10.1098/rsfs.2014.0028. Accessed 25 Feb 2016.
Yang, CB. The origin of power-law distributions in self-organized criticality. J Phys A Math Gen. 2004; 37(42):523–9.
We would like to thank Donald Coffey, James Frost, Robert Gatenby and Robert Axelrod for fruitful discussions.
Funding: This work was supported by NIH grants U54CA163214 (K.J.P.), 1PO1CA093900 (K.J.P.), U01CA143055 (K.J.P.), U54CA143803 (K.J.P., R.H.A.), and the Prostate Cancer Foundation.
The MATLAB 2016b source code can be obtained from us.
Banter AI, 408 Florence St., Palo Alto CA, 94301, USA
Amy Wu
Department of Pathology, University of California at San Francisco, San Francisco, 94143, USA
David Liao
Department of Physics, Princeton University, Princeton, 08544, NJ, USA
Vlamimir Kirilin & Robert Austin
Department of Electrical Engineering, Princeton University, Princeton, 08544, USA
Ke-Chih Lin & James C. Sturm
The Johns Hopkins Hospital, 1800 Orleans St., Baltimore MD, 21287, USA
Gonzalo Torga & Kenneth Pienta
College of Optoelectronic Engineering, Shenzhen University, Shenzhen, 518060, China
Junle Qu
College of Physics, Chongqing University, Chongqing China, 400044, China
Liyu Liu
Vlamimir Kirilin
Ke-Chih Lin
Gonzalo Torga
James C. Sturm
Kenneth Pienta
AW, DL, and VK provided the initial theoretical concepts, and AW did much of the simulations. K-CL, GT, LL, and KJP provided much of the insight into the cancer implications. JQ and JCS provided technological insight into possible experiments. RHA wrote the final MS. All authors read and approved the final manuscript.
Correspondence to Robert Austin.
Authors' information
Although 3/4 of the authors are primarily physics-based experimenters (Amy Wu, KC Lin, Junle Qu, Liyu Liu, James Sturm, Robert Austin) or clinicians (Gonzalo Torga MD, Kenneth J. Pienta MD), we felt it was important to write a paper on Game Theory from a theoretical/modeling perspective to highlight how game theory could be used to help understand dormancy and the role of criticality, using the insights of the clinicians and experimenters.
Wu, A., Liao, D., Kirilin, V. et al. Cancer dormancy and criticality from a game theory perspective. Cancer Converg 2, 1 (2018). https://doi.org/10.1186/s41236-018-0008-0
Dormancy
Perturbations | CommonCrawl |
\begin{document}
\title{Quantum double inclusions associated to a family of Kac algebra subfactors} \author{Sandipan De} \address{Stat-Math Unit\\Indian Statistical Institute, 8th Mile, Mysore Road\\ Bangalore-560059} \email{sandipan$\[email protected]}
\keywords{Subfactors, Hopf algebras, Planar algebras} \subjclass[2010]{46L37, 16S40, 16T05} \maketitle
\begin{abstract} In \cite{Sde2018} we defined the notion of \textit{quantum double inclusion} associated to a finite-index and finite-depth subfactor and studied the quantum double inclusion associated to the Kac algebra subfactor $R^H \subset R$ where $H$ is a finite-dimensional Kac algebra acting outerly on the hyperfinite $II_1$ factor $R$ and $R^H$ denotes the fixed-point subalgebra.
In this article we analyse
quantum double inclusions associated to the family of Kac algebra subfactors given by
$\{ R^H \subset R \rtimes \underbrace{H \rtimes H^* \rtimes \cdots}_{{\text{$m$ times}}} : m \geq 1 \}$. For each $m > 2$, we construct
a model $\mathcal{N}^m \subset \mathcal{M}$ for the quantum double inclusion of
$\{ R^H \subset R \rtimes \underbrace{H \rtimes H^* \rtimes \cdots}_{{\text{$m-2$ times}}} \}$ with $\mathcal{N}^m =
((\cdots \rtimes H^{-2} \rtimes H^{-1}) \otimes (H^m \rtimes H^{m+1} \cdots))^{\prime \prime}, \mathcal{M} = (\cdots \rtimes H^{-1} \rtimes H^0
\rtimes H^1 \rtimes \cdots)^{\prime \prime}$ and where for any integer $i$, $H^i$ denotes $H$ or $H^*$ according as $i$ is odd or even.
In this article, we give an explicit
description of $P^{\mathcal{N}^m \subset \mathcal{M}}$ ($m > 2$), the subfactor planar algebra associated to
$\mathcal{N}^m \subset \mathcal{M}$, which turns out to be a planar subalgebra of $^{*(m)}\!P(H^m)$
(the adjoint of the $m$-cabling of the planar algebra of $H^m$).
We then show that for $m > 2$, the depth of $\mathcal{N}^m \subset \mathcal{M}$ is always two.
Observing that $\mathcal{N}^m \subset \mathcal{M}$ is reducible for all $m > 2$, we
explicitly
describe the weak Hopf $C^*$-algebra structure on $(\mathcal{N}^m)^{\prime} \cap \mathcal{M}_2$, thus obtaining a family of
weak Hopf $C^*$-algebras starting with a single Kac algebra $H$. \end{abstract}
\section*{Introduction}
The motivation for this article primarily stems from the work of the author in \cite{Sde2018}. Given a finite-index and finite-depth subfactor $N \subset M$ with $N ( = M_0) \subset M ( = M_1) \subset M_2 \subset M_3 \subset \cdots$ being the Jones' basic construction tower associated to $N \subset M$, we defined in \cite{Sde2018} the inclusion \begin{align*}
N \vee (M^{\prime} \cap M_{\infty}) \subset M_{\infty} \end{align*} to be the \textit{quantum double inclusion} associated to $N \subset M$ where $M_{\infty}$ denotes the $II_1$ factor obtained as the von Neumann closure $(\cup_{n = 0}^{\infty} M_n)^{\prime \prime}$ in the GNS representation with respect to the trace on $\cup_{n = 0}^{\infty} M_n$ and $N \vee (M^{\prime} \cap M_{\infty})$ denotes the von Neumann algebra generated by $N$ and $M^{\prime} \cap M_{\infty}$. In \cite{Sde2018} we studied the quantum double inclusion associated to the Kac algebra subfactor $R^H \subset R$ where $H$ is a finite-dimensional Kac algebra acting outerly on the hyperfinite $II_1$ factor $R$ and $R^H$ denotes the fixed-point subalgebra. The main result of \cite{Sde2018} states that the quantum double inclusion of $R^H \subset R$ is isomorphic to $R \subset R \rtimes D(H)^{cop}$ for some outer action of $D(H)^{cop}$ on $R$ where $D(H)$ denotes the Drinfeld double of $H$. This result seemed to be quite interesting and motivated us to analyse quantum double inclusions associated to a general class of Kac algebra subfactors given by $\{ R^H \subset R \rtimes \underbrace{H \rtimes H^* \rtimes \cdots}_{{\text{$m$ times}}} : m \geq 1 \}$.
One of the main steps towards understanding the quantum double inclusions associated to the family of subfactors $\{ R^H \subset R \rtimes \underbrace{H \rtimes H^* \rtimes \cdots}_{{\text{$m$ times}}} : m \geq 1 \}$ is to construct their models. Given any finite-dimensional Kac algebra $H$, let $H^i$, where $i$ is any integer, denote $H$ or $H^*$ according as $i$ is odd or even. For each positive integer $m > 2$, we construct in $\S 2$ a hyperfinite, finite-index subfactor $\mathcal{N}^m \subset \mathcal{M}$ where $\mathcal{N}^m = ((\cdots \rtimes H^{-3} \rtimes H^{-2} \rtimes
H^{-1}) \otimes (H^m \rtimes H^{m+1} \rtimes \cdots))^{\prime \prime}, \ \mathcal{M} = (\cdots \rtimes H^{-1} \rtimes H^0 \rtimes H^1 \rtimes
\cdots)^{\prime \prime}$ and show that $\mathcal{N}^m \subset \mathcal{M}$ is a model for the quantum double inclusion of $R^H \subset
R \rtimes \underbrace{H \rtimes H^* \rtimes \cdots}_{{\text{$m-2$ times}}}$.
The heart of the paper is $\S 3$ where
we compute the basic construction tower associated to $\mathcal{N}^m \subset \mathcal{M}$ and also compute the relative commutants.
The proofs all rely on explicit pictorial computations in the planar algebra of $H$ or $H^*$.
In $\S 4$, we explicitly describe the planar algebra
associated to the subfactor $\mathcal{N}^m \subset \mathcal{M}$ ($m > 2$) which turns out to be an interesting planar subalgebra of
$^{*(m)}\!P(H^m)$ (the adjoint of the $m$-cabling of the planar algebra of $H^m$).
It is evident from the main result of \cite{Sde2018} that the quantum double inclusion of $R^H \subset R$ is of depth two. It is thus a natural question to ask whether the quantum double inclusions associated to the family of subfactors $\{ R^H \subset
R \rtimes \underbrace{H \rtimes H^* \rtimes \cdots}_{{\text{$m$ times}}} : m \geq 1 \}$ have finite depth.
In this article we answer this question in the affirmative by proving that for $m > 2$, the depth of $\mathcal{N}^m \subset \mathcal{M}$ is always $2$ (Theorem 11). This is the main result of $\S 5$. One primary ingredient of the proof is Lemma \ref{element} where we identify the commutant of the middle $H$ in $H^* \rtimes H \rtimes H^*$.
In \cite{Sde2018} we constructed a model $\mathcal{N} \subset \mathcal{M}$ for the quantum double inclusion of $R^H \subset R$. As an immediate consequence of the main result \cite[Theorem 40]{Sde2018} one obtains that the relative commutant $\mathcal{N}^{\prime} \cap \mathcal{M}_2$ is isomorphic to $D(H)^{cop*} (= D(H)^{*op})$ as Kac algebras. In $\S 6$ we explicitly describe the structure maps of $\mathcal{N}^{\prime} \cap \mathcal{M}_2$ which will be useful to achieve a simple and nice description of the weak Hopf $C^*$-algebra structures on $(\mathcal{N}^m)^{\prime} \cap \mathcal{M}_2$ ($m > 2$) in $\S 7$.
It is well-known (see \cite{NikVnr2000}, \cite{Das2004}) that if $N \subset M$ is a finite-index reducible depth $2$ inclusion of $II_1$ factors and if $N (=M_0) \subset M (= M_1) \subset M_2 \subset M_3 \subset \cdots$ is the Jones' basic construction tower associated to $N \subset M$, then the relative commutants $N^{\prime} \cap M_2$ and $M^{\prime} \cap M_3$ admit mutually dual weak Hopf $C^*$-algebra structures. Now, for each $m > 2$, the subfactor $\mathcal{N}^m \subset \mathcal{M}$ being reducible and of depth $2$, $(\mathcal{N}^m)^{\prime} \cap \mathcal{M}_2$ admits a weak Hopf $C^*$-algebra structure. The final $\S 7$ is devoted to recovering the weak Hopf $C^*$-algebra structure on
$(\mathcal{N}^m)^{\prime} \cap \mathcal{M}_2$ for all $m > 2$.
Here, in Theorem \ref{WHA}, we construct a family $\{K_m : m > 2\}$ of weak Hopf $C^*$-algebras, with underlying vector spaces
$A(H)_{m-2}^{op} \otimes D(H)^{*op} \otimes
A(H)_{m-2}^{op}$ or
$A(H^*)_{m-2}^{op} \otimes D(H)^{*op} \otimes
A(H)_{m-2}^{op}$ according as $m$ is odd or even,
such that $K_m \cong (\mathcal{N}^m)^{\prime} \cap \mathcal{M}_2$ as weak Hopf $C^*$-algebras where, for any positive integer $l$ and
any finite-dimensional Kac algebra $K$, $A(K)_l$ denotes
the finite crossed product algebra $\underbrace{K \rtimes K^* \rtimes \cdots }_{\text{$l$ times}}$.
\section{Preliminaries}
The prerequisites for this article can be found in $\S 1$ and $\S 2$ of \cite{Sde2018}.
For the reader's convenience, we briefly explain below the notation and recall some
necessary facts that
will be frequently used in the sequel.
\subsection{Crossed product by Kac algebras.} Throughout this article $H (= H(\mu, \eta, \Delta, \varepsilon, S, *))$ will denote a finite-dimensional Kac algebra and $\delta$, the positive square root of $dim ~H$. We set $H^i = H$ or $H^*$ according as $i$ is odd or even. The unique non-zero idempotent integrals of $H^*$ and $H$ will be denoted by $\phi$ and $h$ respectively and moreover, for any non-negative integer $i$, the symbols $\phi^i$ and $h^i$ will always denote a copy of $\phi$ and $h$ respectively. It is a fact that $\phi(h) = \frac{1}{dim ~H}$. The letters $x, y, z, t$ will always denote an element of $H$ and for any integer $i$, the symbols $x^i, y^i, z^i, t^i$ will always represent an element of $H$. The letters $f, g, k$ will always denote an element of $H^*$ and for any integer $i$, the symbols $f^i, g^i$ and $k^i$ will always represent an element of $H^*$.
Given $x \in H, \Delta(x)$ is denoted by $x_1 \otimes x_2$ (a simplified version of the Sweedler coproduct notation). We draw the reader's attention to a notational abuse of which we will often be guilty. We denote elements of a tensor product as decomposable tensors with the understanding that there is an implied omitted summation (just as in our simplified Sweedler notation). Thus, when we write `suppose $f \otimes x \in H^* \otimes H$', we mean `suppose $\sum_i f^i \otimes x^i \in H^* \otimes H$' (for some $f^i \in H^*$ and $x^i \in H$, the sum over a finite index set).
We refer to \cite[$\S 1$]{Sde2018} for the notion of action of $H$ on a finite-dimensional complex $*$-algebra say, $A$ and the construction of the corresponding crossed product algebra, denoted $A \rtimes H$. Though the vector space underlying $A \rtimes H$ is $A \otimes H$, we denote a general element of $A \rtimes H$ by $a \rtimes x$ instead of $a \otimes x$. There is a natural action of $H^*$ on $H$ given by
$f . x = f(x_2) x_1$ for $f \in H^*, x \in H.$ Similarly we have action of $H$ on $H^*$. If $H$ acts on $A$,
then $H^*$ also acts on $A \rtimes H$ just by acting on $H$-part and ignoring the $A$-part, meaning that,
$f. (a \rtimes x) = a \rtimes f.x = f(x_2) a \rtimes x_1$ for $f$ in $H^*$ and $a \rtimes x \in A \rtimes H$ and consequently, we can construct $A \rtimes H \rtimes
H^*$. Continuing this way, we may construct $A \rtimes H \rtimes H^* \rtimes \cdots$.
For integers $i \leq j$, we define $H_{[i, j]}$ to be the crossed product algebra $H^i \rtimes H^{i+1} \rtimes \cdots \rtimes H^j$.
If $i = j$, we will simply write $H_i$ to denote $H_{[i, i]}$ and if $i > j$, we take $H_{[i, j]}$ to be $\mathbb{C}$.
A typical element of $H_{[i, j]}$ will be denoted by $x^i / f^i \rtimes f^{i+1} / x^{i+1} \rtimes \cdots$ ($j-i+1$ terms).
We use the symbol $A(H)_l$, where $l$ is any positive integer, to denote the crossed product algebra
$\underbrace{H \rtimes H^* \rtimes \cdots}_{\text{$l$ \mbox{terms}}}$.
Following \cite[$\S 1$]{Sde2018}, we denote by $H_{(-\infty, \infty)}$ the algebra which, by definition, is the `union' of all the $H_{[i, j]}$. Note
that a typical element of $H_{(-\infty, \infty)}$ is a finite sum of terms of the form $\cdots \rtimes x^{-1} \rtimes f^0 \rtimes x^1 \rtimes
\cdots$ where in any such term all but finitely many of the $f^i$ are $\epsilon$ and all but finitely many of the $x^i$ are $1$. For any
integer $m$, $H_{[m, \infty)}$ denotes the subalgebra of $H_{(-\infty, \infty)}$ which consists of all (finite sums of) elements
$\cdots \rtimes x^{-1} \rtimes f^0 \rtimes x^1 \rtimes
\cdots$ of $H_{(-\infty, \infty)}$ where for $i < m$, $f^i = \epsilon$ if $i$ even and $x^i = 1$ if $i$ is odd. Similarly, we define the subalgebra
$H_{(-\infty, m]}$ of $H_{(-\infty, \infty)}$.
It is worth mentioning that the family $\{H_{(-\infty, -1]} \otimes H_{[m, \infty)} \subset H_{(-\infty, \infty)} : m > 1 \}$ of inclusions
of infinite iterated crossed product algebras will be used in $\S 2$ to construct models for quantum double inclusions
associated to the family of Kac algebra subfactors given by
$\{ R^H \subset R \rtimes \underbrace{H \rtimes H^* \rtimes \cdots}_{{\text{$m$ times}}} : m \geq 1 \}$.
The following results will be very useful. We refer to \cite[Theorem 2.1, Corollary 2.3(ii)]{BlaMnt1985} for the proof of Lemma \ref{matrixalg}, \cite[Lemma 4.5.3]{Jjo2008} or \cite[Proposition 3]{DeKdy2015} for the proof of Lemma \ref{commutants} and \cite[Lemma 4.2.3]{Jjo2008} for the proof of Lemma \ref{anti1}. \begin{lemma}\label{matrixalg}
$H \rtimes H^* \rtimes H \rtimes \cdots$ ($2k$-terms) is isomorphic to the matrix algebra $M_{n^k}(\mathbb{C})$ where $n = dim ~H$. \end{lemma}
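For instance, when $k = 1$ this recovers the duality theorem of \cite{BlaMnt1985} in the form $H \rtimes H^* \cong End(H) \cong M_n(\mathbb{C})$; an explicit isomorphism (recorded here only as an illustration) sends $x \rtimes f$ to the operator $y \mapsto x(f.y)$ on $H$, where $f.y = f(y_2)y_1$ is the natural action recalled above.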
\begin{lemma}\label{commutants} For any $p\in {\mathbb Z}$, the subalgebras $H_{(-\infty,p]}$ and $H_{[p+2,\infty)}$ are mutual commutants in $H_{(-\infty, \infty)}$. \end{lemma}
Let $i \leq j$ and $p \leq q$ be integers such that $j-i = q-p$, and assume that $j$ and $p$ (resp., $i$ and $q$) have the same parity. Given $X \in H_{[i, j]}$, let $X^{\prime}$ denote the element obtained by `flipping $X$ about $i$ (equivalently, $j$)' and then applying $S^{\otimes(j-i+1)}$ to this flipped element. For instance, if we assume $i$ to be odd and $j$ to be even and if $X = x^i \rtimes f^{i+1} \rtimes \cdots \rtimes f^j \in H_{[i, j]}$, then $X^{\prime}$ is given by $Sf^j \rtimes Sx^{j-1} \rtimes \cdots \rtimes Sf^{i+1} \rtimes Sx^i$. It is evident that $X^{\prime} \in H_{[p, q]}$. \begin{lemma}\label{anti1}
The map $X \mapsto X^{\prime}$ is a $*$-anti-isomorphism of $H_{[i, j]}$ onto $H_{[p, q]}$.
\end{lemma}
The Fourier transform map $F_H : H \rightarrow H^*$ is defined by $F_H(a) = \delta \phi_1(a)\phi_2$ and satisfies $F_{H^*} F_H = S$. We will usually omit the subscript of $F_H$ and $F_{H^*}$ and write both as $F$ with the argument making it clear which is meant.
\subsection{Planar algebras.}
For the basics of (subfactor) planar algebras, we refer to \cite{Jns1999}, \cite{KdyLndSnd2003} and \cite{KdySnd2004}. We will use the older notion of planar algebras where $Col$, the set of colours, is given by $\{(0, \pm),1,2,\cdots\}$ (note that only $0$ has two variants, namely, $(0, +)$ and $(0,-)$). This is equivalent to the newer notion of planar algebras (see \cite[$\S 2.2$]{DeKdy2016}) where $Col = \{(k, \pm) : k \geq 0 \ \mbox{integer}\}$ and we refer to \cite[Proposition 1]{DeKdy2016} for the proof of this equivalence. We will use the notation $T^{k_0}_{k_1, k_2, \cdots, k_b}$ to denote a tangle $T$ of colour $k_0$ (i.e., the colour of the external box of $T$ is $k_0$) with $b$ internal
boxes ($b$ may be zero also) such that the colour of the $i$-th internal box is $k_i$. Given a tangle $ T = T^{k_0}_{k_1, k_2, \cdots, k_b}$ and
a planar algebra $P$, $Z^P_T$ will always denote the associated linear map from $P_{k_1} \otimes P_{k_2} \otimes \cdots \otimes P_{k_b}$ to $P_{k_0}$
induced by the tangle $T$.
We will also find it useful to recall the notions of cabling and adjoints for tangles and for planar algebras. Given any positive integer $m$, and a tangle $T$, say $T = T_{k_1, k_2, \cdots, k_b}^{k_0}$, the $m$-cabling of $T$, denoted by $T^{(m)}$, is the tangle obtained from $T$ by replacing each string of $T$ by a parallel cable of $m$-strings.
It is worth noting that the number of internal boxes of $T^{(m)}$ and $T$ are the same and that if $k_i(T^{(m)})$ denotes the colour of the $i$-th internal disc of $T^{(m)}$, then \begin{align*}
k_i(T^{(m)}) =
\begin{cases}
mk_i, & \mbox{if} \ k_i > 0 \\
(0, +), & \mbox{if} \ k_i = (0, +) \\
(0, -), & \mbox{if} \ k_i = (0, -) \ \mbox{and} \ m \ \mbox{is odd}\\
(0, +), & \mbox{if} \ k_i = (0, -) \ \mbox{and} \ m \ \mbox{is even}.\\
\end{cases}
\end{align*}
Now given any planar algebra $P$, construct a new planar algebra $^{(m)}\!P$, called $m$-cabling of $P$, by setting
\begin{align*}
^{(m)}\!P_k =
\begin{cases}
P_{mk}, & \mbox{if} \ k > 0 \\
P_{(0, +)}, & \mbox{if} \ k = (0, +) \\
P_{(0, -)}, & \mbox{if} \ k = (0, -) \ \mbox{and} \ m \ \mbox{is odd}\\
P_{(0, +)}, & \mbox{if} \ k = (0, -) \ \mbox{and} \ m \ \mbox{is even}\\
\end{cases}
\end{align*}
and defining $Z_T^{^{(m)}\!P} = Z_{T^{(m)}}^P$ for any tangle $T$. Similarly, given a planar algebra $P$, we construct a new planar algebra ${^*\!P}$, called the adjoint of $P$, where for
any $k \in Col, {(^*\!P)_k} = P_k$ as vector
spaces and given any tangle $T$, the action $Z_T^{^*\!P}$ of $T$ on $^*\!P$ is specified by $Z_{T^*}^P$ where $T^*$ is the tangle obtained by
reflecting the tangle $T$ across any line in the plane.
\begin{figure}
\caption{trace tangle : $tr_k^{(0, +)}$(left) and rotation tangle : $R_k^k (k \geq 2)$(right)}
\label{fig:pic67}
\end{figure}
\begin{figure}
\caption{The tangles: $T^3$ (left) and $T^4$ (right)}
\label{fig:pic599}
\end{figure}
\begin{figure}
\caption{The tangles $A = A(0, 0)$(left) and $A(2m, 2n)$(right) when $m, n \geq 1$}
\label{fig:pic43}
\end{figure}
Observe that Figures \ref{fig:pic599} and \ref{fig:pic43} show some elements of two families of tangles. In Figure \ref{fig:pic599} we have the tangles $T^n$ of colour $n$ for $n \geq 2$, with exactly $n-1$ internal $2$-boxes and no internal regions illustrated for $n = 3$ and $n = 4$. In Figure \ref{fig:pic43} we have tangles $A(2m,2n)$ defined for $m,n \geq 0$ of colour $2m+2n+4$ with exactly $2m+2n+3$ internal $2$-boxes and no internal regions.
If $P$ is a subfactor planar algebra of modulus $d$, then for each $k \geq 1$, we refer to the (faithful, positive, normalised) trace
$\tau : P_k \rightarrow \mathbb{C}$ defined for $x \in P_k$ by
$\tau(x) = d^{-k} Z_{tr_k^{(0, +)}}(x)$ as the normalised pictorial trace on $P_k$ where $tr_k^{(0, +)}$ denotes the $(0, +)$ tangle with a single
internal $k$-box as shown in Figure \ref{fig:pic67}.
\subsection{Planar algebra of a Kac algebra.}
Suppose that $H$ acts outerly on the hyperfinite $II_1$ factor $M$. Let $P(H, \delta)$ (or, simply, $P(H)$) denote the subfactor planar algebra associated to $M^H \subset M$ where $M^H$ is the fixed-point subalgebra of $M$. We recall from \cite[Theorem 8]{Sde2018} (see also \cite{KdySnd2006}) the construction of $P(H)$. The planar algebra $P(H)$ is defined to be the quotient of the universal planar algebra on the label set $L = L_2 = H$ by the set of relations in Figures \ref{fig:LnrMdl} - \ref{fig:XchNtp} (where (i) we write the relations as identities - so the statement $a = b$ is interpreted as $a - b$ belongs to the set of relations; (ii) $\zeta \in k$ and $a,b \in H;$ and (iii) the external boxes of all tangles appearing in the relations are left undrawn and it is assumed that all external $*$-arcs are the leftmost arcs). \begin{figure}
\caption{The L(inearity) and M(odulus) relations}
\label{fig:LnrMdl}
\end{figure}
\begin{figure}
\caption{The U(nit) and I(ntegral) relations}
\label{fig:NitNtg}
\end{figure}
\begin{figure}
\caption{The C(ounit) and T(race) relations}
\label{fig:CntTrc}
\end{figure}
\begin{figure}
\caption{The E(xchange) and A(ntipode) relations}
\label{fig:XchNtp}
\end{figure} Note that the modulus relation is a pair of relations - one for each choice of shading the circle. Finally, note that the interchange of $\delta$ and $\delta^{-1}$ between the (I) and (T) relations here and those of \cite{KdySnd2006} is due to the different normalisations of $h$ and $\phi$.
A reformulation of Lemma 16 from \cite{KdySnd2006} will be useful. Let ${\mathcal T}(k, p) (p \leq k-1)$ denote the set of $k$-tangles (interpreted as $0$ for $k=0$) with $p$ internal boxes of colour $2$ and no `internal regions'. If $p = k-1$, we will simply write ${\mathcal T}(k)$ instead of ${\mathcal T}(k, k-1)$. The result then asserts:
\begin{lemma}\label{iso1} For each tangle $X \in {\mathcal T}(k, p)$, the map $Z_X^{P(H)}: (P(H)_2)^{\otimes p} \rightarrow P(H)_k$ is an injective linear map and if $p = k-1$, then $Z_X^{P(H)}: (P(H)_2)^{\otimes k-1} \rightarrow P(H)_k$ is a linear isomorphism. \end{lemma}
The following lemma (a reformulation of \cite[Proposition 4.3.1]{Jjo2008}) establishes algebra isomorphisms between $P(H)_k$ and finite iterated crossed product algebras. \begin{lemma}\label{iso}
For each $k \geq 2$, the map
from $\underbrace{H \rtimes H^* \rtimes \cdots}_{\text{$k-1$ \mbox{terms}}}$ to $P(H)_k$ given by
\begin{align*}
\underbrace{x^1 \rtimes f^2 \rtimes \cdots}_{\text{$k-1$ \mbox{terms}}} \mapsto
Z^{P(H)}_{T^k}(\underbrace{x^1 \otimes Ff^2 \otimes \cdots}_{\text{$k-1$ \mbox{terms}}})
\end{align*}
is a $*$-algebra isomorphism. \end{lemma}
We will use this identification of $\underbrace{H \rtimes H^* \rtimes \cdots}_{\text{$k-1$ \mbox{terms}}}$ with $P(H)_k$ very frequently
without mention. Finally, for $i \leq j$, $tr_{H_{[i, j]}}$ denotes the faithful, positive, tracial state on $H_{[i, j]}$ given by \begin{align*}
\begin{cases}
h^i \otimes \phi^{i+1} \otimes h^{i+2} \otimes \cdots (j-i+1 \mbox{-terms}), & \mbox{if} \ i \ \mbox{is even}\\
\phi^i \otimes h^{i+1} \otimes \phi^{i+2} \otimes \cdots (j-i+1 \mbox{-terms}), & \mbox{if} \ i \ \mbox{is odd}.
\end{cases} \end{align*} Thus, for instance, if we assume $i$ to be odd, $j$ to be even and if $X \in H_{[i, j]}$, say, $X = x^i \rtimes f^{i+1} \rtimes \cdots \rtimes x^{j-1} \rtimes f^j$, then $tr_{H_{[i, j]}}(X) = \phi^i(x^i) f^{i+1}(h^{i+1}) \cdots \phi^{j-1}(x^{j-1}) f^j(h^j)$. \subsection{Drinfeld double construction.} The Drinfeld double or quantum double construction is a construction that builds a quasitriangular Hopf algebra out of any finite-dimensional Hopf algebra. The Drinfeld double of $H$ is denoted by $D(H)$. The definition of $D(H)$ is not uniform in the literature. As in \cite{DeKdy2016}, what we actually use is an isomorphic variant of the version of $D(H)$ in \cite{Mjd2002}, which has underlying vector space $H^* \otimes H$, and the structure maps are given by the following formulae: \begin{eqnarray*} (f \otimes x)(g \otimes y) &=& g_1(x_1)g_3(Sx_3) (fg_2 \otimes yx_2),\\ \Delta(f \otimes x) &=& (f_2 \otimes x_2) \otimes (f_1 \otimes x_1), {\mbox { and}}\\ S(f \otimes x) &=& f_1(Sx_1)f_3(x_3) (S^{-1}f_2 \otimes Sx_2). \end{eqnarray*} One can easily verify that the structure maps of $D(H)^*$ are given by the following formulae: \begin{eqnarray*}
& (f \otimes x) (g \otimes y) = gf \otimes yx,\\
&\Delta(f \otimes x) = \delta^2 \phi_2(x_2) \phi_4(Sh_2) (\phi_1 f_2 S\phi_3 \otimes x_1) \otimes (f_1 \otimes h_1),\\
& S(f \otimes x) = \delta^2 \phi_4(x) \phi_2(h_2) \phi_1SfS\phi_3 \otimes h_1, \\
& \varepsilon(f \otimes x) = f(1) \epsilon(x).\\
\end{eqnarray*}
Consider the linear isomorphism $Id_{H^*} \otimes F_H^{-1} : H^* \otimes H^* \rightarrow D(H)^*$. We can make $H^* \otimes H^*$ into a Kac algebra
where the structure maps are obtained by transporting the structure maps on $D(H)^*$ using this linear isomorphism.
Thus, by construction, $H^* \otimes H^*$ is isomorphic to
$D(H)^*$ as a Kac algebra. The following lemma explicitly describes the structure maps on $H^* \otimes H^*$.
\begin{lemma}\label{dr}
The structure maps on $H^* \otimes H^*$ are given by the following formulae:
\begin{eqnarray*}
& (g \otimes f) (k \otimes p) = \delta (Sf_2 p)(h) kg \otimes f_1,\\
&\Delta(g \otimes f) = \delta (\phi_1 g_2 S\phi_3 \otimes f S\phi_2) \otimes (g_1 \otimes \phi_4),\\
& S(g \otimes f) = f_1 Sg Sf_3 \otimes Sf_2, \\
& \varepsilon(g \otimes f) = \delta f(h) g(1).\\
\end{eqnarray*}
\end{lemma}
\begin{proof}
Straightforward to verify: transport each structure map of $D(H)^*$ through the linear isomorphism $Id_{H^*} \otimes F_H^{-1}$ and simplify using $F_{H^*}F_H = S$; the details are left to the reader.
\end{proof}
\section{Construction of models for the quantum double inclusion}
In \cite[$\S 3$]{Sde2018} we defined the notion of quantum double inclusion associated to a finite-index and finite-depth subfactor and constructed a model for the quantum double inclusion of $R^H \subset R$. In a similar way we construct in this section models for the quantum double inclusions of the family of subfactors $\{ R^H \subset R \rtimes \underbrace{H \rtimes H^* \rtimes \cdots}_{{\text{$m$ times}}} : m \geq 1 \}$.
We begin by recalling from \cite{Sde2018} the notion of quantum double inclusion. Given a finite-index and finite-depth subfactor $N \subset M$, let $N ( = M_0) \subset M ( = M_1) \subset M_2 \subset M_3 \subset \cdots$ denote the basic construction tower of $N \subset M$. Let $M_{\infty}$ denote the $II_1$ factor obtained as the von Neumann closure $(\cup_{n = 0}^{\infty} M_n)^{\prime \prime}$ in the GNS representation with respect to the trace on $\cup_{n = 0}^{\infty} M_n$. Then the inclusion \begin{align*}
N \vee (M^{\prime} \cap M_{\infty}) \subset M_{\infty} \end{align*} is defined to be the quantum double inclusion associated to $N \subset M$.
It is well-known that for any positive integer $k$, $\mathbb{C} \subset H_k \subset H_{[k-1, k]} \subset H_{[k-2, k]} \subset H_{[k-3, k]} \subset \cdots$ is the basic construction tower associated to the initial (connected) inclusion $\mathbb{C} \subset H_k$ so that $H_{(-\infty, k]} ( = \cup_{i = 0}^{\infty} H_{[k-i, k]} )$ comes equipped with a tracial state and consequently, \begin{align*}
H_{(-\infty, k]}^{\prime \prime} := (\cup_{i=0}^{\infty} H_{[k-i, k]})^{\prime \prime} = (H_{(-\infty, k]})^{\prime \prime} \end{align*} turns out to be a hyperfinite $II_1$ factor. It is also well-known (see \cite[Theorem 4.11]{KdyLndSnd2003}) that the basic construction tower associated to $R^H \subset R$ is given by: \begin{align*}
R^H \subset R \subset R \rtimes H \subset R \rtimes H \rtimes H^* \subset R \rtimes H \rtimes H^* \rtimes H \subset \cdots. \end{align*} The following lemma (\cite[Lemma 17]{Sde2018}) describes models for $R^H \subset R$ as well as for the basic construction tower of $R^H \subset R$. \begin{lemma}\label{z}
$H_{(-\infty, -1]}^{\prime \prime} \subset H_{(-\infty, 0]}^{\prime \prime}$ is a model for $R^H \subset R$
for some outer action of $H$ on the hyperfinite $II_1$ factor $R$ and $H_{(-\infty, -1]}^{\prime \prime} \subset
H_{(-\infty, 0]}^{\prime \prime} \subset H_{(-\infty, 1]}^{\prime \prime} \subset H_{(-\infty, 2]}^{\prime \prime} \subset \cdots$ is a
model for the basic construction tower of $R^H \subset R$. \end{lemma} As an immediate consequence of Lemma \ref{z} we obtain that $(\cup_{i=-1}^{\infty} H_{(-\infty, i]}^{\prime \prime})^{\prime \prime}$ is a hyperfinite $II_1$ factor. It is not hard to see that $(\cup_{i=-1}^{\infty} H_{(-\infty, i]}^{\prime \prime})^{\prime \prime} = (H_{(-\infty, \infty)})^{\prime \prime}$. We set \begin{align*}
H^{\prime \prime}_{(-\infty, \infty)} := (H_{(-\infty, \infty)})^{\prime \prime}. \end{align*} It follows easily from Lemma \ref{z} and \cite[Proposition 4.3.6]{JnsSnd1997} that: \begin{lemma}
Given any positive integer $m$, $H_{(-\infty, -1]}^{\prime \prime} \subset H_{(-\infty, m]}^{\prime \prime}$ is a model for $R^H \subset
R \rtimes \underbrace{H \rtimes H^* \rtimes \cdots}_{\text{$m$ times}}$
and $H_{(-\infty, -1]}^{\prime \prime} \subset
H_{(-\infty, m]}^{\prime \prime} \subset H_{(-\infty, 2m+1]}^{\prime \prime} \subset H_{(-\infty, 3m+2]}^{\prime \prime} \subset \cdots$
is a model for the basic construction tower of $R^H \subset R \rtimes \underbrace{H \rtimes H^* \rtimes \cdots}_{\text{$m$ times}}$.
\end{lemma} Thus for any positive integer $m$, a model for the \textit{quantum double inclusion} of $R^H \subset R \rtimes \underbrace{H \rtimes H^* \rtimes \cdots}_{{\text{$m$ times}}}$ is given by
\begin{align*}
H_{(-\infty, -1]}^{\prime \prime} \vee ((H_{(-\infty, m]}^{\prime \prime})^{\prime} \cap H^{\prime \prime}_{(-\infty, \infty)}) \subseteq
H^{\prime \prime}_{(-\infty, \infty)}.
\end{align*}
By an appeal to \cite[Lemma 14(2)]{Sde2018}, one can easily see that
\begin{align*}
(H_{(-\infty, m]}^{\prime \prime})^{\prime} \cap H^{\prime \prime}_{(-\infty, \infty)} = H^{\prime \prime}_{[m+2, \infty)}
\end{align*}
and consequently,
\begin{align*}
H_{(-\infty, -1]}^{\prime \prime} \vee
((H_{(-\infty, m]}^{\prime \prime})^{\prime} \cap H^{\prime \prime}_{(-\infty, \infty)}) =
H_{(-\infty, -1]}^{\prime \prime} \vee H^{\prime \prime}_{[m+2, \infty)} =
(H_{(-\infty, -1]} \otimes H_{[m+2, \infty)})^{\prime \prime}.
\end{align*}
\begin{definition}\label{def1}
For each integer $m > 2$, set $\mathcal{N}^m = (H_{(-\infty, -1]} \otimes H_{[m, \infty)})^{\prime \prime}$ and
$\mathcal{M} = H_{(-\infty, \infty)}^{\prime \prime}$.
\end{definition}
We have thus shown that:
\begin{proposition}\label{qdim}
For each integer $m > 2$, the subfactor $\mathcal{N}^m \subset \mathcal{M}$ is a model for the quantum double inclusion of $R^{H} \subset
R \rtimes \underbrace{H \rtimes H^* \rtimes \cdots}_{{\text{$m-2$ times}}}$.
\end{proposition}
\section{Basic construction tower of $\mathcal{N}^m \subset \mathcal{M}, m > 2$ and relative commutants} The purpose of this section is to construct the basic construction tower associated to $\mathcal{N}^m \subset \mathcal{M} \ (m > 2)$ and also to compute the relative commutants. \subsection{Some finite-dimensional basic constructions.} This subsection is devoted to analysing the basic constructions associated to certain unital inclusions of finite-dimensional $C^*$-algebras. We begin with recalling the following lemma (a reformulation of Lemma 5.3.1 of \cite{JnsSnd1997}) which provides an abstract characterisation of the basic construction associated to a unital inclusion of finite-dimensional $C^*$-algebras. \begin{lemma}\cite[Lemma 5.3.1]{JnsSnd1997} \label{basic}
Let $A \subseteq B \subseteq C$ be a unital inclusion of finite-dimensional $C^*$-algebras. Let $tr_B$ denote a faithful tracial state on $B$ and
let $E_A$ denote the $tr_B$-preserving conditional expectation of $B$ onto $A$. Let $f \in C$ be a projection. Then $C$ is isomorphic to the
basic construction for $A \subseteq B$ with $f$ as the Jones projection if the following conditions are satisfied:
\begin{itemize}
\item[(i)] $f$ commutes with every element of $A$ and $a \mapsto af$ is an injective map of $A$ into $C$,
\item[(ii)] $f$ implements the trace-preserving conditional expectation of $B$ onto $A$ i.e., $fbf = E_A(b)f$ for all $b \in B$, and
\item[(iii)] $BfB = C$.
\end{itemize}
\end{lemma}
In the next lemma, we explicitly compute a certain conditional expectation map. \begin{lemma}\label{exp}
Given integers $l, p \geq 1$ and $s \geq 0$, let $\psi_{l, s, p}$ denote the embedding of $H_{[-l, p+s]}$ inside
$H_{[-l, -1]} \otimes H_{[p, 3p+s]}$ specified as follows:
\begin{align*}
\mbox{Let} \ X = x^{-l}/f^{-l} \rtimes \cdots \rtimes x^{p+s}/f^{p+s} \in H_{[-l, p+s]}, \end{align*} then $\psi_{l, s, p}(X) \in H_{[-l, -1]} \otimes H_{[p, 3p+s]}$ is given by \begin{align*}
(x^{-l}/f^{-l} \rtimes \cdots \rtimes f^{-2} \rtimes x^{-1}_1) \otimes
(\underbrace{1 \rtimes \epsilon \rtimes 1 \rtimes \cdots \rtimes \epsilon}_\text{$p-1$ \mbox{terms}}
\rtimes \ x^{-1}_2 \rtimes f^0 \rtimes \cdots \rtimes x^{p+s}/f^{p+s}) \end{align*} or \begin{align*}
(x^{-l}/f^{-l} \rtimes \cdots \rtimes f^{-2} \rtimes x^{-1}_1) \otimes (\underbrace{\epsilon \rtimes 1 \rtimes \epsilon \rtimes \cdots \rtimes \epsilon}_\text{$p-1$ \mbox{terms}}
\rtimes \ x^{-1}_2 \rtimes f^0 \rtimes \cdots \rtimes x^{p+s}/f^{p+s}) \end{align*} according as $p$ is odd or even. Then the trace-preserving conditional expectation $E$ of $H_{[-l, -1]} \otimes H_{[p, 3p+s]}$ onto $H_{[-l, p+s]}$ is given by
\begin{align*}
& E((x^{-l}/ f^{-l} \rtimes \cdots \rtimes x^{-1}) \otimes (x^p/f^p \rtimes f^{p+1}/x^{p+1} \rtimes \cdots \rtimes x^{3p+s}/f^{3p+s}))\\
& = \phi(Sx_2^{-1}x^{2p-1}) tr_{H_{[p, 2p-2]}}(x^p/f^p \rtimes \cdots \rtimes f^{2p-2}) x^{-l}/f^{-l} \rtimes \cdots \rtimes f^{-2} \rtimes
x^{-1}_1 \rtimes f^{2p} \rtimes \cdots \rtimes x^{3p+s}/f^{3p+s}.
\end{align*}
\end{lemma}
\begin{proof}
In \cite[Lemma 21(ii)]{Sde2018} we proved the result for $p = 2$. The proof of the general case follows in a similar fashion
and hence we omit it.
\end{proof}
Next, we apply Lemma \ref{exp} to explicitly describe certain basic constructions and their associated Jones
projections.
\begin{proposition}\label{basic cons}
The following are instances of basic constructions with the Jones projections being specified pictorially in appropriate planar algebras.
\begin{itemize}
\item[1.] If $l \geq 1, s \geq 0$ are integers, then given any positive integer $p$, $H_{[-l, -1]} \otimes H_{[p, p+s]} \subset H_{[-l, p+s]} \subset H_{[-l, -1]} \otimes H_{[p, 3p+s]} (\subset H_{[-l, 3p+s]} \cong P(H^l)_{3p+s+l+2})$ is an instance of the basic construction with the Jones projection given by the following figure \begin{center} \begin{tikzpicture} \node at (1.3,0) {\tiny $\delta^{-p}$};
\draw [black,thick] (1.96,.5) -- (1.96,-.5);
\node [right] at (2,0) {\tiny $p+l$};
\draw [black,thick] (3,.5) to [out=275, in=265] (3.5,.5);
\draw [black,thick] (3,-.5) to [out=85, in=95] (3.5,-.5);
\node [below] at (3.3,.4) {\tiny $p$};
\node [above] at (3.3,-.4) {\tiny $p$};
\draw [black,thick] (4,.5) -- (4, -.5);
\node [right] at (4.002,0) {\tiny $s+2$};
\end{tikzpicture}
\end{center} where the first inclusion is natural and the second inclusion is given by the map $\psi_{l, s, p}$ as defined in the statement of Lemma \ref{exp}. Furthermore, $tr_{H_{[-l, p+s]}}$ is a Markov trace of modulus $\delta^{2p}$ for the inclusion $H_{[-l, -1]} \otimes H_{[p, p+s]} \subset H_{[-l, p+s]}$.
\item[2.] If $l \geq 1, s \geq 0$ are integers, then given any positive integer $p$,
$H_{[-l, p+s]} \subset H_{[-l, -1]} \otimes H_{[p, 3p+s]} \subset H_{[-l, 3p+s]} (\cong P(H^l)_{3p+s+l+2})$ is an instance of the
basic construction with the Jones projection given by
\begin{center}
\begin{tikzpicture}
\node at (.7,0) {\tiny $\delta^{-p}$};
\draw [black,thick] (1.4,.5) -- (1.4,-.5);
\node [right] at (1.42,0) {\tiny $l$};
\draw [black,thick] (2,.5) to [out=275, in=265] (2.5,.5);
\draw [black,thick] (2,-.5) to [out=85, in=95] (2.5,-.5);
\node [below] at (2.3,.4) {\tiny $p$};
\node [above] at (2.3,-.4) {\tiny $p$};
\draw [black,thick] (3,.5) -- (3, -.5);
\node [right] at (3.001,0) {\tiny $p+s+2$};
\end{tikzpicture}
\end{center}
where the first inclusion is given by the map $\psi_{l, s, p}$ as described in the statement of Lemma \ref{exp} and the second inclusion is the natural
inclusion. Also, $tr_{H_{[-l, -1]} \otimes H_{[p, 3p+s]}}$ is a Markov trace of modulus $\delta^{2p}$ for the inclusion
$H_{[-l, p+s]} \subset H_{[-l, -1]} \otimes H_{[p, 3p+s]}$.
\end{itemize}
\end{proposition}
\begin{proof}
In \cite[Proposition 22(2), 22(3)]{Sde2018} we proved the result for $p = 2$; the proof of the general case follows in a similar fashion.
For the sake of completeness, we provide the proof of
only one part, namely part 2, which is also the harder part.
\begin{itemize}
\item [2.] We only present the proof when
$l = 1$ and $s = 0$, the proof in the general case being analogous. Let $e$ denote the projection defined in the statement
of Proposition \ref{basic cons}(2). We identify, as usual, $H_{[-1, 6m-3]}$ with $P(H)_{6m}$.
Given $X = x^{-1} \rtimes f^0 \rtimes \cdots \rtimes x^{2m-1} \in H_{[-1, 2m-1]}$,
its image in $H_{[-1, 6m-3]}$ is given by
\begin{align*}
x^{-1}_1 \rtimes \underbrace{\epsilon \rtimes 1 \rtimes \cdots \rtimes \epsilon}_{\text{$4m-3$ \ factors}} \rtimes \ x^{-1}_2 \rtimes
f^0 \rtimes \cdots \rtimes x^{2m-1}.
\end{align*}
The element $eX$ is shown on the left in Figure \ref{fig:D44}. An application of the relation (E) shows that $eX$ equals the element on the
right in Figure \ref{fig:D44}. Similarly, by an appeal to the relations (A) and (E), one can easily see that the element $Xe$ equals the element on the
right in Figure \ref{fig:D44} so that $e X = X e$. Thus, we conclude that $e$ commutes with $X$. Further, it is evident from the pictorial representation
of the element $Xe$ as shown on the right in Figure \ref{fig:D44} that the map $X \mapsto Xe$ of $H_{[-1, 2m-1]}$ into $H_{[-1, 6m-3]}$
is injective, verifying condition (i) of Lemma \ref{basic}.
\begin{figure}
\caption{$eX = Xe$}
\label{fig:D44}
\end{figure}
Given $X = x^{-1} \otimes (x^{2m-1} \rtimes f^{2m} \rtimes \cdots \rtimes x^{6m-3})\in H_{-1} \otimes H_{[2m-1, 6m-3]}$,
the element $eXe$ is shown in Figure \ref{fig:D444}.
\begin{figure}
\caption{$eXe$}
\label{fig:D444}
\end{figure}
Repeated application of the relations (T), (C), and (A) reduces the element in Figure \ref{fig:D444} to that on the left
in Figure \ref{fig:pic29}
\begin{figure}
\caption{$eXe$}
\label{fig:pic29}
\end{figure} where $\alpha = \delta^{-2m} tr_{H_{[2m-1, 4m-4]}}(x^{2m-1} \rtimes f^{2m} \rtimes \cdots \rtimes f^{4m-4})$. Again, repeated application of the relations (E) and (A) and, finally, an application of the relation (T) reduce the element on the left in Figure \ref{fig:pic29} to that on the right in Figure \ref{fig:pic29}.
It follows from Lemma \ref{exp} that if $E$ denotes the trace-preserving conditional expectation of $H_{[-1, -1]} \otimes H_{[2m-1, 6m-3]}$ onto $H_{[-1, 2m-1]}$, then \begin{align*}
E(X) = \phi(Sx_2^{-1}x^{4m-3}) tr_{H_{[2m-1, 4m-4]}}(x^{2m-1} \rtimes \cdots \rtimes f^{4m-4}) \ x^{-1}_1 \rtimes f^{4m-2} \rtimes \cdots
\rtimes x^{6m-3}.
\end{align*} Now observe that $E(X)e$ equals the element as given by Figure \ref{fig:pic32} \begin{figure}
\caption{$E(X) e$}
\label{fig:pic32}
\end{figure} which, after a straightforward computation using relations (E) and (A), is easily seen to be equal to the element on the right in Figure \ref{fig:pic29}. Consequently, $eXe = E(X)e$, verifying condition (ii) of Lemma \ref{basic}.
In order to verify condition (iii) of Lemma \ref{basic}, we just need to show that
$(H_{[-1, -1]} \otimes H_{[2m-1, 6m-3]}) e (H_{[-1, -1]} \otimes H_{[2m-1, 6m-3]}) = H_{[-1, 6m-3]}$. Consider the elements $X, Y$ in $H_{[-1, -1]} \otimes H_{[2m-1, 6m-3]}$ given by
\begin{align*}
& X = x^{-1} \otimes (x^{2m-1} \rtimes f^{2m} \rtimes \cdots \rtimes x^{6m-3}),\\
& Y = 1 \otimes (y^{2m-1} \rtimes g^{2m} \rtimes \cdots \rtimes y^{4m-3} \rtimes
\underbrace{\epsilon \rtimes 1 \rtimes \cdots \rtimes \epsilon \rtimes 1}_\text{$2m$ \mbox{terms}}).
\end{align*}
Representing the element $XeY$ pictorially in $P(H)_{6m}$ one can easily see that $XeY$ equals
\begin{align*}
Z_T^{P(H)}(x^{-1}\otimes_{i=m}^{3m-2}(x^{2i-1} \otimes Ff^{2i}) \otimes x^{6m-3} \otimes_{i=m}^{2m-2}(y^{2i-1} \otimes Fg^{2i}) \otimes y^{4m-3})
\end{align*}
where $Z_T$ is the linear
isomorphism induced by the tangle $T \in {\mathcal{T}}{(6m)}$ as shown in Figure \ref{fig:pic33}. Thus,
we see that $(H_{[-1, -1]} \otimes H_{[2m-1, 6m-3]}) e (H_{[-1, -1]} \otimes H_{[2m-1, 6m-3]})$ contains the image of
$Z_T$. Then, by comparing dimensions, we conclude that
$ H_{[-1, 6m-3]} = (H_{[-1, -1]} \otimes H_{[2m-1, 6m-3]}) e (H_{[-1, -1]} \otimes H_{[2m-1, 6m-3]})$.
\begin{figure}
\caption{Tangle T}
\label{fig:pic33}
\end{figure}
Finally, a routine computation shows that for any $X \in H_{[-1, -1]} \otimes H_{[2m-1, 6m-3]}, \ tr(Xe) = \delta^{-2(2m-1)} tr(X)$,
so that $tr_{H_{[-1, -1]} \otimes H_{[2m-1, 6m-3]}}$ is a Markov trace of modulus $\delta^{2(2m-1)}$ for the inclusion $H_{[-1, 2m-1]} \subset H_{[-1, -1]} \otimes H_{[2m-1, 6m-3]}$, completing the proof.
\end{itemize}
\end{proof}
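We note in passing that, concretely, the Markov property in the above proposition amounts to the relation $tr(xe) = \delta^{-2p} \ tr(x)$ for all $x$ in the larger algebra of the relevant inclusion, $e$ being the associated Jones projection; this is exactly what was verified (for $l = 1$, $s = 0$ and $p = 2m-1$) at the end of the proof above.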
\subsection{Jones' basic construction tower of $\mathcal{N}^m \subset \mathcal{M}$ and relative commutants}
Throughout this subsection, $m > 2$ denotes a fixed positive integer.
The goal of this subsection is to explicitly determine the basic construction tower of $\mathcal{N}^m \subset \mathcal{M}$.
We set $A_{0, 0} = \mathbb{C}, A_{0, 1} = H_{[0, m-1]}, A_{1, 0} = H_{-1} \otimes H_m$ or $H_{[-2, -1]} \otimes H_m$ according as $m$ is even or odd and $A_{1, 1} = H_{[-1, m]}$ or $H_{[-2, m]}$ according as $m$ is even or odd. It follows from \cite[Lemma 23]{Sde2018} that the square in Figure \ref{fig:pic37} is a symmetric commuting square with respect to $tr_{A_{1, 1}}$ which is a Markov trace for the inclusion $A_{0, 1} \subset A_{1, 1}$. Further, here all the inclusions are connected since the lower left corner is $\mathbb{C}$ while the upper right corner is a matrix algebra by Lemma \ref{matrixalg}. \begin{figure}
\caption{Commuting square}
\label{fig:pic37}
\end{figure}
For $k \geq 2$, we set
\begin{align*}
A_{k, 1} = H_{[-k, m+k-1]} \ \mbox{or} \ H_{[-2k, m+k-1]} \ \mbox{according as} \ m \ \mbox{is even or odd}.
\end{align*}
It is then a consequence of \cite[Proposition 22(1)(i)]{Sde2018} that
$A_{0, 1} \subset A_{1, 1} \subset A_{2, 1} \subset A_{3, 1} \subset \cdots$ is the basic construction tower associated to
the initial inclusion $A_{0, 1} \subset A_{1, 1}$ and for any $k \geq 0$, if $e^{\prime}_{k+2}$ denotes the Jones projection lying in
$A_{k+2, 1}$ for the basic construction of $A_{k, 1} \subset A_{k+1, 1}$, then $e^{\prime}_{k+2}$ is given by Figure \ref{fig:pic100}.
\begin{figure}
\caption{$e^{\prime}_{k+2} :$ with $m$ even (left) and with $m$ odd (right)}
\label{fig:pic100}
\end{figure}
Further, we define inductively
\begin{align*}
A_{k+2, 0} = \langle A_{k+1, 0}, e^{\prime}_{k+2} \rangle
\end{align*}
for each $k \geq 0$.
It is well-known that $A_{0, 0} \subset A_{1, 0} \overset{e^{\prime}_2} {\subset} A_{2, 0} \overset{e^{\prime}_3} {\subset}
A_{3, 0} \overset{e^{\prime}_4} \subset \cdots$
is the basic construction tower of $A_{0, 0} \subset A_{1, 0}$. Proceeding along the same line as in the proof of \cite[Lemma 24]{Sde2018},
one can show that:
\begin{lemma}For any $k > 0$,
\begin{align*}
A_{k, 0} =
\begin{cases}
H_{[-2k, -1]} \otimes H_{[m, m+k-1]}, & \mbox{if} \ m \ \mbox{is odd},\\
H_{[-k, -1]} \otimes H_{[m, m+k-1]}, & \mbox{if} \ m \ \mbox{is even}.\\
\end{cases}
\end{align*}
\end{lemma}
At this point we need to recall from \cite{KdySnd2008} the notion of finite pre-von Neumann algebras. By a finite pre-von Neumann algebra, we will mean a pair $(A, \tau)$ consisting of a complex $*$-algebra $A$ that is equipped with a normalised trace $\tau$ such that (i) the sesquilinear form defined by $\langle a, b \rangle = \tau(b^* a)$ defines an inner product on $A$ and such that (ii) for each $a \in A$, the
left-multiplication map $\lambda_A(a) : A \longrightarrow A$ is bounded for the trace-induced norm of $A$.
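For example, any unital $*$-algebra which is an increasing union of finite-dimensional $C^*$-algebras, equipped with a consistent faithful tracial state $\tau$, is a finite pre-von Neumann algebra: condition (i) follows from faithfulness of $\tau$, while condition (ii) follows from the inequality $\tau(b^* a^* a b) \leq \|a\|^2 \tau(b^* b)$, valid in any $C^*$-algebra. The finite pre-von Neumann algebras appearing below (the increasing unions $\cup_{k} A_{k, n}$) are all of this form.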
By a compatible pair of finite pre-von Neumann algebras, we will mean a pair $(A, \tau_A )$ and $(B, \tau_B)$ of finite pre-von Neumann algebras such that $A \subseteq B$ and ${\tau_B|}_{A} = \tau_A$.
If $A$ is a finite pre-von Neumann algebra with trace $\tau_A$, the symbol $L^2(A)$ will always denote the Hilbert space completion of $A$ for the associated norm. Obviously, the left regular representation $\lambda_A : A \rightarrow \mathcal{L}(L^2(A))$ is well-defined, i.e., for each $a \in A, \lambda_A(a) : A \rightarrow A$ extends to a bounded operator on $L^2(A)$. The notation $A^{\prime \prime}$ will always denote the von Neumann algebra $(\lambda_A(A))^{\prime \prime} \subset \mathcal{L}(L^2(A))$. The following lemma (a reformulation of \cite[Proposition 4.6(1)]{KdySnd2008})
will be useful. \begin{lemma}\label{pre}\cite[Proposition 4.6(1)]{KdySnd2008} Let $(A, \tau_A )$ and $(B, \tau_B)$ be a compatible pair of finite pre-von Neumann algebras. The inclusion $A \subseteq B$ extends uniquely to a normal inclusion of $A^{\prime \prime}$ into $B^{\prime \prime}$ with image $(\lambda_B(A))^{\prime \prime}$. \end{lemma}
Note that $\cup_{k = 0}^{\infty} A_{k, 0} ( = H_{(-\infty, -1]} \otimes H_{[m, \infty)})$ and $\cup_{k = 0}^{\infty} A_{k, 1}
( = H_{(-\infty, \infty)})$ are finite pre-von Neumann algebras and $\cup_{k = 0}^{\infty} A_{k, 0} \subset \cup_{k = 0}^{\infty} A_{k, 1}$
is a compatible pair so that by Lemma \ref{pre} the inclusion $\cup_{k = 0}^{\infty} A_{k, 0} \subset \cup_{k = 0}^{\infty} A_{k, 1}$
extends uniquely to a normal inclusion $(\cup_{k = 0}^{\infty} A_{k, 0})^{\prime \prime} \subset
(\cup_{k = 0}^{\infty} A_{k, 1})^{\prime \prime}$. It follows from Definition \ref{def1} that
\begin{align*}
(\cup_{k = 0}^{\infty} A_{k, 0})^{\prime \prime} = (H_{(-\infty, -1]} \otimes H_{[m, \infty)})^{\prime \prime} = \mathcal{N}^m \ \mbox{and} \
(\cup_{k = 0}^{\infty} A_{k, 1})^{\prime \prime} = (H_{(-\infty, \infty)})^{\prime \prime} = \mathcal{M}.
\end{align*}
Thus we have proved that:
\begin{lemma}\label{hyper}
$\mathcal{N}^m$ and $\mathcal{M}$ are hyperfinite $II_1$ factors.
\end{lemma}
The following lemma shows that $\mathcal{N}^m \subset \mathcal{M}$ is of finite index
equal to $\delta^{2m}$.
\begin{lemma}\label{index}
$[\mathcal{M} : \mathcal{N}^m] = \delta^{2m}$.
\end{lemma}
\begin{proof}
It is well-known that (see \cite[Corollary 5.7.4]{JnsSnd1997}) $[\mathcal{M} : \mathcal{N}^m]$ equals the square of the norm of the
inclusion matrix for
$A_{0, 0} \subset A_{0, 1}$ which further equals the modulus of the Markov trace $tr_{A_{0, 1}}$ for the inclusion
$A_{0, 0} ( = \mathbb{C}) \subset A_{0, 1} ( = H_{[0, m -1]})$ which, again, by an application of \cite[Proposition 22(1)(ii)]{Sde2018},
equals $\delta^{2m}$.
\end{proof}
For each $k \geq 0$ and $n \geq 2$, we now define a finite-dimensional $C^*$-algebra, denoted $A_{k, n}$, as follows.
\begin{itemize}
\item \textit{Case (i): $m$ is odd}
\begin{align*}
A_{k, n} =
\begin{cases}
H_{[-2k, -1]} \otimes H_{[m, (n+1)m +k - 1]}, & \mbox{if} \ n \ \mbox{is even and} \ k > 0, \\
H_{[-2k, nm +k - 1]}, & \mbox{if} \ n \ \mbox{is odd and} \ k > 0, \\
H_{[-(n-1)m, m-1]}, & \mbox{if} \ k=0. \\
\end{cases}
\end{align*}
\item \textit{Case (ii): $m$ is even}
\begin{align*}
A_{k, n} =
\begin{cases}
H_{[-k, -1]} \otimes H_{[m, (n+1)m +k - 1]}, & \mbox{if} \ n \ \mbox{is even and} \ k > 0, \\
H_{[-k, nm +k - 1]}, & \mbox{if} \ n \ \mbox{is odd and} \ k > 0, \\
H_{[-(n-1)m, m-1]}, & \mbox{if} \ k=0. \\
\end{cases}
\end{align*}
\end{itemize}
We have already seen that, for any $k \geq 0$, the inclusion of $A_{k, 0}$ inside $A_{k, 1}$ and that
of $A_{k, n}$ inside $A_{k+1, n}$ for $n = 0, 1$ are natural. We describe below the embedding of $A_{k, n}$ inside $A_{k+1, n}$
for any $k \geq 0, n \geq 2$ and that of $A_{k, n}$ inside $A_{k, n+1}$ for any $k \geq 0, n \geq 1$.
\begin{itemize}
\item For any $n \geq 2, k > 0, A_{k, n}$ sits inside $A_{k+1, n}$ in the natural way.
\item If $n \geq 1$ is even and $k > 0$, then $A_{k, n}$ sits inside $A_{k, n+1}$ in the natural way.
\item If $n > 0$ is odd and $k > 0$, the embedding of $A_{k, n}$ inside $A_{k, n+1}$ is given by $\psi_{l, s, p}$ as
defined in the statement of Lemma \ref{exp} with $p = m, s = (n-1)m+k-1$, and $l = 2k$ or $k$ according as $m$ is odd or even.
\item If $n \geq 2$ is odd, then $A_{0, n}$ is identified with the subalgebra $H_{[0, mn-1]}$ of $A_{1, n}$.
\item If $n \geq 2$ is even, then $A_{0, n}$ is identified with the subalgebra $H_{[m, (n+1)m - 1]}$ of
$A_{1, n}$.
\item The embedding of $A_{0, n}$ inside $A_{0, n+1}$ is natural for all $n \geq 1$.
\end{itemize}
Thus, we have a grid $\{ A_{k, n} : k, n \geq 0 \}$ of finite-dimensional $C^*$-algebras.
The following remark contains several useful facts concerning the grid $\{ A_{k, n} : k, n \geq 0 \}$.
\begin{remark}\label{B}
\begin{itemize}
\item[(i)] We have already seen that the square of finite-dimensional $C^*$-algebras as shown
in Figure \ref{fig:pic37} is a symmetric commuting square with respect to $tr_{A_{1, 1}}$, which is a Markov trace
for the inclusion $A_{0,1} \subset A_{1,1}$ and all the inclusions are connected. Further, by Lemmas \ref{hyper} and \ref{index},
$(\cup_{k=0}^{\infty} A_{k, 0})^{\prime \prime} ( = \mathcal{N}^m)$ as well as $(\cup_{k=0}^{\infty} A_{k, 1})^{\prime \prime}
( = \mathcal{M})$
are hyperfinite $II_1$ factors with $[\mathcal{M} : \mathcal{N}^m] = \delta^{2m}$.
\item[(ii)]It follows from the embedding prescriptions that the following diagram (see Figure \ref{fig:cd}) commutes for all $k, n \geq 0$.
\begin{figure}
\caption{Commutative diagram}
\label{fig:cd}
\end{figure}
\item[(iii)]It is a direct consequence of \cite[Proposition 22]{Sde2018} that
for any $k, n \geq 0, A_{k, n} \subset A_{k, n+1} \subset A_{k, n+2}$ is an instance of the basic construction
and further, $tr_{A_{k, n+1}}$ is a Markov trace of modulus $\delta^{2m}$ for the inclusion
$A_{k, n} \subset A_{k, n+1}$. Let $e_{k, n+2}^m$ ($k \geq 0, n \geq 0$) denote the Jones projection lying in $A_{k, n+2}$ associated to
the basic construction $A_{k, n} \subset A_{k, n+1}$.
\item[(iv)] For any $k \geq 0, n \geq 2$, the embedding of $A_{k, n}$ inside $A_{k+1, n}$ carries $e_{k, n}^m$ to $e_{k+1, n}^m$.
\end{itemize}
\end{remark}
Obviously for any $n \geq 0$, $\cup_{k = 0}^{\infty}A_{k, n}$ is a finite pre-von Neumann algebra.
Consider the tower of finite pre-von Neumann algebras
\begin{align*}
\cup_{k = 0}^{\infty}A_{k, 0} \subset \cup_{k = 0}^{\infty}A_{k, 1}
\subset \cup_{k = 0}^{\infty}A_{k, 2} \subset \cdots.
\end{align*} Observe that for any $n \geq 0, \
\cup_{k = 0}^{\infty}A_{k, n} \subset \cup_{k = 0}^{\infty}A_{k, n+1}$ is a compatible pair so that by Lemma \ref{pre},
the inclusion $\cup_{k = 0}^{\infty}A_{k, n} \subset \cup_{k = 0}^{\infty}A_{k, n+1}$ extends uniquely to a normal inclusion
$({\cup_{k = 0}^{\infty}A_{k, n}})^{\prime \prime} \subset ({\cup_{k = 0}^{\infty}A_{k, n+1}})^{\prime \prime}$. Note also that
$\cup_{k = 0}^{\infty} A_{k, n} = H_{(-\infty, -1]} \otimes H_{[m, \infty)} \ \mbox{or} \
H_{(-\infty, \infty)}$ according as $n$ is even or odd.
For each $n \geq 0$, we define $\mathcal{M}_n:= (\cup_{k = 0}^{\infty}A_{k, n})^{\prime \prime}$. Then $\mathcal{M}_n = (H_{(-\infty, -1]}
\otimes H_{[m, \infty)})^{\prime \prime} \ \mbox{or} \ H^{\prime \prime}_{(-\infty, \infty)}$ according as $n$ is even or odd.
In view of the facts concerning the grid $\{ A_{k, n} : k, n \geq 0 \}$ as mentioned in Remark \ref{B}, one can conclude that:
\begin{proposition}\label{towerbasic}
$\mathcal{M}_0 (=\mathcal{N}^m) \subset \mathcal{M}_1 (=\mathcal{M}) \subset \mathcal{M}_2 \subset \mathcal{M}_3 \subset \cdots$ is the
basic construction tower of $\mathcal{N}^m \subset \mathcal{M}$.
\end{proposition}
\subsection{Computation of the relative commutants.}
We now proceed to compute the relative commutants. By virtue of Ocneanu's compactness theorem (see \cite[Theorem 5.7.6]{JnsSnd1997}), the relative commutant $(\mathcal{N}^m)^{\prime} \cap \mathcal{M}_k$ ($k > 0$) is given by \begin{align*}
(\mathcal{N}^m)^{\prime} \cap \mathcal{M}_k = A_{0,k} \cap (A_{1,0})^{\prime}, \ k \geq 1. \end{align*}
The following proposition describes the spaces $ A_{0, k} \cap (A_{1, 0})^{\prime}, k \geq 1$.
Its proof is similar to that of \cite[Proposition 29]{Sde2018} and is therefore omitted.
\begin{proposition}\label{rel comm}
Let $k \geq 1$ be an integer and set
\begin{align*}
&\tilde{Q}^m_{2k} = \{X \in H_{[m, (2k+1)m-2]} :
X \ \mbox{commutes with} \ \Delta_{k-1}(x) \in \otimes_{i = 1}^{k} H_{2im-1}, \forall x \in H \},\\
&\tilde{Q}^m_{2k-1} = \{X \in H_{[0, (2k-1)m-2]} : X \ \mbox{commutes with} \ \Delta_{k-1}(x) \in
\otimes_{i = 0}^{k-1} H_{2im-1}, \forall x \in H\}.
\end{align*}
Then,
$A_{0, 2k} \cap (A_{1, 0})^{\prime} \cong \tilde{Q}^m_{2k}$ and $A_{0, 2k-1} \cap (A_{1, 0})^{\prime} \cong
\tilde{Q}^m_{2k-1}$.
\end{proposition}
It follows from Remark \ref{B}(iii) that the Jones projection lying in $(\mathcal{N}^m)^{\prime} \cap \mathcal{M}_{n+2} =
A_{0, n+2} \cap (A_{1, 0})^{\prime}$ ($n \geq 0$)
is given by $e_{0, n+2}^m$ (see Figure \ref{fig:newpic73}), which, under the identification of
$A_{0, n+2} \cap (A_{1, 0})^{\prime}$ with $\tilde{Q}_{n+2}^m$ as given by Proposition \ref{rel comm}, is easily seen to be identified with the
projection $\tilde{e}_{n+2}^m$ in $\tilde{Q}_{n+2}^m$ as shown on the right in Figure \ref{fig:newpic73}.
\begin{figure}
\caption{$e_{0, n+2}^m$ (left) and $\tilde{e}_{n+2}^m$ (right), $n \geq 0$}
\label{fig:newpic73}
\end{figure} \begin{remark}
It is worth describing the embedding of $\tilde{Q}^m_k$ inside $\tilde{Q}^m_{k+1}$ ($k \geq 1$).
It follows easily from the embedding formulae of $A_{1, k}$ inside $A_{1, k+1}$ and $A_{0, k}$ (resp., $A_{0, k+1}$) inside
$A_{1, k}$ (resp., $A_{1, k+1}$) and Proposition \ref{rel comm} that given
$X \in \tilde{Q}^m_k$,
it sits inside $\tilde{Q}^m_{k+1}$ as
\begin{align*}
\underbrace{\epsilon \rtimes 1 \rtimes \cdots \rtimes 1}_{\text{$m$ factors}} \rtimes X, \ \mbox{if} \ m \ \mbox{is even}
\end{align*}
and if $m$ is odd, then the image of $X \in \tilde{Q}^m_k$ inside $\tilde{Q}^m_{k+1}$ is given by
\begin{align*}
\underbrace{\epsilon \rtimes 1 \rtimes \cdots \rtimes \epsilon}_{\text{$m$ factors}} \rtimes X \ \ \mbox{or} \ \
\underbrace{1 \rtimes \epsilon \rtimes \cdots \rtimes 1}_{\text{$m$ factors}} \rtimes X
\end{align*}
according as $k$ is even or odd. Also, the diagram in Figure \ref{fig:nnpic73} commutes,
where each horizontal arrow denotes the corresponding $*$-isomorphism.
\begin{figure}
\caption{A commutative diagram}
\label{fig:nnpic73}
\end{figure}
\end{remark}
For each integer $n \geq 1$, we define a subspace $Q^m_n$ of $H_{[1, mn-1]}$ or $H_{[0, mn-2]}$ according as $m$ is odd or even as follows:
\begin{itemize}
\item \textit{Case (i): $m$ is odd}
\begin{align*}
Q^m_{n} := \{ & X \in H_{[1, mn-1]} : X \leftrightarrow \Delta_{k-1}(x) \in \otimes_{i = 1}^k H_{m(2i-1)},
\forall x \in H \text{ where } \\
&k = \frac{n}{2} \ \mbox{if} \ n \ \mbox{is even or} \ \frac{n+1}{2} \ \mbox{if} \ n \ \mbox{is odd} \}
\end{align*}
\item \textit{Case (ii): $m$ is even}
\begin{align*}
Q^m_{n} := \{ & X \in H_{[0,mn-2]} : X \leftrightarrow \Delta_{k-1}(x) \in \otimes_{i = 1}^k H_{m(2i-1)-1}, \forall x \in H \text{ where } \\
&k = \frac{n}{2} \ \mbox{if} \ n \ \mbox{is even or} \ \frac{n+1}{2} \ \mbox{if} \ n \ \mbox{is odd} \}
\end{align*}
\end{itemize}
It is an immediate consequence of Lemma \ref{anti1} that, for any $n \geq 1$, $\tilde{Q}^m_n$ is $*$-anti-isomorphic to
$Q^m_n$; let $\gamma_n^m: \tilde{Q}_n^m \rightarrow Q_n^m$ denote this anti-isomorphism. We then have the following commutative diagram.
\begin{figure}
\caption{Commutative diagram}
\label{fig:D8}
\end{figure}
Further, if $e_n^m \in Q_n^m (n \geq 2)$ denotes the projection which is the image of $\tilde{e}_n^m \in \tilde{Q}_n^m$ under $\gamma_n^m$,
it is then not hard to see that $e_n^m$ is given by Figure \ref{fig:pic77}.
\begin{figure}
\caption{$e_n^m$}
\label{fig:pic77}
\end{figure}
Obviously, the identity map of $Q_n^m$ onto its opposite algebra, denoted $Q_n^{m^{op}}$, is a $*$-anti-isomorphism.
For each $n \geq 1$, let $\Psi_n^m: (\mathcal{N}^m)^{\prime} \cap \mathcal{M}_n \rightarrow$ $Q_n^{m^{op}}$ denote the following composite map:
\begin{align*}
(\mathcal{N}^m)^{\prime} \cap \mathcal{M}_n \xrightarrow{*-\mbox{isom}} \tilde{Q}_n^m \xrightarrow{\gamma_n^m \ (*-\mbox{anti-isom})}
Q_n^m \xrightarrow{\mbox{Identity}} Q_n^{m^{op}}.
\end{align*}
Obviously $\Psi_n^m$ is a $*$-isomorphism for each $n \geq 1$ and, for $n \geq 2$, it carries $e_{0, n}^m$ to $e_n^m$. The commutative
diagrams in Figures \ref{fig:nnpic73} and \ref{fig:D8} together imply commutativity of the diagram
in Figure \ref{fig:D9}. \begin{figure}
\caption{Commutative diagram}
\label{fig:D9}
\end{figure}
It will also be useful to identify the spaces $\mathcal{M}^{\prime} \cap \mathcal{M}_n$ ($n \geq 2$). Once again applying Ocneanu's compactness theorem, we obtain that $\mathcal{M}^{\prime} \cap \mathcal{M}_n = A_{0, n} \cap (A_{1, 1})^{\prime}$ for $n \geq 2$. Proceeding along the same line of argument as in the proof of Proposition \ref{rel comm}, one can show that: \begin{lemma}
If $n$ is even, then $A_{0, n} \cap (A_{1, 1})^{\prime}$ can be identified with
\begin{align*}
\{X \in H_{[m, mn-2]}: X \ \mbox{commutes with} \ \Delta_{k-1}(x) \in \otimes_{i=1}^k H_{2mi-1}, \forall x \in H, \ \mbox{where} \ k = \frac{n}{2} \},
\end{align*} and if $n$ is odd, then $A_{0, n} \cap (A_{1, 1})^{\prime}$ can be identified with
\begin{align*}
\{X \in H_{[0, m(n-1)-2]}: X \ \mbox{commutes with} \ \Delta_{k}(x) \in \otimes_{i=0}^k H_{2mi-1}, \forall x \in H, \ \mbox{where} \ k = \frac{n-1}{2} \}.
\end{align*}
\end{lemma}
As an immediate consequence of this lemma, we obtain that:
\begin{lemma}\label{second} The $*$-isomorphism $\Psi_n^m$ of $(\mathcal{N}^m)^{\prime} \cap \mathcal{M}_n$ onto $Q_n^{m^{op}}$ carries $\mathcal{M}^{\prime} \cap \mathcal{M}_n (n \geq 2)$ onto the subspace of $Q_n^{m^{op}}$ given by \begin{itemize}
\item[(i)] $m$ is odd:
\begin{align*}
\{X \in H^{op}_{[m+1, mn-1]} : X \ \mbox{commutes with} \ \Delta_k(x) \in \otimes_{i=1}^k H_{(2i-1)m}, \forall x \in H\},
\end{align*}
\item[(ii)] $m$ is even:
\begin{align*}
\{X \in H^{op}_{[m, mn-2]} : X \ \mbox{commutes with} \ \Delta_k(x) \in \otimes_{i=1}^k H_{(2i-1)m-1}, \forall x \in H\},
\end{align*}
where, in either case, $k = \frac{n}{2}$ or $\frac{n+1}{2}$ according as $n$ is even or odd. \end{itemize} \end{lemma} In the next lemma we consider the question of irreducibility of $\mathcal{N}^m \subset \mathcal{M}$ for $m > 2$. \begin{lemma}\label{irr}
$\mathcal{N}^m \subset \mathcal{M}$ is reducible for all $m > 2$.
\end{lemma}
\begin{proof}
Applying Lemma \ref{commutants}, one can easily observe that for $m > 2$, $Q^m_1 = H_{[1, m-2]}$ or
$H_{[0, m-3]}$ according as $m$ is odd or even and consequently, $\mathcal{N}^m \subset \mathcal{M}$ is not irreducible.
\end{proof}
\section{Planar algebra of $\mathcal{N}^m \subset \mathcal{M} (m > 2)$}
Let $m > 2$ be an integer. In this section we explicitly describe the subfactor planar algebra associated to the subfactor
$\mathcal{N}^m \subset \mathcal{M}$
which turns out to be a planar subalgebra of $^{(m)*}\!P(H^m)$.
For each $n \geq 1$, consider the linear map $\alpha^{m, n} : H \rightarrow End(P(H^m)_{mn})$ defined for $x \in H$ and
$X \in P(H^m)_{mn}$ by Figure \ref{fig:D10new} where the notation $\alpha^{m, n}_x$ stands for
$\alpha^{m, n}(x)$.
\begin{figure}
\caption{$\alpha^{m, k}_x(X)$, $m$ odd (Left) and $\alpha^{m, k}_x(X)$, $m$ even (Right)}
\label{fig:D10new}
\end{figure}
With the help of the maps $\alpha^{m, n}$ defined above we give an equivalent description of the spaces $Q^m_n$.
\begin{proposition}\label{descrip}
For any $k \geq 1$,
\begin{align*}
Q_k^m = \{X \in P(H^m)_{mk} : \alpha^{m, k}_h(X) = X \}.
\end{align*}
\end{proposition}
Before we proceed to prove Proposition \ref{descrip}, we pause for a simple Hopf algebraic lemma whose proof is similar to that of
\cite[Lemma 34]{Sde2018} and is hence omitted.
\begin{lemma}\label{Hopf}
Let $k \geq 1$ be an integer.
\begin{itemize}
\item[(a)] If $m$ is odd, then for $X \in H_{[1, m(2k-1) - 1]}$, the following are equivalent:
\begin{itemize}
\item[(i)] $X \rtimes 1$ commutes with $\Delta_{k-1}(x) \in \otimes_{i = 1}^{k} H_{m(2i-1)}, \forall x \in H$,
\item[(ii)]$\Delta_{k-1}(h_1)(X \rtimes 1) \Delta_{k-1}(Sh_2) = X \rtimes 1$, where
$\Delta_{k-1}(h_1) \otimes \Delta_{k-1}(Sh_2) \in (\otimes_{i = 1}^k H_{m(2i-1)})^{\otimes 2}$.
\end{itemize}
\item[(b)] If $m$ is odd, then for $X \in H_{[1, 2mk - 1]}$, the following are equivalent:
\begin{itemize}
\item[(i)] $X$ commutes with $\Delta_{k-1}(x) \in \otimes_{i = 1}^{k} H_{m(2i-1)}, \forall x \in H$,
\item[(ii)]$\Delta_{k-1}(h_1) X \Delta_{k-1}(Sh_2) = X$, where
$\Delta_{k-1}(h_1) \otimes \Delta_{k-1}(Sh_2) \in (\otimes_{i = 1}^k H_{m(2i-1)})^{\otimes 2}$.
\end{itemize}
\item[(c)] If $m$ is even, then for $X \in H_{[0, m(2k-1) - 2]}$, the following are equivalent:
\begin{itemize}
\item[(i)] $X \rtimes 1$ commutes with $\Delta_{k-1}(x) \in \otimes_{i = 1}^{k} H_{m(2i-1)-1}, \forall x \in H$,
\item[(ii)]$\Delta_{k-1}(h_1)(X \rtimes 1) \Delta_{k-1}(Sh_2) = X \rtimes 1$, where
$\Delta_{k-1}(h_1) \otimes \Delta_{k-1}(Sh_2) \in (\otimes_{i = 1}^k H_{m(2i-1)-1})^{\otimes 2}.$
\end{itemize}
\item[(d)] If $m$ is even, then for $X \in H_{[0, 2mk - 2]}$, the following are equivalent:
\begin{itemize}
\item[(i)] $X$ commutes with $\Delta_{k-1}(x) \in \otimes_{i = 1}^{k} H_{m(2i-1)-1}, \forall x \in H$,
\item[(ii)]$\Delta_{k-1}(h_1) X \Delta_{k-1}(Sh_2) = X$, where
$\Delta_{k-1}(h_1) \otimes \Delta_{k-1}(Sh_2) \in (\otimes_{i = 1}^k H_{m(2i-1)-1})^{\otimes 2}.$
\end{itemize}
\end{itemize}
\end{lemma}
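We remark that all four equivalences are instances of one standard fact: for the adjoint action of $H$ implemented by $\Delta_{k-1}$, the averaging operator $X \mapsto \Delta_{k-1}(h_1) X \Delta_{k-1}(Sh_2)$ is a projection onto the invariants of the action (as $h$ is the normalised Haar integral), and the invariants for an adjoint action are precisely the elements commuting with the image of $H$; condition (ii) in each case says exactly that $X$ (or $X \rtimes 1$) is fixed by this averaging.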
We are now ready to prove Proposition \ref{descrip}.
\begin{proof}[Proof of Proposition \ref{descrip}]
When $m > 2$ is even, the proof of the proposition is similar to that of \cite[Proposition 33]{Sde2018}. Thus, we prove the
proposition only when $m > 2$ is odd, leaving the other case for the reader.
It is an immediate consequence of Lemma \ref{Hopf}(a) that the space $Q^m_{2k-1}$ can equivalently be described as
\begin{align*}
Q^m_{2k-1} = \{ X \in H_{[1, m(2k-1)-1]}: \Delta_{k-1}(h_1) (X \rtimes 1) \Delta_{k-1}(Sh_2) = X \rtimes 1 \}
\end{align*}
where $\Delta_{k-1}(h_1) \otimes \Delta_{k-1}(h_2) \in (\otimes_{i=1}^k H_{m(2i-1)})^{\otimes 2}$.
Interpreting this equivalent description of $Q^m_{2k-1}$ in the language of the planar algebra of $H$,
we note that $Q^m_{2k-1}$ consists of
precisely those elements $X \in P(H)_{m(2k-1)}$ such that the equation of Figure \ref{fig:pic41} holds.
Now applying the conditional expectation tangle $E^{m(2k-1)}_{m(2k-1)+1}$, we reduce the element on the left in Figure \ref{fig:pic41}
to that on the
left in Figure \ref{fig:pic75}. On the other hand an application of the conditional expectation tangle $E^{m(2k-1)}_{m(2k-1)+1}$
to the element on the right
in Figure \ref{fig:pic41} and then an appeal to the modulus relation reduces the element on the right in Figure \ref{fig:pic41}
to $\delta X$ as shown on the right in Figure \ref{fig:pic75}. Now applying the exchange relation first and then the modulus relation,
one can easily see that
the element on the left in Figure \ref{fig:pic75} indeed equals $\delta \alpha^{m, 2k-1}_h(X)$
and the desired description of $Q^m_{2k-1}$ follows.
Similarly, it follows immediately from Lemma \ref{Hopf}(b) that the space $Q^m_{2k}$ can equivalently be described as
\begin{align*}
Q^m_{2k} = \{ X \in H_{[1, 2km-1]}: \Delta_{k-1}(h_1) X \Delta_{k-1}(Sh_2) = X \}
\end{align*}
where $\Delta_{k-1}(h_1) \otimes \Delta_{k-1}(h_2) \in (\otimes_{i=1}^k H_{m(2i-1)})^{\otimes 2}$.
Now the desired description of $Q^m_{2k}$ follows at once from the definition of $\alpha^{m, 2k}_h(X)$ and by interpreting this
equivalent
description of $Q^m_{2k}$ in the language of $P(H)$,
completing the proof.
\begin{figure}
\caption{Characterisation of $X$}
\label{fig:pic41}
\end{figure}
\begin{figure}
\caption{Equivalent characterisation of $X$}
\label{fig:pic75}
\end{figure} \end{proof}
Thus for each $m > 2$, we have a family $\{Q^m_n : n \geq 1 \}$ of vector spaces where for $n \geq 1, Q^m_n$
is a subspace of $P(H^m)_{mn} = ^{(m)}\!\!\!P(H^m)_{n}$. Setting $Q^m_{0, \pm} = \mathbb{C}$, we note that
$Q^m := \{Q_n^m: n \in Col \}$ is a subspace of $^{(m)}\!P(H^m)$. The following proposition shows that $Q^m$ is indeed a planar
subalgebra of $^{(m)}\!P(H^m)$.
\begin{proposition}\label{planar}
For $m > 2$, $Q^m$ is a planar subalgebra of $^{(m)}\!P(H^m)$.
\end{proposition}
\begin{proof}
By an appeal to \cite[Theorem 3.5]{KdySnd2004}, it suffices to prove that $Q^{m}$ is closed under the action of the following set of tangles
\begin{align*}
\{1^{0,+}, 1^{0, -}\} \cup \{R_k^k:k \geq 2\} \cup \{M_{k, k}^k, E^k_{k+1}, I_k^{k+1}: k \in Col \}
\end{align*}
where we refer to \cite[Figures 2, 3 and 5]{Sde2018} for the definition of tangles $M_{k, k}^k, E^k_{k+1}, I_k^{k+1},
1^{0,+}$ and $1^{0, -}$.
When $m$ is even, the proof of the proposition is similar to that of \cite[Proposition 35]{Sde2018}. Thus we prove the result only when $m$
is odd.
It is easy to see that $Q^m$ is closed under the action of the tangles $1^{0, \pm}$ and $M_{k, k}^k, I_k^{k+1} (k \in Col)$.
To see that $Q^m$ is closed under the action of the rotation tangle $R_k^k \ (k \geq 2)$, we note that for any
$X \in Q^m_k \ (k \geq 2)$, we have
\begin{align*}
Z^{^{(m)}\!P(H)}_{R_k^k}(X) = Z^{^{(m)}\!P(H)}_{R_k^k}(\alpha^{m, k}_h(X)) = \alpha^{m, k}_h( Z^{^{(m)}\!P(H)}_{R_k^k}(X))
\end{align*}
where the first equality follows from the fact that $\alpha^{m, k}_h(X) = X$, while the second follows from the Hopf algebra
identity $h_1 \otimes h_2 \otimes \cdots \otimes h_l = h_2 \otimes h_3 \otimes
\cdots \otimes h_l \otimes h_1$ ($l \geq 2$), which in turn follows from $h_1 \otimes h_2 = h_2 \otimes h_1$ (an identity which essentially
expresses traciality of $h$).
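For instance, for $l = 3$, coassociativity gives
\begin{align*}
h_1 \otimes h_2 \otimes h_3 = (\Delta \otimes id)(h_1 \otimes h_2) = (\Delta \otimes id)(h_2 \otimes h_1) = h_2 \otimes h_3 \otimes h_1,
\end{align*}
and the general case follows by induction on $l$.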
Verifying that $Q^m$ is closed under the action of $E^k_{k+1} (k \geq 1)$ amounts to verification of the
following identity
\begin{align*}
Z_{E^k_{k+1}}^{^{(m)}\!P(H)}(X) = \alpha^{m, k}_h(Z_{E^k_{k+1}}^{^{(m)}\!P(H)}(X))
\end{align*}
for any $X \in Q^m_{k+1}$.
Note that since $\alpha^{m, k+1}_h(X) = X$, we have that
\begin{align*}
Z_{E^k_{k+1}}^{^{(m)}\!P(H)}(X) = Z_{E^k_{k+1}}^{^{(m)}\!P(H)}(\alpha^{m, k+1}_h(X))
= Z_{{E^k_{k+1}}^{(m)}}^{P(H)}(\alpha^{m, k+1}_h(X)).
\end{align*}
When $k$ is odd (resp., even), representing the element
$Z_{{E^k_{k+1}}^{(m)}}^{P(H)}(\alpha^{m, k+1}_h(X))$ pictorially in $P(H)$ and then applying relation (E) (resp., (C)) one can easily see
that
\begin{align*}
Z_{{E^k_{k+1}}^{(m)}}^{P(H)}(\alpha^{m, k+1}_h(X)) = \alpha^{m, k}_h(Z_{E^k_{k+1}}^{^{(m)}\!P(H)}(X)),
\end{align*}
finishing the proof.
\end{proof}
As an immediate corollary of Proposition \ref{planar} we obtain:
\begin{corollary} \label{planar1}
$^*\!Q^m$, the adjoint of $Q^m$, is a planar subalgebra of $^{*(m)}\!P(H^m)$.
\end{corollary}
Finally, an argument similar to that in the proof of \cite[Proposition 36]{Sde2018} shows that
$P^{\mathcal{N}^m \subset \mathcal{M}}$, the planar algebra associated to $\mathcal{N}^m \subset \mathcal{M}$,
is given by the adjoint of the planar algebra $Q^m$. Thus we have:
\begin{proposition}\label{pl alg}
$P^{\mathcal{N}^m \subset \mathcal{M}} = {^*\!Q^m}$, $m > 2$.
\end{proposition}
We collect the results of the previous statements into a single main theorem.
\begin{theorem}\label{maintheorem}
For any integer $m > 2$,
$^*\!Q^m$ is a planar subalgebra of $^{*(m)}\!P(H^m)$ and $^*\!Q^m = P^{\mathcal{N}^m \subset \mathcal{M}}$.
If $m$ is odd, $^*\!Q^m_k$ ($k \geq 1$) consists of all $X \in
P(H)_{mk}$ such that the element on the left in Figure \ref{fig:charac2} equals $X$
and if $m$ is even, $^*\!Q^m_k$ ($k \geq 1$) consists of all $X \in
P(H^*)_{mk}$ such that the element on the right in Figure \ref{fig:charac2} equals $X$.
\begin{figure}\label{fig:charac2}
\end{figure}
\end{theorem}
\begin{proof}
The theorem follows immediately from Proposition \ref{pl alg} after observing that $\alpha^{m, k}_h(X)$ for $m$ odd
(resp., even)
in Figure \ref{fig:D10new} is equivalent to the element on the left (resp., right)
in Figure \ref{fig:charac2}.
\end{proof}
\section{Depth of $\mathcal{N}^m \subset \mathcal{M}, m > 2$ }
In this section we investigate the depth of the subfactors $\mathcal{N}^m \subset \mathcal{M}$ for $m > 2$.
The main result of this section is contained in the following theorem.
\begin{theorem}\label{depth}
For $m > 2$, the subfactor $\mathcal{N}^m \subset \mathcal{M}$ is of depth 2.
\end{theorem}
By virtue of the commutative diagram in Figure \ref{fig:D9}, one can easily see that, for an integer $k \geq 1$,
$\mathcal{N}^{m} \subset \mathcal{M}$ has depth $k$ if and only if $k$ is the smallest positive integer such that $Q_{k-1}^{m^{op}} \subset Q_k^{m^{op}} \subset Q_{k+1}^{m^{op}}$ is an instance of the basic construction with the Jones projection $e^m_{k+1}$, which obviously is equivalent to saying that $Q^m_{k-1} \subset Q^m_k \subset Q^m_{k+1}$ is an instance of the basic construction with the same Jones projection. Thus, in order to prove Theorem \ref{depth}, it suffices to show that $2$ is the smallest positive integer such that $Q^m_1 \subset Q^m_2 \subset Q^m_3$ is an instance of the basic construction with the Jones projection $e^m_3$.
We find it necessary to explicitly know the elements of the space $Q^m_2$. The following lemma is the main step to this end.
\begin{lemma}\label{element}
Let $\mathcal{S}$ be the space defined by
\begin{align*}
\mathcal{S} = \{X \in H^* \rtimes H \rtimes H^* (\cong P(H^*)_4) :
X \ \mbox{commutes with} \ \epsilon \rtimes x \rtimes \epsilon, \forall x \in H \}.
\end{align*} Then $\mathcal{S}$ precisely consists of elements of the form $Z_A^{P(H^*)}(f_2 \otimes f_1 \otimes g)$
where $A \in {\mathcal{T}}{(4)}$ is the tangle as shown on the left in Figure \ref{fig:pic43} and $f \otimes g \in H^* \otimes H^*$.
Consequently, $\mathcal{S}$ has dimension $(dim ~H)^2$.
\end{lemma}
\begin{proof}
Let $X = Z_A^{P(H^*)}(f_2 \otimes f_1 \otimes g) \in P(H^*)_4$ where $f \otimes g \in H^* \otimes H^*$.
For any $t \in H,$ let $Y_t$ denote the image of $\epsilon\rtimes t \rtimes
\epsilon \in H^* \rtimes H \rtimes H^*$ in $P(H^*)_4$ under the algebra isomorphism between $H^* \rtimes H \rtimes H^*$ and $P(H^*)_4$ as
given by Lemma \ref{iso}, i.e., $Y_t = Z_{T^4}^{P(H^*)}(\epsilon \otimes Ft \otimes \epsilon)$ (see Figure \ref{fig:c9}) where
$T^4$ is the tangle of colour $4$ as shown on the right in Figure \ref{fig:pic599}.
Thus given $t \in H$, we need to show that $X$ commutes with $Y_t$. The element $Y_t X$ (resp., $X Y_t$) is shown in the middle (resp., on the right)
in Figure \ref{fig:c9}.
\begin{figure}
\caption{Elements: $Y_t$ (left), $Y_t X$ (middle) and $XY_t$ (right)}
\label{fig:c9}
\end{figure}
Set $k = Ft$. An application of the relations (E) and (T) shows
that $XY_t$ equals the element
$\delta (Sf_2 k)(h) Z_A^{P(H^*)}(f_3 \otimes f_1 \otimes g)$ whereas $Y_tX$ equals the element
$\delta(Sk_1 f_1)(h) Z_A^{P(H^*)}(f_2 \otimes k_2 \otimes g)$.
Since, by Lemma \ref{iso1},
$Z_A^{P(H^*)}$ is a linear isomorphism of $H^{*\otimes3}$ onto $P(H^*)_4$, in order to see that $XY_t = Y_tX$, it suffices to verify that
\begin{align*}
\delta (Sf_2 k)(h) f_3 \otimes f_1 \otimes g = \delta(Sk_1 f_1)(h) f_2 \otimes k_2 \otimes g.
\end{align*}
Evaluating the expression on the left-hand side on $a \otimes b \otimes c \in H \otimes H \otimes H$ we obtain
$f(b Sh_1 a) k(h_2) g(c)$ whereas evaluating the
expression on the right-hand side on the same element we obtain the value $k(Sh_1b) f(h_2a) g(c)$
which, using the Hopf-algebraic formula $Sh_1x \otimes h_2 = Sh_1 \otimes xh_2$ and the fact that $Sh = h$, equals
$f(b Sh_1 a) k(h_2) g(c)$.
Thus we see that
\begin{align*}
\{Z_A^{P(H^*)}(f_2 \otimes f_1 \otimes g) : f \otimes g \in H^{*\otimes 2} \} \subseteq \mathcal{S}
\end{align*}
and consequently the dimension of the space $\mathcal{S}$ is $\geq (dim ~H)^2$. To finish the proof we just need to see that the dimension of
$\mathcal{S}$ is $\leq (dim ~H)^2$. First observe that for any $X \in \mathcal{S}$,
\begin{align*}
X = X (\epsilon \rtimes h_1 \rtimes \epsilon) (\epsilon \rtimes Sh_2 \rtimes \epsilon) =
(\epsilon \rtimes h_1 \rtimes \epsilon) X (\epsilon \rtimes Sh_2 \rtimes \epsilon)
\end{align*}
and thus $\mathcal{S}$ is a subspace of
\begin{align*}
W = \{ X \in P(H^*)_4 : X = Y_{h_1} X Y_{Sh_2}\}.
\end{align*}
Thus it suffices to show that $dim ~W \leq (dim ~H)^2$. Let us consider the tangle $S \in {\mathcal{T}}{(4)}$ as given by Figure \ref{fig:pic15}.
\begin{figure}
\caption{Tangle $S$}
\label{fig:pic15}
\end{figure} Since $Z_S$ induces, by Lemma \ref{iso1}, a linear isomorphism of $H^{*\otimes 3}$ onto $P(H^*)_4$, we conclude that
$dim ~W = dim ~\{x \otimes y \otimes z \in H^{\otimes 3} : Z_S(Fx \otimes Fy \otimes Fz) = Y_{h_1} Z_S(Fx \otimes Fy \otimes Fz) Y_{Sh_2} \}.$
One can easily see that $Y_{h_1} Z_S(Fx \otimes Fy \otimes Fz) Y_{Sh_2} = Z_S(Fx \otimes F(h_1 y) \otimes F(z Sh_2))$ and consequently, $dim ~W = dim ~U$ where \begin{align*}
U = \{x \otimes y \otimes z \in H^{\otimes 3} : x \otimes y \otimes z = x \otimes h_1 y \otimes z Sh_2 \}. \end{align*} Thus it suffices to see that $dim ~U \leq (dim ~H)^2$. Note that the space $U$, using the Hopf-algebraic formula $h_1 a \otimes b Sh_2 = h_1 \otimes b a Sh_2$, can alternatively be described as \begin{align*}
U = \{x \otimes y \otimes z \in H^{\otimes 3} : x \otimes y \otimes z = x \otimes h_1 \otimes z y Sh_2 \}. \end{align*} Finally observe that $U$ is contained in the image of the injective linear map $\theta$ from $H \otimes H$ to $H \otimes H \otimes H$ given by $x \otimes y \mapsto x \otimes h_1 \otimes y Sh_2$; indeed, if $x \otimes y \otimes z \in U$, then clearly $x \otimes y \otimes z = \theta(x \otimes zy)$ and hence, $dim ~U \leq (dim ~H)^2$, finishing the proof. \end{proof} We now present a technical lemma that will be useful in order to precisely express the elements of $Q^m_2$, $m > 2$. \begin{lemma}\label{eq} The following equation holds in $P(H^*)_4$ for $f, g \in H^*$: \begin{align*}
Z_A^{P(H^*)}(f_2 \otimes f_1 \otimes g) = Z_{T^4}^{P(H^*)}(f_1Sg_3Sf_3 \otimes f_2g_2 \otimes Sg_1). \end{align*} \end{lemma} \begin{proof}
Left as an exercise. \end{proof} Consequently, the space $\mathcal{S}$ can equivalently be described as: \begin{corollary}\label{Eq}
$\mathcal{S} = \{ f_1Sg_3Sf_3 \rtimes F^{-1}(f_2g_2) \rtimes Sg_1 \in H^* \rtimes H \rtimes H^*: f \otimes g \in H^{*\otimes2} \}.$ \end{corollary} \begin{proof}
Follows immediately from Lemmas \ref{element} and \ref{eq}. \end{proof} \begin{corollary}\label{Eq1}
Let $m > 2$.
\begin{itemize}
\item [(i)] If $m$ is odd, then $Q^m_2$ consists of elements of the form
\begin{equation}\label{Eq2}
x^1 \rtimes f^2 \rtimes \cdots \rtimes x^{m-2} \rtimes g_1 Sk_3 Sg_3 \rtimes
F^{-1}(g_2 k_2) \rtimes Sk_1 \rtimes x^{m+2} \rtimes f^{m+3} \rtimes \cdots \rtimes x^{2m-1} \in A(H)_{2m-1}
\end{equation}
with $(x^1 \otimes \cdots \otimes
x^{m-2} \otimes x^{m+2} \otimes \cdots \otimes x^{2m-1}) \otimes (f^2 \otimes \cdots \otimes f^{m-3} \otimes g \otimes k \otimes f^{m+3}
\otimes \cdots \otimes f^{2m-2}) \in H^{\otimes(m-1)} \otimes H^{* \otimes (m-1)}$.
\item [(ii)] If $m$ is even, $Q^m_2$ consists of elements of the form
\begin{equation}\label{Eq3}
f^1 \rtimes x^2 \rtimes \cdots \rtimes x^{m-2} \rtimes g_1 Sk_3 Sg_3 \rtimes
F^{-1}(g_2 k_2) \rtimes Sk_1 \rtimes x^{m+2} \rtimes \cdots \rtimes f^{2m-1} \in A(H^*)_{2m-1}
\end{equation}
with $(f^1 \otimes \cdots \otimes f^{m-3} \otimes g \otimes k \otimes f^{m+3} \otimes \cdots \otimes f^{2m-1}) \otimes
(x^2 \otimes \cdots \otimes x^{m-2} \otimes
x^{m+2} \otimes \cdots \otimes x^{2m-2}) \in H^{* \otimes m} \otimes H^{\otimes (m-2)}$. In this case, the elements of $Q^{m}_2$
can equivalently be expressed as
\begin{equation}\label{Eq4}
Z_{A(m-2, m-2)}^{P(H^*)}(f^1 \otimes f^2 \otimes \cdots \otimes f^{m-2} \otimes f^{m-1}_2 \otimes
f^{m-1}_1 \otimes f^m \otimes f^{m+1} \otimes \cdots \otimes f^{2m-2})
\end{equation}
with $\otimes_{i = 1}^{2m-2} f^i \in H^{*\otimes(2m-2)}$ (see Figure \ref{fig:pic43} for the definition of tangles $A(m-2, m-2)$).
\end{itemize}
\end{corollary} \begin{proof}
\begin{itemize}
\item[(i)] Follows from the definition of $Q^m_2$ as given in $\S 4$ along with an application of Corollary \ref{Eq}.
\item[(ii)] Follows from the definition of $Q^m_2$ as given in $\S 4$ together with an appeal to Corollary \ref{Eq} and Lemma \ref{eq}. \end{itemize} \end{proof} \begin{remark}\label{imp}
Note that $^*\!Q^m_2 = Q^m_2$ as vector spaces, only the multiplication in $^*\!Q^m_2$ is opposite to that of $Q^m_2$. We make
no notational distinction between the elements of $^*\!Q^m_2$ and $Q^m_2$. Thus, a general element of $^*\!Q^m_2$ will always be
expressed in the form as given by \eqref{Eq2} when $m$ is odd, or in the form as given by \eqref{Eq3} or \eqref{Eq4} when $m$ is even. \end{remark} Note that $\mathcal{N}^m \subset \mathcal{M}$ cannot be of depth $1$, for otherwise $\mathbb{C} \subset Q^m_1 \subset Q^m_2$ would be an instance of the basic construction and therefore $dim ~Q^m_2$ would equal $(dim ~Q^m_1)^2$, which is not possible since $dim ~Q^m_1 = (dim ~H)^{m-2}$ (see the proof of Lemma \ref{irr}) whereas $dim ~Q^m_2$, by an appeal to Corollary \ref{Eq1}, equals $(dim ~H)^{2(m-1)}$. Consequently, the depth of $\mathcal{N}^m \subset \mathcal{M}$ is greater than $1$.
In the following two propositions, namely, Proposition \ref{even} and Proposition \ref{odd}, we prove that $Q^m_1 \subset Q^m_2 \subset Q^m_3$ is an instance of the basic construction where Proposition \ref{even} treats the case when $m$ is even while Proposition \ref{odd} treats the case when $m$ is odd.
We will use Lemma \ref{iso1} frequently without any mention in the proofs of both the propositions.
\begin{proposition}\label{even} If $m > 1$, then $Q^{2m}_1 \subset Q^{2m}_2 \subset Q^{2m}_3$ is an instance of the basic construction with the Jones projection $e^{2m}_3$. \end{proposition} \begin{proof}
For notational convenience we use the symbol $e$ to denote $e^{2m}_3$.
Since the conditions (i) and (ii) of Lemma \ref{basic} are automatically satisfied, we only need to verify condition (iii).
Let us consider the elements of $Q^{2m}_2$ given by
\begin{align*}
&X = Z_{A(2m-2, 2m-2)}^{P(H^*)}(f^0 \otimes f^1 \otimes \cdots \otimes f^{2m-3} \otimes f^{2m-2}_2 \otimes f^{2m-2}_1 \otimes f^{2m-1} \otimes \cdots \otimes f^{4m-3}), \\ &Y = Z_{A(2m-2, 2m-2)}^{P(H^*)}((\epsilon \otimes F(1))^{\otimes (m-1)} \otimes g^{2m-2}_2 \otimes g^{2m-2}_1 \otimes g^{2m-1} \otimes g^{2m} \otimes \cdots \otimes g^{4m-3}).
\end{align*} A simple computation shows that $XeY$ equals the element \begin{align*} \delta^{-2m} Z_S^{P(H^*)}(& f^0 \otimes \cdots \otimes f^{2m-3} \otimes f^{2m} \otimes \cdots \otimes f^{4m-3} \otimes g^{2m} \otimes \cdots \otimes g^{4m-3} \otimes g^{2m-1} \otimes f^{2m-1}\\ & \otimes f^{2m-2}_1 \otimes Sg^{2m-2}_1 f^{2m-2}_2 \otimes g^{2m-2}_2) \end{align*} where $S \in \mathcal{T}(6m)$ is as shown in Figure \ref{fig:pic42}. Consider the linear map $\theta : H^{*\otimes 2} \rightarrow H^{*\otimes 3}$ given by \begin{align*}
f \otimes g \mapsto f_1 \otimes Sg_1 f_2 \otimes g_2. \end{align*}
\begin{figure}
\caption{Tangle $S$}
\label{fig:pic42}
\end{figure} We assert that $\theta$ is injective. To see this one can easily verify that the map from $H^{*\otimes 3} \rightarrow H^{*\otimes 2}$ given by \begin{align*}
f \otimes g \otimes k \mapsto f(1) k_1 g \otimes k_2 \end{align*} is a left inverse of $\theta$, proving the assertion. Thus clearly $Q^{2m}_2 e Q^{2m}_2$ contains the image of the injective linear map \begin{align*} Z_S \circ ({Id_{H^*}}^{\otimes(6m-4)} \otimes \theta) : H^{*\otimes (6m-2)} \rightarrow P(H^*)_{6m} \end{align*} and consequently, $dim ~(Q^{2m}_2 e Q^{2m}_2) \geq (dim ~H)^{6m-2}$. Thus in order to see that $Q^{2m}_2 e Q^{2m}_2 = Q^{2m}_3$ we just need to show that $dim ~Q^{2m}_3 \leq (dim ~H)^{6m-2}$. To this end we consider the tangle $P \in \mathcal{T}(6m)$ as shown in Figure \ref{fig:pic44}. \begin{figure}
\caption{Tangle $P$}
\label{fig:pic44}
\end{figure} Since $Z_P^{P(H^*)}$ induces a linear isomorphism of $H^{*\otimes(6m-1)}$ onto $P(H^*)_{6m}$, we observe that, in view of Proposition \ref{descrip}, the space $Q^{2m}_3$ is linearly isomorphic to \begin{align*}
\{f^1 \otimes \cdots \otimes f^{6m-4} \otimes x \otimes y \otimes z \in H^{*\otimes(6m-4)} \otimes H^{\otimes3} :
\alpha_h^{2m, 3}(Z_P(f^1 \otimes \cdots \otimes f^{6m-4}\\
\otimes Fx \otimes Fy \otimes Fz)) =
Z_P(f^1 \otimes \cdots \otimes f^{6m-4} \otimes Fx \otimes Fy \otimes Fz)\}.
\end{align*}
A trivial computation in $P(H^*)$ shows that
\begin{align*}
\alpha_h^{2m, 3}(Z_P^{P(H^*)}(\otimes_{i=1}^{6m-4}f^i \otimes Fx \otimes Fy \otimes Fz)) =
Z_P^{P(H^*)}(\otimes_{i=1}^{6m-4}f^i \otimes F(h_1x) \otimes F(h_2y) \otimes F(h_3z)).
\end{align*}
Now using injectivity of $Z_P$ and invertibility of $F$, we conclude that $Q^{2m}_3$ is linearly isomorphic to the space $W$ defined by
\begin{align*}
W = \{x \otimes y \otimes z \otimes f^1 \otimes \cdots \otimes f^{6m-4} \in H^{\otimes3} \otimes H^{*\otimes(6m-4)} :\\
h_1x \otimes h_2y \otimes h_3z \otimes f^1 \otimes \cdots \otimes f^{6m-4} =
x \otimes y \otimes z \otimes f^1 \otimes \cdots \otimes f^{6m-4} \}.
\end{align*} Thus it suffices to see that $dim ~W \leq (dim ~H)^{6m-2}$. Note that the space $W$, using the Hopf-algebraic formula $h_1 a \otimes h_2 = h_1 \otimes h_2 Sa$, can equivalently be described as \begin{align*}
W = \{x \otimes y \otimes z \otimes f^1 \otimes \cdots \otimes f^{6m-4} \in H^{\otimes3} \otimes H^{*\otimes(6m-4)} :\\
h_1 \otimes h_2Sx_2y \otimes h_3Sx_1z \otimes f^1 \otimes \cdots \otimes f^{6m-4} =
x \otimes y \otimes z \otimes f^1 \otimes \cdots \otimes f^{6m-4}\}. \end{align*} Further we observe that $W$ is contained in the range of the linear map $\rho: H^{\otimes2} \otimes H^{*\otimes(6m-4)} \rightarrow H^{\otimes3} \otimes H^{*\otimes(6m-4)}$ given by \begin{align*}
a \otimes b \otimes_{i= 1}^{6m-4}f^i \mapsto h_1 \otimes h_2 a \otimes h_3 b \otimes_{i= 1}^{6m-4}f^i, \end{align*} for, if $X = x \otimes y \otimes z \otimes f^1 \otimes \cdots \otimes f^{6m-4} \in W$, then $\rho(Sx_2y \otimes Sx_1z \otimes f^1 \otimes \cdots \otimes f^{6m-4}) = h_1 \otimes h_2Sx_2y \otimes h_3Sx_1z \otimes f^1 \otimes \cdots \otimes f^{6m-4} = X$ and consequently, $dim ~W \leq$ rank of $\rho \leq (dim ~H)^{6m-2}$, completing the proof. \end{proof}
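\begin{remark}
For the reader's convenience, we sketch the verification, implicit in the above proof, that the displayed map is a left inverse of $\theta$; the same pattern recurs in the proof of Proposition \ref{odd} below. Using the counit property $f_1(1) f_2 = f$ and the identity $g_2 Sg_1 \otimes g_3 = 1_{H^*} \otimes g$ (valid since the antipode of a Kac algebra is involutive), we compute
\begin{align*}
f \otimes g \mapsto f_1 \otimes Sg_1 f_2 \otimes g_2 \mapsto f_1(1) \ g_2 Sg_1 f_2 \otimes g_3 = f_1(1) f_2 \otimes g = f \otimes g.
\end{align*}
\end{remark}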
\begin{proposition}\label{odd}
Given $m > 1, Q^{2m-1}_1 \subset Q^{2m-1}_2 \subset Q^{2m-1}_3$ is an instance of the basic construction with the Jones projection $e^{2m-1}_3$.
\end{proposition}
\begin{proof}
Since the conditions (i) and (ii) of Lemma \ref{basic} are
automatically satisfied, we just need to verify condition (iii).
Note that $Q^{2m-1}_1 = H_{[1, 2m-3]}$, $Q^{2m-1}_2 = \{ X \in H_{[1, 4m-3]} : X \leftrightarrow H_{2m-1} \}$, $Q^{2m-1}_3 = \{ X \in
H_{[1, 6m-4]} : X \leftrightarrow \Delta(x) \in H_{2m-1} \otimes H_{6m-3}, \forall x \in H \}$.
Now an application of Lemma \ref{anti1} shows that the tower $Q^{2m-1}_1 \subset Q^{2m-1}_2 \overset{e^{2m-1}_3}{\subset}Q^{2m-1}_3$ is
$*$-anti-isomorphic to the tower $A \subset B \overset{e}{\subset} C$ with $e = \tilde{e}^{2m-1}_3$ where
$A = H_{[-(2m-3), -1]}, B = \{X \in H_{[-(4m-3), -1]}: X \leftrightarrow H_{-(2m-1)} \}$ and $C = \{ X \in H_{[-(6m-4), -1]} :
X \leftrightarrow \Delta(x) \in H_{-(6m-3)} \otimes H_{-(2m-1)}, \forall x \in H \}$. Thus it suffices to prove that
$BeB = C$, or equivalently, $dim ~BeB = dim ~C$.
Identify $H_{[-(6m-4), -1]}$ with $P(H^*)_{6m-3}$ and regard $A, B, C$ as subalgebras of $P(H^*)_{6m-3}$. In view of Corollary \ref{Eq1} we see that a general element of $B$ is of the form \begin{align*} Z_U^{P(H^*)}(f^1 \otimes f^2 \otimes \cdots \otimes f^{2m-3} \otimes f^{2m-2}_2 \otimes f^{2m-2}_1 \otimes f^{2m-1} \otimes f^{2m} \otimes \cdots \otimes f^{4m-4}) \end{align*} where $\otimes_{i=1}^{4m-4}f^i \in H^{* \otimes (4m-4)}$ and $U$ is the tangle with exactly $4m - 3$ internal $2$-boxes as shown in Figure \ref{fig:pic17}. \begin{figure}
\caption{Tangle $U$}
\label{fig:pic17}
\end{figure}\\ Now, given \begin{align*} &X = Z_U(f^1 \otimes f^2 \otimes \cdots \otimes f^{2m-3} \otimes f^{2m-2}_2 \otimes f^{2m-2}_1 \otimes f^{2m-1} \otimes \cdots \otimes f^{4m-4}), \\ &Y = Z_U(g^1 \otimes g^2 \otimes \cdots \otimes g^{2m-3} \otimes g^{2m-2}_2 \otimes g^{2m-2}_1 \otimes g^{2m-1} \otimes \underbrace{F(1) \otimes \epsilon \otimes F(1) \otimes \cdots \otimes \epsilon \otimes F(1)}_{\text{$2m-3$ factors}}), \end{align*} a little manipulation with the relations (E) and (A) shows that the element $XeY$ equals \begin{align*}
\delta^{-(2m-1)} &Z_Q^{P(H^*)}(\otimes_{i=1}^{2m-3}f^i \otimes_{i= 2m+1}^{4m-4} f^i \otimes_{i=1}^{2m-3} g^i \otimes g^{2m-2}_1Sg^{2m-1} \otimes g^{2m-2}_2f^{2m}\\ &\otimes g^{2m-2}_3f^{2m-1} \otimes g^{2m-2}_4Sf^{2m-2}_2 \otimes g^{2m-2}_5 \otimes f^{2m-2}_1) \end{align*}
where $Q$ is the tangle as shown in Figure \ref{fig:pic23}. \begin{figure}
\caption{Tangle $Q$}
\label{fig:pic23}
\end{figure} Observe that $ Q \in {\mathcal{T}}{(6m-3)}$. Let $\theta : H^{*\otimes 5} \rightarrow H^{* \otimes 6}$ be the linear map defined by \begin{align*} f \otimes g \otimes k \otimes u \otimes v \mapsto f_1 Sg \otimes f_2 k \otimes f_3 u \otimes f_4 Sv_2 \otimes f_5 \otimes v_1. \end{align*} We assert that $\theta$ is injective. To see this one can easily verify that the map $\theta^{\prime} : H^{*\otimes 6} \rightarrow H^{* \otimes 5}$ given by \begin{align*} f \otimes g \otimes k \otimes p \otimes u \otimes v \mapsto p(1) u_4 \otimes Sf u_3 \otimes Su_2 g \otimes Su_1 k \otimes v \end{align*} is a left inverse of $\theta$, proving the assertion. Now clearly $BeB$ contains the image of the injective linear map $Z_Q \circ ({Id_{H^*}}^{\otimes (6m-10)} \otimes \theta) : H^{*\otimes (6m-5)} \rightarrow P(H^*)_{6m-3}$ and consequently $dim ~BeB \geq (dim ~H)^{6m-5}$. To finish the proof we just need to show that $dim ~C \leq (dim ~H)^{6m-5}$, or equivalently, $dim ~Q^{2m-1}_3 \leq (dim ~H)^{6m-5}$. \begin{figure}
\caption{Tangle $R$}
\label{fig:pic63}
\end{figure} Let us consider the tangle $R \in {\mathcal{T}}{(6m-3)}$ as shown in Figure \ref{fig:pic63}. Since $Z_R^{P(H)}$ is a linear isomorphism of $H^{\otimes(6m-4)}$ onto $P(H)_{6m-3}$, we observe, in view of Proposition \ref{descrip}, that the space $Q^{2m-1}_3$ is linearly isomorphic to the space \begin{align*}
\{\otimes_{i = 1}^{6m-4} x^i \in H^{\otimes(6m-4)} : \alpha_h^{2m-1, 3}(Z_R^{P(H)}(\otimes_{i = 1}^{6m-4} x^i)) =
Z_R^{P(H)}(\otimes_{i = 1}^{6m-4} x^i)\}. \end{align*} A simple computation shows that \begin{align*}
\alpha_h^{2m-1, 3}(Z_R^{P(H)}(\otimes_{i = 1}^{6m-4} x^i)) = Z_R^{P(H)}(\otimes_{i = 1}^{6m-7} x^i \otimes h_1 x^{6m-6} \otimes h_2 x^{6m-5} \otimes h_3 x^{6m-4}) \end{align*} and hence, $Q^{2m-1}_3$ is linearly isomorphic to the space $V$ defined by \begin{align*}
V = \{\otimes_{i = 1}^{6m-4} x^i \in H^{\otimes(6m-4)} : h_1x^1 \otimes h_2x^2 \otimes h_3x^3 \otimes_{i=4}^{6m-4} x^i =
\otimes_{i = 1}^{6m-4} x^i \}. \end{align*} Proceeding in a similar fashion as in the last part of the proof of Proposition \ref{even}, one can easily see that $dim ~V \leq (dim ~H)^{6m-5}$, completing the proof. \end{proof}
We are now ready to prove Theorem \ref{depth}. \begin{proof}[Proof of Theorem \ref{depth}] Follows immediately from Propositions \ref{even} and \ref{odd}. \end{proof}
\section{Structure maps on $\mathcal{N}^{\prime} \cap \mathcal{M}_2$}
The main result of \cite{Sde2018} asserts that the quantum double inclusion of $R^H \subset R$ is isomorphic to
$R \subset R \rtimes D(H)^{cop}$ for some outer action of $D(H)^{cop}$ on $R$. We proved this result by constructing a model
$\mathcal{N} \subset \mathcal{M}$ (see \cite[Definition 18]{Sde2018} for the definition of
$\mathcal{N}$) for the quantum double inclusion of $R^H \subset R$ and then showing that the planar algebras associated to
$\mathcal{N} \subset \mathcal{M}$ and $R \subset R \rtimes D(H)^{cop}$ are isomorphic.
As an immediate consequence of this result, we obtain that the relative
commutant $\mathcal{N}^{\prime} \cap \mathcal{M}_2$ is isomorphic to $D(H)^{cop *} ( = D(H)^{* op})$ as Kac algebras. From the proof of the
main result of \cite{Sde2018}, namely \cite[Theorem 40]{Sde2018}, the structure maps on $\mathcal{N}^{\prime} \cap \mathcal{M}_2$ cannot
be derived directly. In this section
we explicitly describe the structure maps of $\mathcal{N}^{\prime} \cap \mathcal{M}_2$ which will be useful in $\S 7$ to achieve a simple
and nice description of the weak Hopf $C^*$-algebra structures on $(\mathcal{N}^m)^{\prime} \cap \mathcal{M}_2$ ($m > 2$).
Let $N \subset M$ be a finite-index, depth-two, irreducible subfactor and let $N ( = M_0 ) \subset M ( = M_1 ) \subset M_2 \subset \cdots$ be the associated basic construction tower. Then the relative commutants $N^{\prime} \cap M_2$ and $M^{\prime} \cap M_3$ admit mutually dual Kac algebra structures.
Let $P$ denote the subfactor planar algebra associated to $N \subset M$ so that $P_2 = N^{\prime} \cap M_2$. The next theorem (Theorem \ref{Kac}) summarises the content of \cite[$\S 2$]{DasKdy2005}, where the authors gave a pictorial description of the structure maps on $P_2$.
Before we state the theorem, we need to specify certain useful tangles. Let $E, F, G$ denote tangles as shown in Figure \ref{fig:pic47}. \begin{figure}
\caption{Tangles $E$(left), $F$(middle) and $G$(right)}
\label{fig:pic47}
\end{figure}
\begin{theorem}\label{Kac}\cite{DasKdy2005}
The counit $\varepsilon : P_2 \rightarrow \mathbb{C}$ and the antipode $S : P_2 \rightarrow P_2$ are defined for $a \in P_2$ by \begin{align*} \varepsilon(a) = [M : N]^{- \frac{1}{2}} Z^P_F(a), \ S(a) = Z^P_{R_2}(a), \end{align*} and the comultiplication $\Delta : P_2 \rightarrow P_2 \otimes P_2$ is the unique linear map such that the equation \begin{align*}
Z_E^P(a \otimes x \otimes y) = [M : N]^{- \frac{1}{2}} tr_2^{0, +}(a_1x) \ tr_2^{0, +}(a_2y) \end{align*} holds for all $a, x, y \in P_2$.
\end{theorem}
Recall from \cite[Theorem 38]{Sde2018} that the planar algebra associated to $\mathcal{N} \subset \mathcal{M}$, denoted $^*\!Q$, is a planar subalgebra of $^{*(2)}\!P(H^*)$ and for each integer $k \geq 1$, the space $^*\!Q_k$ is the opposite algebra of \begin{align*}
\{ &X \in H_{[0, 2k-2]}: X \ \mbox{commutes with} \ \Delta_{l-1}(x) \in \otimes_{i = 1}^l H_{4i-3}, \forall x \in H \ \mbox{where} \
l = \frac{k}{2} \ \mbox{or} \ \frac{k+1}{2} \\
&\mbox{according as} \ k \ \mbox{is even or odd} \}. \end{align*} Thus, in particular, $^*\!Q_2$ is the opposite algebra of
\begin{align*}
\{X \in H^* \rtimes H \rtimes H^* : X \ \mbox{commutes with the middle} \ H \}.
\end{align*}
That is, $^*\!Q_2$ is the opposite algebra of $\mathcal{S}$ (see Lemma \ref{element} for the definition of $\mathcal{S}$)
and consequently, by Lemma \ref{element},
$^*\!Q_2$ precisely consists of elements of the form $Z_A^{P(H^*)}(f_2 \otimes f_1 \otimes g)$ where $f \otimes g \in H^{*\otimes2}$. We apply Theorem \ref{Kac} above to derive the structure maps for $^*\!Q_2$. \begin{proposition}\label{structure}
Let $X = Z_A^{P(H^*)}(f_2 \otimes f_1 \otimes k) \in {^*\!Q_2}$. Then
\begin{align*} &\mbox{Comultiplication:} \ \Delta(X) = \delta Z_A((f S \phi_2)_2 \otimes (f S \phi_2)_1
\otimes \phi_1 k_2 S \phi_3) \otimes Z_A( (\phi_4)_2 \otimes (\phi_4)_1 \otimes k_1),\\
& \mbox{Antipode:} \ S(X) = Z_A(Sf_2 \otimes Sf_3 \otimes f_1SkSf_4), \\
&\mbox{Counit:} \ \varepsilon(X) = \delta f(h) k(1), \\
&\mbox{Involution:} \ X^* = Z_A^{P(H^*)}(Sf_1^* \otimes Sf_2^* \otimes k^*).
\end{align*} \end{proposition} \begin{proof}
Applying Theorem \ref{Kac}, the formula for $S(X)$ follows directly by using the relations (E) and (A), whereas to verify the formula for $\varepsilon(X)$
one needs to use the relations (T) and (M). To verify the involution formula we just need to observe that $X^* = Z_{A^*}^{P(H^*)}(f_2^* \otimes f_1^*
\otimes k^*) = Z_A^{P(H^*)}(Sf_1^* \otimes Sf_2^* \otimes k^*)$.
We now verify the formula for $\Delta(X)$. It follows from Theorem \ref{Kac} that, for any $W \in {^*\!Q_2}$, $\Delta_{^*\!Q_2}(W)
= W_1 \otimes W_2$ is the element of $(^*\!Q_2)^{\otimes 2}$ such that the equation
\begin{align}\label{cc}
Z_E^{^*\!Q}(W \otimes Y \otimes Z) = \delta^{-2}(= [\mathcal{M} : \mathcal{N}]^{- \frac{1}{2}}) tr_2^{(0, +)}(W_1 \ Y) \ tr_2^{(0, +)}(W_2 \ Z)
\end{align}
holds for all $Y, Z \in {^*\!Q_2}$.
Let $Y, Z$ be arbitrary elements in $^*\!Q_2$, say, $Y = Z_A(g_2 \otimes g_1 \otimes p), Z = Z_A(u_2 \otimes u_1 \otimes v)$. Then
the element $Z^{^*\!Q}_E(X \otimes Y \otimes Z) = Z_{E^{*(2)}}^{P(H^*)}(X \otimes Y \otimes Z)$ is as shown in Figure \ref{fig:pic48}.
A very lengthy but completely routine computation in $P(H^*)$, along with repeated application of well-known Hopf-algebraic formulae such as $h_1 a \otimes h_2 = h_1 \otimes h_2 Sa$ (valid since $h$ is the two-sided Haar integral), shows that \begin{align*}
&Z^{^*\!Q}_E(X \otimes Y \otimes Z) = \delta^5 f(Sh^1_1 Sh^2_1) g(h^2_2 h^1_2) k(h^3_1 h^4_1) u(h^4_4 h^2_3 h^1_3 Sh^4_2) p(h^4_3) v(h^3_2)\\
&= \delta^{-2} \ \delta \ tr_2(Z_A((f S \phi_2)_2 \otimes (f S \phi_2)_1
\otimes \phi_1 k_2 S \phi_3) \ Y) \ tr_2(Z_A( (\phi_4)_2 \otimes (\phi_4)_1 \otimes k_1) \ Z), \end{align*} verifying the formula for $\Delta(X)$.
\begin{figure}
\caption{ $Z^{^*\!Q}_E(X \otimes Y \otimes Z)$}
\label{fig:pic48}
\end{figure} \end{proof} Using Lemma \ref{eq}, it follows immediately from Proposition \ref{structure} that the structure maps of $^*\!Q_2$ can also be expressed as: \begin{lemma}\label{sstructure}
Let $X = f_1Sk_3Sf_3 \rtimes F^{-1}(f_2k_2) \rtimes Sk_1 \in {^*\!Q_2}$, where $f \otimes k \in {H^*}^{\otimes 2}$. Then
\begin{align*}
\mbox{Comultiplication:} \ \Delta(X) = \ &\delta \ ((fS\phi_2)_1 \ S(\phi_1 k_2 S\phi_3)_3 \ S(fS\phi_2)_3 \rtimes F^{-1}((fS\phi_2)_2 \
(\phi_1 k_2 S\phi_3)_2) \rtimes S(\phi_1 k_2 S\phi_3)_1)\\
&\otimes \ ((\phi_4)_1 \ S(k_1)_3 \ S(\phi_4)_3 \rtimes F^{-1}((\phi_4)_2 (k_1)_2) \rtimes S(k_1)_1),\\
\mbox{Antipode:} \ S(X) = \ & k_1 \rtimes F^{-1}S(f_2k_2) \rtimes S(f_1Sk_3Sf_3),\\
\mbox{Counit:} \ \varepsilon(X) = \ & \delta f(h) k(1),\\
\mbox{Involution:} \ X^* = \ & (Sf^*)_1 S((k^*)_3) (Sf^*)_3 \rtimes F^{-1}((Sf^*)_2 (k^*)_2) \rtimes S((k^*)_1).
\end{align*} \end{lemma}
\begin{remark}\label{Dr} In $\S 1.4$, we considered a version of $D(H)^*$ whose underlying vector space is $H^* \otimes H^*$ and the structure maps are given by Lemma \ref{dr}. Consider the linear isomorphism \begin{align*}
\nu: D(H)^{* op} = H^* \otimes H^* \rightarrow {^*\!Q_2} \end{align*} that takes $g \otimes f \mapsto Z_A^{P(H^*)}(f_2 \otimes f_1 \otimes g)$. It follows immediately from Proposition \ref{structure} and the structure maps on $D(H)^*$ as given by Lemma \ref{dr} that $\nu$ is an isomorphism of Kac algebras.
\end{remark}
\section{Weak Hopf $C^*$-algebra structure on $(\mathcal{N}^m)^{\prime} \cap \mathcal{M}_2, m > 2$}
It is well-known (see \cite{Das2004}, \cite{NikVnr2000}) that if $N \subset M$ is a depth two, reducible, finite-index inclusion of $II_1$-factors
and if $N (= M_0) \subset M (= M_1) \subset M_2 \subset M_3 \subset \cdots$ is the Jones' basic construction tower associated to $N \subset M$,
then the relative commutants
$N^{\prime} \cap M_2$ and $M^{\prime} \cap M_3$ admit mutually dual weak Hopf $C^*$-algebra structures.
The following Theorem \ref{weak} (reformulation of Proposition 4.7 of \cite{Das2004}) explicitly describes the weak Hopf $C^*$-algebra
structures on
$N^{\prime} \cap M_2$.
Before we state the theorem, we need to fix some notations. Let $P$ be the planar algebra associated to $N \subset M$ so that
$P_2 = N^{\prime} \cap M_2$, $P_{1, 2} = M^{\prime} \cap
M_2$. Set $[M:N] = d^2$. Further,
let $z_R$ denote the unique element of $P_{1, 2}$ for which $tr_L(x) = tr_2(z_R x)$
for all $x \in P_{1, 2}$, where $tr_L$ is the trace of the left regular representation of $P_{1, 2}$ and $tr_2$ denotes the normalised pictorial
trace on $P_2$. One can easily see that $z_R$ is a well-defined, central, positive, invertible element of $P_{1, 2}$.
By $\omega_R$ we denote the unique positive
square root of $z_R$ and let $\omega_L$ be $Z_{R_2^2}(\omega_R)$. We will use $\omega$ to denote $\omega_L \omega_{R}^{-1}$. The following theorem
describes the structure maps of $P_2$.
\begin{theorem}\label{weak} \cite{Das2004}
The comultiplication $\Delta : P_2 \rightarrow P_2 \otimes P_2$ is the unique linear map such that the equation
\begin{align*}
Z_E^P(\omega_R a \omega_L \otimes x \otimes y) = d^{-1} \ Z_{tr_2^{0, +}}^P(\omega_R a_1 \omega_L x)\ .\ Z_{tr_2^{0, +}}^P(\omega_R a_2 \omega_L y)
\end{align*}
holds in $P$ for all $a, x ,y \in P_2$.
The counit $\varepsilon:P_2 \rightarrow \mathbb{C}$ and the antipode $S: P_2 \rightarrow P_2$ are defined by
\begin{align*}
\varepsilon(a) = d^{-1} Z_F^P(\omega_R a \omega_L),\ \mbox{and}\ \ S(a) = Z_G^P(\omega \otimes a \otimes \omega^{-1}),
\end{align*}
for all $a$ in $P_2$, where $E, F, G$ are the tangles shown in Figure \ref{fig:pic47}.
\begin{figure}
\caption{Definition of $\Delta$}
\label{comult}
\label{fig:pic52}
\end{figure}
\begin{figure}
\caption{Definitions of $S$ and $\epsilon$}
\label{antico}
\label{fig:pic101}
\end{figure}
\end{theorem}
We use Theorem \ref{weak} to recover the weak Hopf $C^*$-algebra structure on $^*\!Q^m_2$ for all $m > 2$. Let us
use the symbols $\omega_R^m, \omega_L^m$ and $\omega^m$ to denote the elements $\omega_R, \omega_L$ and $\omega$ respectively
of $^*\!Q^m_2$.
Note that in order to find the structure maps of $^*\!Q^m_2$ using Theorem \ref{weak}, we must know the elements
$\omega_R^m, \omega_L^m$ and $\omega^m$. It follows from Lemma \ref{second} that
\begin{align*}
^*\!Q^m_{1, 2} =
\begin{cases}
\{X \in H^{op}_{[m+1, 2m-1]}: X \ \mbox{commutes with} \ H_m \}, \ \mbox{if} \ m \ \mbox{is odd}\\
\{X \in H^{op}_{[m, 2m-2]}: X \ \mbox{commutes with} \ H_{m-1} \}, \ \mbox{if} \ m \ \mbox{is even}.
\end{cases}
\end{align*}
Then by an appeal to Lemma \ref{commutants} it follows immediately that $^*\!Q^m_{1, 2}$ is identified with the subalgebra
\begin{align*}
H^{op}_{[m+2, 2m-1]} \ \mbox{or} \ H^{op}_{[m+1, 2m-2]} \ \mbox{of} \ ^*\!Q^m_2
\end{align*}
according as $m$ is odd or even.
Thus, for any $m > 2$, $^*\!Q^m_{1, 2} \cong A(H)_{m-2}^{op}$
as algebras.
One can easily see that
if $A$ is any multi-matrix algebra over the complex field, then for any $a \in A, tr_L^A(a) = tr_R^A(a)$ and consequently,
$tr_L^A(a) = tr_L^{A^{op}}(a)$ where $tr_L^A(a)$ (resp., $tr_R^A(a)$) denotes the trace of the linear endomorphism of $A$ given by left (resp., right)
multiplication by $a$. Thus, given any $X$ in $^*\!Q^m_{1, 2} = A(H)_{m-2}^{op}$, we have $tr_L^{^*\!Q^m_{1, 2}}(X) = tr_L^{A(H)_{m-2}^{op}}(X)
= tr_L^{A(H)_{m-2}}(X)$. The following lemma computes $tr_L^{A(H)_k}(X)$ for any $X \in A(H)_k$ where $k \geq 1$ is an integer.
\begin{lemma}\label{4}
$tr_L^{A(H)_k}(X) = (dim ~H)^k$ times the normalised pictorial trace of $X$.
\end{lemma}
\begin{proof}
If $k$ is even, then $A(H)_k$ is a matrix algebra by Lemma \ref{matrixalg} and
the result follows immediately.
Now suppose that $H$ is a finite-dimensional Hopf algebra acting on a finite-dimensional algebra $A$. A simple exercise in linear algebra
shows that for any $a \in A$, $tr^{A \rtimes H}_L(a \rtimes 1) = dim ~H \ tr^A_L(a)$. Thus if $k$ is odd, then given $X \in A(H)_k$,
we note that $tr_L^{A(H)_k}(X) = \frac{1}{dim ~H} tr_L^{A(H)_{k+1}}(X \rtimes 1) = \frac{1}{dim ~H} (dim ~H)^{k+1}$ times the normalised pictorial
trace of $X \rtimes 1$, which equals $(dim ~H)^k$ times the normalised pictorial trace of $X$ (the normalised pictorial traces of $X$ and $X \rtimes 1$ agree). Here the second equality holds since $k+1$ is even, completing the proof.
\end{proof}
As an immediate corollary we have
\begin{corollary}\label{omega}
$\omega^m_L = \omega^m_R = (dim ~H)^{\frac{m-2}{2}} 1_{^*\!Q^m_2}$ and hence, $\omega^m = 1_{^*\!Q^m_2}$ for $m > 2$.
\end{corollary} \begin{proof}
It follows from Lemma \ref{4} and the discussion preceding Lemma \ref{4} that for any $X$ in $^*\!Q^m_{1, 2}$, $tr_L^{^*\!Q^m_{1, 2}}(X) =
tr_L^{A(H)_{m-2}}(X) = (dim ~H)^{m-2}$ times the normalised pictorial trace of $X$. Hence, $\omega_R^m = (dim ~H)^{\frac{m-2}{2}} 1_{^*\!Q^m_{1, 2}}$ and consequently,
$\omega_L^m = (dim ~H)^{\frac{m-2}{2}} 1_{^*\!Q^m_2}$, $\omega^m = 1_{^*\!Q^m_2}$.
\end{proof}
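Before proceeding, here is a minimal numerical sanity check (not part of the argument; it assumes Python with NumPy and encodes left and right multiplication on $M_n(\mathbb{C})$ via Kronecker products, using the column-stacking convention $vec(AXB) = (B^{T} \otimes A)\,vec(X)$) of the trace identities behind Lemma \ref{4} and its corollary.
\begin{verbatim}
import numpy as np

# Minimal sketch: on A = M_n(C), the trace of L_a : X -> aX equals the
# trace of R_a : X -> Xa, and both equal n.tr(a).  With column stacking,
# L_a = I kron a and R_a = a^T kron I as n^2 x n^2 matrices.
n = 3
rng = np.random.default_rng(0)
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

L = np.kron(np.eye(n), a)      # matrix of X -> aX
R = np.kron(a.T, np.eye(n))    # matrix of X -> Xa

assert np.allclose(np.trace(L), np.trace(R))      # tr_L = tr_R
assert np.allclose(np.trace(L), n * np.trace(a))  # = n . tr(a)

# With the normalised trace tau(a) = tr(a)/n this reads
# tr_L(a) = n^2 . tau(a) = (dim A) . tau(a), the pattern of the lemma
# when A(H)_k is a matrix algebra of dimension (dim H)^k.
assert np.allclose(np.trace(L), n**2 * (np.trace(a) / n))
\end{verbatim}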
We now proceed towards recovering the structure maps of $^*\!Q^m_2$. The entire procedure solely relies on Hopf-algebraic as well as pictorial computations in
$P(H)$ or $P(H^*)$. At this point, it is worth recalling from Corollary \ref{Eq1} and Remark \ref{imp} that a general element of $^*\!Q^m_2$ is of the form
\begin{align}\label{1}
x^1 \rtimes f^2 \rtimes \cdots \rtimes x^{m-2} \rtimes k_1 Sp_3 Sk_3 \rtimes
F^{-1}(k_2 p_2) \rtimes Sp_1 \rtimes x^{m+2} \rtimes f^{m+3} \rtimes \cdots \rtimes x^{2m-1}
\end{align} or \begin{align}\label{2}
f^1 \rtimes x^2 \rtimes \cdots \rtimes x^{m-2} \rtimes k_1 Sp_3 Sk_3 \rtimes
F^{-1}(k_2 p_2) \rtimes Sp_1 \rtimes x^{m+2} \rtimes \cdots \rtimes f^{2m-1} \end{align} according as $m$ is odd or even. Moreover, if $m$ is even, the elements of $^*\!Q^m_2$ can also be expressed as \begin{align}\label{3}
Z_{A(m-2, m-2)}^{P(H^*)}(f^1 \otimes f^2 \otimes \cdots \otimes f^{m-2} \otimes f^{m-1}_2 \otimes
f^{m-1}_1 \otimes f^m \otimes f^{m+1} \otimes \cdots \otimes f^{2m-2}). \end{align}
First we find the formula for antipode in $^*\!Q^m_2$. It follows from Theorem \ref{weak} and Lemma \ref{omega} that for any $X \in {^*\!Q^m_2}$, \begin{align*}
S(X) &= Z^{^*\!Q^m}_{R^2_2} (X)\\
&= Z_{(R_2^2)^{* m}}^{P(H^*)}(X)\\
&= Z_{(R_2^2)^m}^{P(H^*)}(X) \ (\mbox{since} \ (R_2^2)^* = R_2^2). \end{align*}
Let us consider a general element, say $X$, of $^*\!Q^m_2$ as given by \eqref{1} or \eqref{2} according as $m$ is odd or even. Assume that $m$ is even. Then note that $X$ is identified
with
\begin{align*}
Z_{T^{2m}}^{P(H^*)}(f^1 \otimes Fx^2 \otimes \cdots \otimes Fx^{m-2} \otimes k_1 Sp_3 Sk_3 \otimes k_2 p_2 \otimes Sp_1
\otimes Fx^{m+2} \otimes f^{m+3} \otimes
\cdots \otimes f^{2m-1}).
\end{align*}
Consequently,
\begin{align*}
S(X) = Z_{(R_2^2)^m}^{P(H^*)}(Z_{T^{2m}}^{P(H^*)}(&f^1 \otimes Fx^2 \otimes \cdots \otimes Fx^{m-2}
\otimes k_1 Sp_3 Sk_3 \otimes k_2 p_2 \otimes Sp_1 \\
&\otimes Fx^{m+2} \otimes f^{m+3} \otimes \cdots \otimes f^{2m-1}))
\end{align*}
which, by repeated application of
the relation (A) in $P(H^*)$ is easily seen to be equal to
\begin{align*}
Z_{T^{2m}}^{P(H^*)}(Sf^{2m-1} \otimes \cdots \otimes SFx^{m+2} \otimes p_1 \otimes S(k_2 p_2) \otimes S(k_1 Sp_3 Sk_3) \otimes
SFx^{m-2} \otimes
\cdots \otimes SFx^2 \otimes Sf^1)
\end{align*}
which, by virtue of the fact that $FS = SF$, is identified with
\begin{align*}
Sf^{2m-1} \rtimes \cdots \rtimes Sx^{m+2} \rtimes p_1 \rtimes F^{-1}S(k_2 p_2) \rtimes S(k_1 Sp_3 Sk_3) \rtimes
Sx^{m-2} \rtimes
\cdots \rtimes Sx^2 \rtimes Sf^1.
\end{align*}
Thus, we obtain the formula for $S(X)$ when $m$ is even. Proceeding in exactly the same way, one can show that the formula for $S(X)$, when $m$ is odd,
is given by
\begin{align*}
Sx^{2m-1} \rtimes \cdots \rtimes Sx^{m+2} \rtimes p_1 \rtimes F^{-1}S(k_2 p_2) \rtimes S(k_1 Sp_3 Sk_3) \rtimes
Sx^{m-2} \rtimes
\cdots \rtimes Sf^2 \rtimes Sx^1.
\end{align*}
Thus we have proved that:
\begin{lemma}\label{wanti}
Let $X \in {^*\!Q^m_2}$ be as given by \eqref{1} or \eqref{2} according as $m$ is odd or even. Then $S(X)$ is given by
\begin{align*}
Sf^{2m-1} \rtimes \cdots \rtimes Sx^{m+2} \rtimes p_1 \rtimes F^{-1}S(k_2 p_2) \rtimes S(k_1 Sp_3 Sk_3) \rtimes
Sx^{m-2} \rtimes
\cdots \rtimes Sx^2 \rtimes Sf^1
\end{align*}
or
\begin{align*}
Sx^{2m-1} \rtimes \cdots \rtimes Sx^{m+2} \rtimes p_1 \rtimes F^{-1}S(k_2 p_2) \rtimes S(k_1 Sp_3 Sk_3) \rtimes
Sx^{m-2} \rtimes
\cdots \rtimes Sf^2 \rtimes Sx^1
\end{align*}
according as $m$ is odd or even.
\end{lemma}
Among all the structure maps, the comultiplication formula is the hardest to
recover, and we derive it in steps.
By an appeal to Corollary \ref{omega} and Lemma \ref{index}, it follows immediately from Theorem \ref{weak} that given $X \in {^*\!Q^m_2}$,
$\Delta_{^*\!Q^m_2}(X) =
X_1 \otimes X_2$ is the element of $({^*\!Q^m_2})^{\otimes 2}$ such that the equation
\begin{align}\label{c}
Z_E^{^*\!Q^m}(X \otimes Y \otimes Z) = \delta^{-m} \ \delta^{2m-4} \ Z_{tr_2^{(0, +)}}^{^*\!Q^m}( X_1 Y)\ .\
Z_{tr_2^{(0, +)}}^{^*\!Q^m}(X_2 Z),
\end{align}
holds for all $Y, Z \in {^*\!Q^m_2}$.
We begin by finding the comultiplication formula for a special class of elements of
$^*\!Q^m_2$.
Recall from the discussion preceding Proposition \ref{structure} in $\S 6$ that the space $^*\!Q_2$, where $^*\!Q$ is the planar algebra associated
to $\mathcal{N} \subset \mathcal{M}$, coincides with $\mathcal{S}$ as a vector space but as an algebra it is the opposite algebra of $\mathcal{S}$.
Given $f \rtimes x \rtimes g \in {^*\!Q_2}$, we define $X^m_{f \rtimes x \rtimes g}$ to be the element of $^*\!Q^m_2$ given by
$\underbrace{1 \rtimes \epsilon \rtimes \cdots \rtimes 1}_{\text{$m-2$ \mbox{terms}}} \rtimes
f \rtimes x \rtimes g \rtimes \underbrace{1 \rtimes \epsilon \rtimes \cdots \rtimes 1}_{\text{$m-2$ \mbox{terms}}}$ or
$\underbrace{\epsilon \rtimes 1 \rtimes \cdots \rtimes 1}_{\text{$m-2$ \mbox{terms}}} \rtimes f \rtimes x \rtimes g \rtimes
\underbrace{1 \rtimes \epsilon \rtimes \cdots \rtimes \epsilon}_{\text{$m-2$ \mbox{terms}}}$ according as $m$ is odd or even.
Let $\Delta_{^*\!Q_2}(f \rtimes x \rtimes g) = (f \rtimes x \rtimes
g)_1 \otimes (f \rtimes x \rtimes g)_2$. The following lemma computes $\Delta_{^*\!Q^m_2}(X^m_{f \rtimes x \rtimes g})$.
\begin{lemma}\label{antipode}
$\Delta_{^*\!Q^m_2}(X^m_{f \rtimes x \rtimes g}) = 1_1 X^m_{(f \rtimes x \rtimes g)_1} \otimes 1_2 X^m_{(f \rtimes x \rtimes g)_2} \in
(^*\!Q^m_2)^{\otimes 2}$ where
$1_1 \otimes 1_2 = \Delta_{^*\!Q^m_2}(1)$.
\end{lemma}
\begin{proof}
To avoid notational clumsiness and to elucidate the computational procedure, instead of treating the general case, we explicitly work out the
particular case $m = 4$. The general case of even $m > 2$ follows in a similar fashion, and the case of odd $m > 2$
follows in almost the same way with slight modifications.
Let $f \rtimes x \rtimes g \in {^*\!Q_2}$. Let us consider the element $X^4_{f \rtimes x \rtimes g} \in {^*\!Q^4_2}$.
Let $Y, Z$ be arbitrary elements of $^*\!Q^4_2$, say,
$Y = Z_{A(2, 2)}^{P(H^*)}(k^1 \otimes k^2 \otimes u_2 \otimes u_1 \otimes v \otimes k^3 \otimes k^4)$ and
$Z = Z_{A(2, 2)}^{P(H^*)}(p^1 \otimes p^2 \otimes \tilde{u}_2 \otimes \tilde{u}_1 \otimes \tilde{v} \otimes p^3 \otimes p^4)$.
It follows from \eqref{c} that in order to verify the comultiplication formula for $X^4_{f \rtimes x \rtimes g}$, we just need to verify that \begin{align*} Z^{^*\!Q^4}_{tr_2^{(0, +)}}(1_1 X^4_{(f \rtimes x \rtimes g)_1}Y) \ Z^{^*\!Q^4}_{tr_2^{(0, +)}}(1_2 X^4_{(f \rtimes x \rtimes g)_2}Z) = Z^{^*\!Q^4}_{E}(X^4_{f \rtimes x \rtimes g} \otimes Y \otimes Z). \end{align*} Another appeal to \eqref{c} shows that \begin{align*}
& Z^{^*\!Q^4}_{tr_2^{(0, +)}}(1_1 X^4_{(f \rtimes x \rtimes g)_1}Y) \ Z^{^*\!Q^4}_{tr_2^{(0, +)}}(1_2 X^4_{(f \rtimes x \rtimes g)_2}Z) \\ &= Z^{^*\!Q^4}_{E}(1 \otimes X^4_{(f \rtimes x \rtimes g)_1}Y \otimes X^4_{(f \rtimes x \rtimes g)_2}Z)\\ &= Z^{P(H^*)}_{E^{* (4)}}(1 \otimes Y X^4_{(f \rtimes x \rtimes g)_1} \otimes Z X^4_{(f \rtimes x \rtimes g)_2}). \end{align*} where the last equality follows from the definition of adjoint and cabling of a planar algebra. Now a pleasant but lengthy computation in $P(H^*)$ involving sphericality of $P(H^*)$ and repeated application of the relations (E), (T), (C) and (A), shows that \begin{align*} &Z^{P(H^*)}_{E^{* (4)}}(1 \otimes Y X^4_{(f \rtimes x \rtimes g)_1} \otimes Z X^4_{(f \rtimes x \rtimes g)_2})\\
& = \delta^4 \ p^4(h^1)\ p^3(1)\ k^1(h^2)\ k^2(1)\ (Sp^1_1 k^4)(h^3)\ (v S(k^3 p^1_2 p^2)_2)(h^4) \\ & Z_{E^{* (2)}}^{P(H^*)} (1 \otimes Z_A^{P(H^*)}(u_2 \otimes u_1
\otimes (k^3p^1_2 p^2)_1) \ (f \rtimes x \rtimes g)_1 \otimes Z_A^{P(H^*)}(\tilde{u}_2 \otimes\tilde{u}_1 \otimes
\tilde{v}) \ (f \rtimes x \rtimes g)_2)\\
& = \delta^4 \ p^4(h^1)\ p^3(1)\ k^1(h^2)\ k^2(1)\ (Sp^1_1 k^4)(h^3)\ (v S(k^3 p^1_2 p^2)_2)(h^4) \\
& Z_E^{^*\!Q} (1 \otimes (f \rtimes x \rtimes g)_1 \ Z_A^{P(H^*)}(u_2 \otimes u_1
\otimes (k^3p^1_2 p^2)_1) \otimes (f \rtimes x \rtimes g)_2 \ Z_A^{P(H^*)}(\tilde{u}_2 \otimes\tilde{u}_1 \otimes
\tilde{v})). \end{align*} Now repeated application of Equation \eqref{cc} shows that \begin{align*} &\delta^4 \ p^4(h^1)\ p^3(1)\ k^1(h^2)\ k^2(1)\ (Sp^1_1 k^4)(h^3)\ (v S(k^3 p^1_2 p^2)_2)(h^4) \\
& Z_E^{^*\!Q} (1 \otimes (f \rtimes x \rtimes g)_1 \ Z_A^{P(H^*)}(u_2 \otimes u_1
\otimes (k^3p^1_2 p^2)_1) \otimes (f \rtimes x \rtimes g)_2 \ Z_A^{P(H^*)}(\tilde{u}_2 \otimes\tilde{u}_1 \otimes
\tilde{v}))\\
&= \delta^{-2} \ \delta^4 \ p^4(h^1)\ p^3(1)\ k^1(h^2)\ k^2(1)\ (Sp^1_1 k^4)(h^3)\
(v S(k^3 p^1_2 p^2)_2)(h^4) \\
&Z_{tr_2^{(0, +)}}^{^*\!Q} ((f \rtimes x \rtimes g)_1 \ Z_A^{P(H^*)}(u_2 \otimes u_1
\otimes (k^3p^1_2 p^2)_1)) \ Z_{tr_2^{(0, +)}}^{^*\!Q} ((f \rtimes x \rtimes g)_2 \ Z_A^{P(H^*)}(\tilde{u}_2 \otimes\tilde{u}_1 \otimes
\tilde{v})) \\
&= \delta^4 \ p^4(h^1)\ p^3(1)\ k^1(h^2)\ k^2(1)\ (Sp^1_1 k^4)(h^3)\ (v S(k^3 p^1_2 p^2)_2)(h^4) \\
& Z_E^{^*\!Q}((f \rtimes x \rtimes g) \otimes Z_A^{P(H^*)}(u_2 \otimes u_1
\otimes (k^3p^1_2 p^2)_1) \otimes Z_A^{P(H^*)}(\tilde{u}_2 \otimes \tilde{u}_1 \otimes
\tilde{v})). \end{align*}
Finally, a routine computation in $P(H^*)$ shows that \begin{align*} &Z^{^*\!Q^4}_{E}(X^4_{f \rtimes x \rtimes g} \otimes Y \otimes Z)\\ &= \delta^4 \ p^4(h^1)\ p^3(1)\ k^1(h^2)\ k^2(1)\ (Sp^1_1 k^4)(h^3)\ (v S(k^3 p^1_2 p^2)_2)(h^4) \\
& Z_E^{^*\!Q}((f \rtimes x \rtimes g) \otimes Z_A^{P(H^*)}(u_2 \otimes u_1
\otimes (k^3p^1_2 p^2)_1) \otimes Z_A^{P(H^*)}(\tilde{u}_2 \otimes \tilde{u}_1 \otimes
\tilde{v})). \end{align*} Hence, the formula for $\Delta_{{^*\!Q^4_2}}(X^4_{f \rtimes x \rtimes g})$ is verified.
\end{proof}
We now proceed towards establishing the comultiplication formula for a general element of $^*\!Q^m_2$. Let us take a general element $X$
of $^*\!Q^m_2$ as given by \eqref{1} or \eqref{2} according as $m$ is odd or even.
The multiplication in $^*\!Q^m_2$ shows that $X$ can be expressed as
\begin{align*}
X = X^m_3 \ X^m_{k_1 Sp_3 Sk_3 \rtimes F^{-1}(k_2 p_2) \rtimes Sp_1} \ X^m_1
\end{align*}
with
$X^m_1 \otimes X^m_{k_1 Sp_3 Sk_3 \rtimes F^{-1}(k_2 p_2) \rtimes Sp_1} \otimes X^m_3$ ($\in (^*\!Q^m_2)^{\otimes 3}$) being given by
\begin{align*}
(f^1 \rtimes \cdots \rtimes x^{m-2} \rtimes
\underbrace{\epsilon \rtimes 1 \rtimes \cdots \rtimes \epsilon}_{\text{$m+1$ \mbox{terms}}}) \otimes
X^m_{k_1 Sp_3 Sk_3 \rtimes F^{-1}(k_2 p_2) \rtimes Sp_1}
\otimes
(\underbrace{\epsilon \rtimes \cdots \rtimes \epsilon}_{\text{$m+1$ \mbox{terms}}} \rtimes
x^{m+2} \rtimes \cdots \rtimes f^{2m-1})
\end{align*}
or
\begin{align*}
(x^1 \rtimes \cdots \rtimes x^{m-2} \rtimes \underbrace{\epsilon \rtimes 1 \rtimes \cdots \rtimes 1}_{\text{$m+1$ \mbox{terms}}}) \otimes
X^m_{k_1 Sp_3 Sk_3 \rtimes F^{-1}(k_2 p_2) \rtimes Sp_1} \otimes
(\underbrace{1 \rtimes \cdots \rtimes \epsilon}_{\text{$m+1$ \mbox{terms}}} \rtimes x^{m+2} \rtimes \cdots \rtimes x^{2m-1})
\end{align*}
according as $m$ is even or odd. It is then not hard to show using Equation \eqref{c} and Lemma \ref{antipode} that:
\begin{proposition}\label{wcomul}
$\Delta_{^*\!Q^m_2}(X) = \Delta_{^*\!Q^m_2}(1)\ (X^m_{(k_1 Sp_3 Sk_3 \rtimes F^{-1}(k_2 p_2) \rtimes Sp_1)_1} \ X^m_1 \ \otimes \
X^m_3 \ X^m_{(k_1 Sp_3 Sk_3 \rtimes F^{-1}(k_2 p_2) \rtimes Sp_1)_2})$.
\end{proposition}
Observe that the comultiplication formula involves $\Delta(1)$. Certain useful facts regarding $\Delta(1)$ are contained in the following lemma.
\begin{lemma}\label{del}\cite[Proposition 4.12, Corollary 4.13]{Das2004}
If $P = P^{N \subset M}$ denotes the subfactor planar algebra associated to the finite-index, reducible, depth two inclusion $N \subset M$
of $II_1$-factors, then $\Delta_{P_2}(1) = f^1 \otimes Sf^2$ where $f^1 \otimes f^2$ is the unique symmetric separability element of
$P_{1, 2}$ and
$\Delta_{P_2}$ denotes the comultiplication in the weak Hopf $C^*$-algebra $P_2$.
\end{lemma}
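For intuition, consider the simplest instance (this example is not from \cite{Das2004}; it is the standard one for matrix algebras): for $M_n(\mathbb{C})$ the unique symmetric separability element is $e = \frac{1}{n}\sum_{i,j} e_{ij} \otimes e_{ji}$. The following sketch, assuming Python with NumPy, verifies the two defining identities, $(a \otimes 1)e = e(1 \otimes a)$ for all $a$ and $\mu(e) = 1$ for the multiplication map $\mu$.
\begin{verbatim}
import numpy as np

# e = (1/n) sum_{i,j} e_ij (x) e_ji in M_n (x) M_n, realised as an
# n^2 x n^2 matrix via Kronecker products.
n = 3
E = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        Eij = np.zeros((n, n)); Eij[i, j] = 1.0
        E += np.kron(Eij, Eij.T) / n

rng = np.random.default_rng(1)
a = rng.standard_normal((n, n))
I = np.eye(n)
assert np.allclose(np.kron(a, I) @ E, E @ np.kron(I, a))  # a.e = e.a

# Apply the multiplication map to e via the 4-index tensor T[i,k,j,l]:
# mu sends a simple tensor x (x) y to xy, i.e. contracts k with j.
T = E.reshape(n, n, n, n)
mu_e = np.einsum('abby->ay', T)
assert np.allclose(mu_e, I)       # mu(e) = identity
\end{verbatim}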
Let $U \otimes V$ denote the unique symmetric separability element of $^*\!Q^m_{1, 2}$, and write $U \otimes V ( \in {^*\!Q^m_{1, 2}}^{\otimes 2})$
as
\begin{align*}
(\underbrace{\epsilon \rtimes 1 \rtimes \epsilon \rtimes \cdots \rtimes \epsilon}_{\text{$m+1$ \mbox{terms}}} \rtimes \ y^1 \rtimes g^2 \rtimes
\cdots \rtimes g^{m-2}) \otimes (\underbrace{\epsilon \rtimes 1 \rtimes \epsilon \rtimes \cdots \rtimes \epsilon}_{\text{$m+1$ \mbox{terms}}}
\ \rtimes \tilde{y}^1 \rtimes \tilde{g}^2 \rtimes
\cdots \rtimes \tilde{g}^{m-2})
\end{align*}
or
\begin{align*}
(\underbrace{1 \rtimes \epsilon \rtimes 1 \rtimes \cdots \rtimes \epsilon}_{\text{$m+1$ \mbox{terms}}} \rtimes \ y^1 \rtimes g^2 \rtimes
\cdots \rtimes y^{m-2}) \otimes (\underbrace{1 \rtimes \epsilon \rtimes 1 \rtimes \cdots \rtimes \epsilon}_{\text{$m+1$ \mbox{terms}}}
\ \rtimes \tilde{y}^1 \rtimes \tilde{g}^2 \rtimes
\cdots \rtimes \tilde{y}^{m-2})
\end{align*}
according as $m$ is even or odd.
It then follows from Lemma \ref{del} and Lemma \ref{wanti} that $\Delta_{^*\!Q^m_2}(1) ( = U \otimes SV)$ equals
\begin{align*}
(\underbrace{\epsilon \rtimes 1 \rtimes \epsilon \rtimes \cdots \rtimes \epsilon}_{\text{$m+1$ \mbox{terms}}} \rtimes \ y^1
\rtimes g^2 \rtimes
\cdots \rtimes g^{m-2}) \otimes (S\tilde{g}^{m-2} \rtimes \cdots \rtimes S\tilde{g}^2 \rtimes S\tilde{y}^1 \rtimes
\underbrace{\epsilon \rtimes 1 \rtimes \epsilon \rtimes \cdots \rtimes \epsilon}_{\text{$m+1$ \mbox{terms}}})
\end{align*} or
\begin{align*}
(\underbrace{1 \rtimes \epsilon \rtimes 1 \rtimes \cdots \rtimes \epsilon}_{\text{$m+1$ \mbox{terms}}} \rtimes \ y^1
\rtimes g^2 \rtimes
\cdots \rtimes y^{m-2}) \otimes (S\tilde{y}^{m-2} \rtimes \cdots \rtimes S\tilde{g}^2 \rtimes S\tilde{y}^1 \rtimes
\underbrace{\epsilon \rtimes 1 \rtimes \epsilon \rtimes \cdots \rtimes 1}_{\text{$m+1$ \mbox{terms}}}),
\end{align*}
according as $m$ is even or odd.
Hence, it follows from Proposition \ref{wcomul} and Lemma \ref{del} that
\begin{align*}
\Delta_{^*\!Q^m_2}(X) = U \ X^m_{(k_1 Sp_3 Sk_3 \rtimes F^{-1}(k_2 p_2) \rtimes Sp_1)_1} \ X^m_1 \ \ \otimes \
S(V) \ X^m_3 \ X^m_{(k_1 Sp_3 Sk_3 \rtimes F^{-1}(k_2 p_2) \rtimes Sp_1)_2}.
\end{align*}
Using the comultiplication formula in $^*\!Q^2_2$ as given by Lemma \ref{sstructure}, we see that
\begin{align*}
\Delta_{^*\!Q^m_2}(X) = \ \delta \ &U \ X^m_{((k S\phi_2)_1 \ S(\phi_1p_2S\phi_3)_3 \ S(k S\phi_2)_3 \ \rtimes \ F^{-1}((k S\phi_2)_2 \
(\phi_1p_2S\phi_3)_2) \ \rtimes \ S(\phi_1p_2S\phi_3)_1)} \ X^m_1 \ \otimes\\
&S(V) \ X^m_3 \ X^m_{((\phi_4)_1 \ S(p_1)_3 S(\phi_4)_3 \ \rtimes \ F^{-1}((\phi_4)_2 (p_1)_2) \ \rtimes \ S(p_1)_1)}.
\end{align*}
Assume now that $m$ is even.
A tedious computation using the multiplication rule in $^*\!Q^m_2$ shows that the formula for $\Delta_{^*\!Q^m_2}(X)$ is given by:
\begin{align}\label{eqq}
\Delta_{^*\!Q^m_2}(X) &= \delta \ (\phi_4 Sp_2 S\phi_6)(S\tilde{y}^1_1) \ (f^1 \rtimes \cdots \rtimes x^{m-2} \rtimes
(k S\phi_2)_1 \ S(\phi_1p_3S\phi_3)_3
\ S(k S\phi_2)_3 \ \rtimes \\
& F^{-1}((k S\phi_2)_2 \ (\phi_1p_3S\phi_3)_2) \ \rtimes \ S(\phi_1p_3S\phi_3)_1
\rtimes y^1
\rtimes g^2 \rtimes \cdots \rtimes g^{m-2}) \ \otimes \nonumber \\
& \ (S\tilde{g}^{m-2} \rtimes \cdots \rtimes S\tilde{g}^2 \rtimes S\tilde{y}^1_2
\rtimes
(\phi_5)_1 \ S(p_1)_3 S(\phi_5)_3 \ \rtimes \ F^{-1}((\phi_5)_2 (p_1)_2) \ \rtimes \ S(p_1)_1 \ \rtimes \nonumber \\
& x^{m+2} \rtimes f^{m+3} \rtimes \cdots \rtimes f^{2m-1}) \nonumber.
\end{align} Similarly, when $m$ is odd, one can show that the formula for $\Delta_{^*\!Q^m_2}(X)$ is given by: \begin{align}\label{eqqq}
\Delta_{^*\!Q^m_2}(X) &= \delta \ (\phi_4 Sp_2 S\phi_6)(S\tilde{y}^1_1) \ (x^1 \rtimes \cdots \rtimes x^{m-2} \rtimes
(k S\phi_2)_1 \ S(\phi_1p_3S\phi_3)_3
\ S(k S\phi_2)_3 \ \rtimes \\
& F^{-1}((k S\phi_2)_2 \ (\phi_1p_3S\phi_3)_2) \ \rtimes \ S(\phi_1p_3S\phi_3)_1
\rtimes y^1 \rtimes \cdots \rtimes y^{m-2}) \ \otimes \nonumber \\
& \ (S \tilde{y}^{m-2} \rtimes \cdots \rtimes S\tilde{y}^1_2
\rtimes
(\phi_5)_1 \ S(p_1)_3 S(\phi_5)_3 \ \rtimes \ F^{-1}((\phi_5)_2 (p_1)_2) \ \rtimes \ S(p_1)_1 \ \rtimes \nonumber \\
& x^{m+2} \rtimes f^{m+3} \rtimes \cdots \rtimes x^{2m-1}) \nonumber.
\end{align}
For each integer $m > 2$, let $K_m$ denote the vector space $A(H)_{m-2}^{op} \otimes D(H)^{*op} \otimes A(H)_{m-2}^{op}$ or $A(H^*)_{m-2}^{op} \otimes D(H)^{*op} \otimes A(H)_{m-2}^{op}$ according as $m$ is odd or even. Consider the linear isomorphism $\psi$ of $K_m$ onto $^*\!Q^m_2$ given by
\begin{align*}
a \otimes (g \otimes f) \otimes b \mapsto a \rtimes f_1Sg_3Sf_3 \rtimes F^{-1}(f_2 g_2) \rtimes Sg_1 \rtimes b.
\end{align*}
We make $K_m$ into a
weak Hopf $C^*$-algebra by transporting the structure maps on $^*\!Q^m_2$ to $K_m$ using this linear isomorphism. Thus, by
construction, $K_m$ is isomorphic to $^*\!Q^m_2$ as weak Hopf $C^*$-algebras.
The next theorem, which is the main result of this section, explicitly describes the structure maps of $K_m$.
\begin{theorem}\label{WHA}
For each $m > 2$, $K_m$ is a weak Hopf $C^*$-algebra with the structure maps given by the following formulae. \begin{align*}
&\mbox{Multiplication:} \ \ (a \otimes (g \otimes f) \otimes b) ( \tilde{a} \otimes (\tilde{g} \otimes \tilde{f}) \otimes \tilde{b}) =
(\tilde{f}_1 S \tilde{g}_2 S \tilde{f}_3.a) \tilde{a} \otimes (g_2 \otimes f)(\tilde{g}_1 \otimes \tilde{f}_2) \otimes
b \rho_{g_1}(\tilde{b}),\\
&\mbox{Comultiplication:} \ \ \Delta(a \otimes (g \otimes f) \otimes b) = \delta (a \otimes (\phi_1 g_3 S\phi_3 \otimes fS\phi_2) \otimes u)
\otimes ((\phi_4 Sg_2 S\phi_6). v^{\prime} \otimes (g_1 \otimes \phi_5) \otimes b), \\
& \mbox{Counit:} \ \varepsilon(a \otimes (g \otimes f) \otimes b) =
\delta^{m-2} f(h) Z_{F^{(m-1)}}^{P(H^m)}(a \rtimes Sg \rtimes b),\\
&\mbox{Antipode:} \ \ S(a \otimes (g \otimes f) \otimes b) = b^{\prime} \otimes S_{D(H)^{*op}}(g \otimes f) \otimes a^{\prime},
\end{align*}
with $a \otimes (g \otimes f) \otimes b$, $\tilde{a} \otimes (\tilde{g} \otimes \tilde{f}) \otimes \tilde{b}$ being
elements of $K_m$, where
\begin{itemize}
\item for any $k \in H^*$ and $\underbrace{(\cdots \rtimes f \rtimes x)}_{\text{$m-2$ factors}}$ in $A(H)^{op}_{m-2} \ \mbox{or} \ A(H^*)^{op}_{m-2}$ according as $m$ is odd or
even, $k. (\cdots \rtimes f \rtimes x) := k(x_2)(\cdots \rtimes f \rtimes x_1)$,
\item $\rho$ denotes the algebra action of $H^*$ on $A(H)^{op}_{m-2}$ defined for $k \in H^*$ and $x \rtimes p \rtimes \cdots \in
A(H)^{op}_{m-2}$
by $\rho_k(x \rtimes p \rtimes \cdots) =
k(Sx_1) x_2 \rtimes p \rtimes \cdots$,
\item $u \otimes v$ denotes the unique symmetric separability element of $A(H)_{m-2}$,
\item for any positive integer $k$, $F^{(k)}$ denotes the $k$-cabling of the tangle $F$ (see Figure \ref{fig:pic47}) and
\item for any $X$ in $H_{[i, j]}$, the symbol $X^{\prime}$ denotes the element as defined in $\S 1.1$ preceding Lemma \ref{anti1}.
\end{itemize}
\end{theorem}
\begin{proof}
We first verify the formula for antipode. Assume without loss of generality that $m$ is even. Let $X \in K_m$ be given by
\begin{align*}
(f^1 \rtimes x^2 \rtimes \cdots \rtimes x^{m-2}) \otimes (p \otimes k) \otimes (x^{m+2} \rtimes \cdots \rtimes f^{2m-1})
\end{align*}
so that
\begin{align*}
\psi(X) = f^1 \rtimes x^2 \rtimes \cdots \rtimes x^{m-2} \rtimes k_1 Sp_3 Sk_3 \rtimes
F^{-1}(k_2 p_2) \rtimes Sp_1 \rtimes x^{m+2} \rtimes \cdots \rtimes f^{2m-1}.
\end{align*}
By Lemma \ref{wanti},
\begin{align*}
S(\psi(X)) &= Sf^{2m-1} \rtimes \cdots \rtimes Sx^{m+2} \rtimes p_1 \rtimes F^{-1}S(k_2 p_2) \rtimes S(k_1 Sp_3 Sk_3) \rtimes
Sx^{m-2} \rtimes \cdots \rtimes Sx^2 \rtimes Sf^1\\
&= (x^{m+2} \rtimes \cdots \rtimes f^{2m-1})^{\prime} \rtimes p_1 \rtimes F^{-1}S(k_2 p_2) \rtimes S(k_1 Sp_3 Sk_3) \rtimes
(f^1 \rtimes x^2 \rtimes \cdots \rtimes x^{m-2})^{\prime}.
\end{align*} Finally, it follows from the formula for antipode in $^*\!Q_2$ as given by Lemma \ref{sstructure} and Remark \ref{Dr} that \begin{align*}
\psi^{-1}(S(\psi(X))) = (x^{m+2} \rtimes \cdots \rtimes f^{2m-1})^{\prime} \otimes S_{D(H)^{* op}}(p \otimes k) \otimes
(f^1 \rtimes x^2 \rtimes \cdots \rtimes x^{m-2})^{\prime}. \end{align*} Thus the formula for antipode is verified. In a similar way, using the comultiplication formula in $^*\!Q^m_2$ as given by \eqref{eqq} or \eqref{eqqq} according as $m$ is even or odd, the comultiplication formula in $K_m$ can easily be verified. The verification of the multiplication
and counit formulas in $K_m$ involves tedious computations, which we leave to the reader.
\end{proof}
\begin{remark}
It follows immediately from the formula for the antipode given in Theorem \ref{WHA} that each $K_m$ ($m > 2$) has involutive antipode, i.e.,
the square of the antipode is the identity, and consequently each $K_m$ is a weak Kac algebra.
\end{remark}
\begin{remark}
The chief importance of Theorem \ref{WHA} lies in the fact that it constructs a family of weak Kac algebras out of a given finite-dimensional
Kac algebra.
\end{remark}
\end{document}
\begin{document}
\begin{center} {\large {\bf Global Dynamics for Symmetric Planar Maps}}\\ \mbox{} \\ \begin{tabular}{ccc} {\bf Bego\~na Alarc\'on$^{\mbox{a,c}}$} & {\bf Sofia B.\ S.\ D.\ Castro$^{\mbox{a,b}}$} & {\bf Isabel S.\ Labouriau$^{\mbox{a}}$} \end{tabular} \end{center}
\bigbreak \noindent {\small $^{\mbox{a}}$} Centro de Matem\'atica da Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto, Portugal.
\noindent {\small $^{\mbox{b}}$} Faculdade de Economia do Porto, Rua Dr. Roberto Frias, 4200-464 Porto, Portugal.
\noindent {\small $^{\mbox{c}}$} permanent address: Department of Mathematics, University of Oviedo; Calvo Sotelo s/n; 33007 Oviedo; Spain.
\bigbreak
\begin{abstract}
We consider sufficient conditions to determine the global dynamics for equivariant maps of the plane with a unique fixed point which is also hyperbolic. When the map is equivariant under the action of a compact Lie group, it is possible to describe the local dynamics.
In particular, if the group contains a reflection, there is a line invariant by the map. This allows us to use results based on the theory of free homeomorphisms to describe the global dynamical behaviour. We briefly discuss the case when reflections are absent, for which global dynamics may not follow from local dynamics near the unique fixed point.
\end{abstract}
\noindent{\bf MSC 2010:} 37B99, 37C80, 37C70.\\ {\bf Keywords:} planar embedding, symmetry, local and global dynamics.
\section{Introduction}
Dynamics of planar maps has drawn the attention of many authors. See, for instance, references such as \cite{Alarcon}, \cite{Brown}, \cite{Cima-Manosa}, \cite{franks} or \cite{LeRoux}. Problems addressed include the theory of free homeomorphisms and trivial dynamics, the search for sufficient conditions for global results using topological tools, and the Discrete Markus-Yamabe Problem. The latter conjectures global stability from local stability of the fixed point under some additional condition on the Jacobian matrix\footnote{The Discrete Markus-Yamabe Problem also derives its fame from its relation to the Jacobian Conjecture \cite{vanDenEssen}.}. Results in \cite{Alarcon} guarantee the global stability of the unique fixed point resorting to local conditions and the existence of an invariant embedded curve joining the fixed point to infinity.
To the best of our knowledge, this problem has been addressed exclusively in a non-symmetric context. However, the existence of the invariant curve in \cite{Alarcon} made it seem natural to approach this problem in a symmetric setting,
where invariant spaces for the dynamics are a key feature. In this context, we address the global dynamics of planar diffeomorphisms having a unique fixed point which is hyperbolic. We restrict our attention to the cases where the fixed point is either an attractor or a repellor. The case of a saddle point does not rely so much on symmetry. It will therefore be addressed elsewhere.
The issue of uniqueness of a fixed point has been addressed by Alarc\'on {\em et al} \cite{Alarcon-orbitas-periodicas} who gave simple conditions for planar maps under which the origin is the unique fixed point.
The presence of symmetry constrains the admissible local dynamics near the fixed point. We extend, whenever possible, the local dynamics to the whole plane using the properties determined by the symmetry. The reader may see that, in the case of $O(2)$, $\mathbb{Z}_2(\langle\kappa\rangle)$ and $D_n$,
local dynamics determines global dynamics. However, $SO(2)$ and $\mathbb{Z}_n$ symmetry do not provide any extra information on how to go from local to global dynamics. Actually, for $\mathbb{Z}_n$, we have constructed examples where more than one configuration can occur. For instance, in the case of a local attractor, the global dynamics may exhibit either several periodic points \cite{SofisbeSzlenk} or a globally attracting set with special properties
\cite{AlarconDenjoy}. For $SO(2)$ the dynamics is determined by a one-dimensional map. These situations provide a comprehensive description of the possible global dynamics.
The paper is organised as follows: in the next section we transcribe preliminary results concerning dynamics of planar maps and equivariance. These are organised in two subsections, the first on Topological Dynamics and the second on Equivariance. The reader familiar with the subject may skip them without loss. Section~\ref{secLocalGlobal} shows how local results can be extended to global results.
We thus establish all possible dynamics in the presence of symmetry. This is discussed at the end in Section~\ref{Discussion}. Note that the aforementioned groups are the only compact subgroups of $O(2)$ acting on the plane. For the reader's convenience the results obtained are summarised in the Equivariant Table of Appendix~\ref{EquivTable}.
\section{Preliminaries} \label{secPre}
This section contains definitions and known results about topological dynamics and equivariant theory. These are grouped in two separate subsections, each elementary for readers in the corresponding field.
\subsection{Topological Dynamics}
We consider planar topological embeddings, that is, continuous and injective maps defined in $\mathbb{R}^ 2$. The set of topological embeddings of the plane is denoted by $\textrm{Emb}(\mathbb{R}^2)$.
Recall that for $f\in \textrm{Emb}(\mathbb{R}^2)$ the equality $f(\mathbb{R}^2)=\mathbb{R}^2$ may not hold. Since every map $f\in \textrm{Emb}(\mathbb{R}^2)$ is open (see \cite{libroembeddings}), we will say that $f$ is a homeomorphism if $f$ is a topological embedding defined onto $\mathbb{R}^2$. The set of homeomorphisms of the plane will be denoted by $\textrm{Hom}(\mathbb{R}^2)$.
When $\mathcal{H}$ is one of these sets we denote by $\mathcal{H}^ {+}$ (and $\mathcal{H}^ {-}$) the subset of orientation preserving (reversing) elements of $\mathcal{H}$.
Given a continuous map $f: \mathbb{R}^2 \to \mathbb{R}^2$, we say that $p$ is a \emph{non-wandering point} of $f$ if for every neighbourhood $U$ of $p$ there exists an integer $n>0$ and a point $q\in U$ such that $f^n(q)\in U$. We denote the set of non-wandering points by $\Omega(f)$. We have $$ \textrm{Fix}(f)\subset \textrm{Per}(f) \subset \Omega(f), $$ where $\textrm{Fix}(f)$ is the set of fixed points of $f$, and $\textrm{Per}(f)$ is the set of periodic points of $f$.
Let $\omega(p)$ be the set of points $q$ for which there is a sequence $n_j\to+\infty$ such that $f^{n_j}(p)\to q$. If $f\in \textrm{Hom}(\mathbb{R}^2)$ then $\alpha(p)$ denotes the $\omega$-limit set of $p$ under $f^{-1}$.
Let $f\in \textrm{Emb}(\mathbb{R}^2)$ and $p\in \mathbb{R}^2$. We say that $\omega(p)=\infty$ if $\norm{f^n(p)}\to \infty$ as $n$ goes to $ \infty$. Analogously, if $f\in \textrm{Hom}(\mathbb{R}^2)$, we say that $\alpha(p)=\infty$ if $\norm{f^{-n}(p)}\to \infty$ as $n$ goes to $ \infty$.
We say that $0\in \textrm{Fix}(f)$ is a \emph{local attractor} if its basin of attraction ${\mathcal U}=\{p\in \mathbb{R}^ 2 : \omega(p)=\{0\}\}$ contains an open neighbourhood of $0$ in $\mathbb{R}^2$ and that $0$ is a \emph{global attractor} if ${\mathcal U}=\mathbb{R}^2$. The origin is a \emph{stable fixed point} if for every neighbourhood $U$ of $0$ there exists another neighbourhood $V$ of $0$ such that $f(V)\subset V$ and $f(V)\subset U$. Therefore, the origin is an \emph{asymptotically local (global) attractor} or a \emph{(globally) asymptotically stable fixed point} if it is a stable local (global) attractor. See \cite{Bhatia} for examples.
We say that $0\in \textrm{Fix}(f)$ is a \emph{local repellor} if there exists a neighbourhood $V$ of $0$ such that $\omega(p)\notin V$ for all $0\neq p\in \mathbb{R}^ 2$ and a \emph{global repellor} if this holds for $V=\mathbb{R}^2$.
We say that the origin is an \emph{asymptotically global repellor} if it is a global repellor
and, moreover, if for any neighbourhood $U$ of $0$ there exists another neighbourhood $V$ of $0$, such that,
$V\subset f(V)$ and $V\subset f(U)$.
When the origin is a fixed point of a $C^ 1-$map of the plane we say the origin is a \emph{local saddle} if the two eigenvalues $\alpha, \beta$ of $Df(0)$ are both real and satisfy $0<\abs{\alpha}<1<\abs{\beta}$.
We also need the following theorem of Murthy \cite{Murthy}, to be applied to parts of the domain of our maps with no fixed points:
\begin{theorem}[Murthy \cite{Murthy}] \label{teoMurthy} Let $f\in \textrm{Emb}^ {+}(\mathbb{R}^ 2)$. If $\textrm{Fix}(f)=\emptyset$, then $\Omega(f)=\emptyset$. \end{theorem}
We say that $f\in \textrm{Emb}(\mathbb{R}^2)$ has \emph{trivial dynamics} if, for all $p\in \mathbb{R}^ 2$, either $\omega(p) \subset \textrm{Fix}(f)$ or $\omega(p)=\infty$. Moreover, we say that a planar homeomorphism has trivial dynamics if, for all $p\in \mathbb{R}^ 2$, both $\omega(p), \alpha(p) \subset \textrm{Fix}(f)\cup\{\infty\}$.
Let $f: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ be a continuous map. Let $\gamma : [0,\infty) \to \mathbb{R}^2$ be a topological embedding of $[0,\infty) \,. \;$ As usual, we identify $\gamma$ with $\gamma\,([0,\infty))\,.$ We will say that $\gamma$ is an \emph{ $f-$invariant ray} if $\, \gamma(0)=(0,0)\,, \,$ $\,f(\gamma)\subset
\gamma \,, \,$ and $\lim_{t\to\infty}|\gamma(t)|=\infty$, where
$|\cdot| \,$ denotes the usual Euclidean norm.
\begin{proposition}[Alarc\'on {\em et al.} \cite{Alarcon}] \label{lemrayo} Let $f\in \textrm{Emb}^ {+}(\mathbb{R}^2)$ be such that $\textrm{Fix}(f)=\{0\}$. If there exists an $f-$invariant ray $\gamma$, then $f$ has trivial dynamics. \end{proposition}
The relation between the stability of the origin and the admissible forms of the Jacobian at that point is well known, but it is not usually clear that it holds for continuous maps that are not necessarily $C^1$. The precise hypotheses are stated in the following result, and its proof is given in Appendix~\ref{ApendiceEstabilidade}.
\begin{proposition}\label{Liapunov} Let $U \subset \mathbb{R}^N$ be an open set containing the origin and let $f: \; \mathbb{R}^N \rightarrow \mathbb{R}^N$ be a continuous map, differentiable at the origin and such that $f(0)=0$. If all the eigenvalues of $Df(0)$ have norm strictly smaller than one, then the origin is locally asymptotically stable. If all the eigenvalues of $Df(0)$ have norm strictly greater than one, then the origin is a local repellor. \end{proposition}
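A minimal numerical illustration of Proposition \ref{Liapunov} (not part of the proof; the map and constants below are our own choice, and Python with NumPy is assumed): $f(x, y) = (x/2 + y^2,\ -y/3 + x^2)$ fixes the origin, $Df(0)$ has eigenvalues $1/2$ and $-1/3$, and a small initial condition is attracted to the origin.
\begin{verbatim}
import numpy as np

# Df(0) = diag(1/2, -1/3) has spectral radius < 1, so the origin
# should be locally asymptotically stable; we iterate to confirm.
def f(p):
    x, y = p
    return np.array([0.5 * x + y**2, -y / 3.0 + x**2])

p = np.array([0.1, -0.2])
for _ in range(60):
    p = f(p)
assert np.linalg.norm(p) < 1e-12   # orbit has converged to the origin
\end{verbatim}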
\subsection{Equivariant Planar Maps}
Let $\Gamma$ be a compact Lie group acting on $\mathbb{R}^2$. The following definitions and results are taken from Golubitsky {\em et al.} \cite{golu2}, especially Chapter~XII, to which we refer the reader interested in further detail.
Given a map $f:\mathbb{R}^2\longrightarrow\mathbb{R}^2$, we say that $\gamma \in \Gamma$ is a \emph{symmetry} of $f$ if $f(\gamma x)=\gamma f(x)$. We define the \emph{symmetry group} of $f$ as the smallest closed subgroup of $GL(2)$ containing all the symmetries of $f$. It will be denoted by $\Gamma_f$.
We say that $f:\mathbb{R}^2\to \mathbb{R}^2$ is \emph{$\Gamma-$equivariant} or that $f$ {\em commutes} with $\Gamma$ if $$ f(\gamma x)=\gamma f(x) \quad \text{ for all }\quad \gamma \in \Gamma. $$
It follows that every map $f:\mathbb{R}^2\to \mathbb{R}^2$ is equivariant under the action of its symmetry group, that is, $f$ is $\Gamma_f-$equivariant. We are interested in maps having a non-trivial symmetry group, $\Gamma_f\subset GL(2)$.
Let $\Sigma$ be a subgroup of $\Gamma$. The {\em fixed-point subspace} of $\Sigma$ is $$ \textrm{Fix} (\Sigma) =\{p\in \mathbb{R}^2: \sigma p=p \; \text{ for all } \; \sigma \in \Sigma\}. $$ If $\Sigma$ is generated by a single element $\sigma \in \Gamma$, we write \emph{$\textrm{Fix}\langle\sigma\rangle$} instead of $\textrm{Fix} (\Sigma)$.
We note that, for each subgroup $\Sigma$ of $\Gamma$, $\textrm{Fix} (\Sigma)$ is invariant by the dynamics of a $\Gamma-$equivariant map (\cite{golu2}, XIII, Lemma 2.1).
For a group $\Gamma$ acting on $\mathbb{R}^2$ a non trivial fixed point subspace arises when $\Gamma$ contains a reflection. By a linear change of coordinates we may take the reflection to be the {\em flip}
$$
\kappa . (x,y) = (x, -y) .
$$
\section{Symmetric Global Dynamics} \label{secLocalGlobal}
In this section we study the global dynamics of a symmetric discrete dynamical system with a unique fixed point, which is moreover hyperbolic. We establish conditions for the hyperbolic local dynamics to become global dynamics. For most symmetry groups, the local dynamics is restricted to either an attractor or a repellor. Saddle points only occur for very small symmetry groups and therefore the study of these points does not depend so much on symmetry issues and requires additional tools. As pointed out before, this will be the object of a separate article.
We address the dynamics when the groups involved possess an element acting as a reflection (flip). This is the main result of this article. In this case, we make use of the fact that there exists an invariant ray for the dynamics (either of $f$ itself or of $f^2$) from which results follow.
We begin with two convenient results. Although the next lemma is only required to hold for planar maps, we present it for $\mathbb{R}^N$, as the proof is the same.
Let $p\in \mathbb{R}^N$ and $f:\mathbb{R}^N \to \mathbb{R}^N$ be a continuous map. We denote by $\omega_2(p)$ the $\omega-$limit of $p$ with respect to $f^2$, given by
$$
\omega_2(p)=\{q\in \mathbb{R}^N : \lim \;f^{2n_k}(p)=q, \; \text{ for some sequence} \; n_k\to \infty \} .
$$
\begin{lemma} \label{lemOmega2} Let $f\in \textrm{Emb}(\mathbb{R}^N)$ be such that $f(0)=0$.
For $p\in \mathbb{R}^N$,
\begin{enumerate} \item[a)] if $\omega_2(p)=\{0\}$, then $\omega(p)=\{0\}$; \item[ b)] if $\omega_2(p)=\infty$, then $\omega(p)=\infty$. \end{enumerate}
\end{lemma}
\begin{proof} Let $p\in \mathbb{R}^N$.
$a)$ Suppose $\omega_2(p)=\{0\}$ and suppose also that $\omega(p)\neq\{0\}$. Then there exists $r\neq 0$ such that $r\in\omega(p)$. In that case $r=\lim f^{n_k}(p)$ and, since $\omega_2(p)=\{0\}$, there exists $k_0\in \mathbb{N}$ such that $n_k$ is odd for all $k>k_0$. Then $f(r)=\lim f^{n_k+1}(p)$ with $n_k +1$ even. So $f(r)\in \omega_{2}(p)$, hence $f(r)=0$ with $r\neq 0$, which is impossible because $f$ is an injective map such that $f(0)=0$.
$b)$ Suppose now $\omega_2(p)= \infty$ and also that $\omega(p)\neq\infty$. Then there exists $r\in \mathbb{R}^N$ such that $r\in\omega(p)$. In that case $r=\lim f^{n_k}(p)$ and, since $\omega_2(p)=\infty$, there exists $k_0\in \mathbb{N}$ such that $n_k$ is odd for all $k>k_0$. Then $f(r)=\lim f^{n_k+1}(p)$ with $n_k +1$ even. So $f(r)\in \omega_{2}(p)$, which is impossible since $\omega_2(p)=\infty$. \end{proof}
\begin{lemma} \label{lemgdim1} Let $g:[0,1) \to [0,1)$ be a continuous and injective map such that $Fix(g)=\{0\}$. The following holds: \begin{itemize} \item[$a)$] If $0$ is a local attractor for $g$, then $0$ is a global attractor for $g$. \item[$b)$] If $0$ is a local repellor for $g$, then $0$ is a global repellor for $g$. \end{itemize} \end{lemma}
\begin{proof} Assume $0$ is a local attractor. Since $g$ is continuous and injective, it is monotone; as $g(0)=0$ and $g$ takes values in $[0,1)$, $g$ is increasing on $[0,1)$. Moreover, $\textrm{Fix}(g)=\{0\}$ so the graph of $g$ does not cross the diagonal of the first quadrant and one of the following holds:
\begin{itemize} \item[i)] $g(x)>x$, for all $x\in (0,1)$; \item[ii)] $g(x)<x$, for all $x\in (0,1)$. \end{itemize}
Only $ii)$ can happen when $0$ is a local attractor. Then $g(x)<x$ for all $x\in (0,1)$; since $g$ is increasing, every orbit in $(0,1)$ decreases monotonically to the unique fixed point, so $0$ is a global attractor for $g$.
The proof of b) follows in a similar fashion. \end{proof}
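A concrete instance of the dichotomy in Lemma \ref{lemgdim1} (the map below is our own illustrative choice; Python with NumPy is assumed): $g(x) = x/2 + x^2/4$ is continuous and injective on $[0,1)$, maps $[0,1)$ into itself, has $\textrm{Fix}(g)=\{0\}$ and $g'(0)=1/2$, so case $ii)$ holds and every orbit decreases to $0$.
\begin{verbatim}
import numpy as np

# g(x) = x/2 + x^2/4 on [0,1): Fix(g) = {0} and 0 is a local
# attractor, so g(x) < x on (0,1) and 0 attracts globally.
g = lambda x: 0.5 * x + 0.25 * x**2

xs = np.linspace(0.01, 0.99, 20)
assert np.all(g(xs) < xs)          # case ii) of the proof
x = 0.99
for _ in range(200):
    x = g(x)
assert x < 1e-10                   # every orbit tends to 0
\end{verbatim}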
We now proceed to use the existence of an invariant ray to obtain information concerning the dynamics.
\begin{lemma} \label{lemFixdim1} Let $f:\mathbb{R}^2\to \mathbb{R}^2$ be a map with symmetry group $\Gamma$. If $\kappa \in \Gamma$, then $\textrm{Fix} \langle\kappa\rangle$ is an $f-$invariant line. Moreover, $\textrm{Fix} \langle\kappa\rangle$ contains an $f^2-$invariant ray. \end{lemma}
\begin{proof} By Lemma XIII, $2.1$ and Theorem XIII, $2.3$ in \cite{golu2}, $\textrm{Fix} \langle\kappa\rangle$ is a vector subspace of dimension one such that $f(\textrm{Fix} \langle\kappa\rangle) \subseteq \textrm{Fix} \langle\kappa\rangle$. Let $\gamma$ denote one of the two half-lines in $\textrm{Fix} \langle\kappa\rangle$, then $\gamma$ is an $f^2-$invariant ray. \end{proof}
The next proposition describes the admissible $\omega$-limit set of a point and is essential for the main results.
\begin{proposition} \label{proprayoS} Let $f\in \textrm{Emb}(\mathbb{R}^2)$ have symmetry group $\Gamma$ with $\kappa \in \Gamma$, such that $\textrm{Fix}(f)=\{0\}$. Suppose one of the following holds:
\begin{itemize} \item[$a)$] $f\in \textrm{Emb}^ {+}(\mathbb{R}^2)$ and $f$ does not interchange connected components of $\mathbb{R}^2 \setminus \textrm{Fix} \langle\kappa\rangle$. \item[$b)$] $\textrm{Fix}(f^ 2)=\{0\}$. \end{itemize} Then for each $p\in \mathbb{R}^2$ either $\omega(p)=\{0\}$ or $\omega(p)=\infty$. \end{proposition}
\begin{proof} Suppose $a)$ holds. By Lemma \ref{lemFixdim1}, $\mathbb{R}^2\setminus \textrm{Fix} \langle\kappa\rangle$ is the disjoint union of two open subsets $U_1, U_2\subset \mathbb{R}^2$ homeomorphic to $\mathbb{R}^2$. Moreover,
$f|_{U_{i}}: U_{i} \to U_{i}$ for $i=1,2$ is an orientation preserving embedding without fixed points. Then by Theorem \ref{teoMurthy}, $\Omega(f|_{U_{i}})=\emptyset$ for $i=1,2$ and it then follows that $\; \Omega(f) \subseteq \textrm{Fix} \langle\kappa\rangle$. \par
The subspace $\textrm{Fix} \langle\kappa\rangle\setminus\{0\}$ is the disjoint union of two subsets homeomorphic to $(0,1)$. Even if $f$ interchanges these components, $f^2$ does not. Then applying Lemma \ref{lemgdim1} to the restriction of $f^2$ to each component, it follows that for $p\in \textrm{Fix} \langle\kappa\rangle$ either $\omega_2(p)=\{0\}$ or $\omega_2(p)=\infty$. By Lemma \ref{lemOmega2}, it follows that for $p\in \textrm{Fix} \langle\kappa\rangle$ either $\omega(p)=\{0\}$ or $\omega(p)=\infty$.
Let $p\in \mathbb{R}^2 \setminus \textrm{Fix} \langle\kappa\rangle $. Since $\; \Omega(f) \subseteq \textrm{Fix} \langle\kappa\rangle$, we have that $\omega(p)\subseteq \textrm{Fix} \langle\kappa\rangle$. We show next that $\omega(p)$ contains no point of $\textrm{Fix} \langle\kappa\rangle\setminus\{0\}$.
Suppose there is an $r\in\omega(p)\cap ({\textrm{Fix} \langle\kappa\rangle}\setminus\{0\})$, and choose an open neighbourhood $K$ of $r$ such that $0\notin K$, $\textrm{Fix} \langle\kappa\rangle \cap K$ is an embedded segment, and $K\setminus\textrm{Fix} \langle\kappa\rangle$ is the union of two disjoint disks $W_1$ and $W_2$ homeomorphic to $\mathbb{R}^2$. Suppose without loss of generality that $p\in U_1$; then the positive orbit of $p$ accumulates at $r$ and meets $W_1$ infinitely many times. Since $r\in \Omega(f)\setminus \{0, \infty\}$ is not a fixed point, taking $K$ (and hence $W_1$) sufficiently small, there exist an open disk $V\subset W_1$ and a positive integer $n$, with $n\ge 2$, such that for some $s\in V$ we have $f^n(s)\in V$, while $V\cap f^{\ell}(V)=\emptyset$ for $\ell=1,2,\dots, n-1$. Then, by Theorem 3.3 in \cite{Murthy}, $f$ has a fixed point in $V$, which contradicts the uniqueness of the fixed point. So the orbit of $p$ does not accumulate at $\textrm{Fix} \langle\kappa\rangle$ and hence either $\omega(p)=\{0\}$ or $\omega(p)=\infty$.
Suppose $b)$ holds. By Lemma \ref{lemFixdim1} there exists an $f^2-$invariant ray $ \gamma \subset \textrm{Fix} \langle\kappa\rangle$. Moreover, $f^2\in \textrm{Emb}^{+}(\mathbb{R}^2)$ and $\textrm{Fix}(f^2)=\{0\}$, so by Proposition~\ref{lemrayo} we have that for each $p\in \mathbb{R}^2$ either $\omega_2(p)=\{0\}$ or $\omega_2(p)=\infty$ and therefore, by Lemma \ref{lemOmega2}, either $\omega(p)=\{0\}$ or $\omega(p)=\infty$. \end{proof}
The next example shows that assumption $b)$ in Proposition \ref{proprayoS} is necessary in the case where $f$ interchanges connected components of $\mathbb{R}^2 \setminus \textrm{Fix} \langle\kappa\rangle$.
\begin{example} Consider the map $f: \mathbb{R}^2 \to \mathbb{R}^2$ defined by $$ f(x,y)=\left(-ax^3+(a-1)x,-\frac{y}{2}\right) \quad 0<a<1. $$ It is easily checked that $f$ has symmetry group $D_2$ and satisfies (see Figure \ref{figureattractor}): \begin{enumerate} \item $f\in \textrm{Emb}^{+}(\mathbb{R}^2)$ is an orientation-preserving diffeomorphism. \item $\operatorname{{Spec}}(f)\cap [0, \infty)=\emptyset$. \item $0$ is a local hyperbolic attractor. \item $\textrm{Fix}(f^2)\neq \{0\}$. \end{enumerate} \begin{figure}
\caption{A local attractor which is not a global attractor due to the existence of periodic orbits.}
\label{figureattractor}
\end{figure} \end{example}
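The listed properties are elementary to check by hand; the following sketch (our own numerical confirmation for the particular value $a = 1/2$, assuming Python with NumPy) verifies items 3 and 4: the points $(\pm 1, 0)$ form a $2-$periodic orbit on $\textrm{Fix} \langle\kappa\rangle$, while small initial conditions are attracted to the origin.
\begin{verbatim}
import numpy as np

# For a = 1/2: Df(0) = diag(a-1, -1/2) is a hyperbolic attractor,
# and (1,0), (-1,0) form a 2-periodic orbit, so Fix(f^2) != {0}.
a = 0.5
def f(p):
    x, y = p
    return np.array([-a * x**3 + (a - 1) * x, -0.5 * y])

p = np.array([1.0, 0.0])
assert np.allclose(f(p), [-1.0, 0.0]) and np.allclose(f(f(p)), p)

q = np.array([0.05, 0.05])
for _ in range(100):
    q = f(q)
assert np.linalg.norm(q) < 1e-12   # locally attracted to the origin
\end{verbatim}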
The main results, Theorems \ref{proprayoS1} and \ref{proprayoS2}, are obtained as a direct consequence of Proposition \ref{proprayoS} under additional assumptions.
We say that a map $f$ is \emph{dissipative} if there is an open set $V$ such that $\mathbb{R}^2\backslash V$ is compact and $\omega(p)\notin V$ for all $p\in\mathbb{R}^2$.
\begin{theorem} \label{proprayoS1} Let $f\in \textrm{Emb}(\mathbb{R}^2)$ be dissipative with symmetry group $\Gamma$ with $\kappa \in \Gamma$ such that $\textrm{Fix}(f)=\{0\}$. Suppose in addition that one of the following holds:
\begin{itemize} \item[$a)$] $f\in \textrm{Emb}^ {+}(\mathbb{R}^2)$ and $f$ does not interchange connected components of $\mathbb{R}^2 \setminus \textrm{Fix} \langle\kappa\rangle$. \item[$b)$] There exist no $2-$periodic orbits. \end{itemize} Then $0$ is a global attractor.
\end{theorem}
\begin{proof} Follows from Proposition \ref{proprayoS} since being dissipative excludes $\omega(p)=\infty$. \end{proof}
\begin{corollary} Suppose the assumptions of Theorem \ref{proprayoS1} are verified and $f$ is differentiable at $0$. If every eigenvalue of $Df(0)$ has norm strictly less than one, then $0$ is a global asymptotic attractor. \end{corollary}
\begin{proof} Follows by Theorem \ref{proprayoS1} and Proposition \ref{Liapunov}. \end{proof}
\begin{theorem} \label{proprayoS2} Let $f\in \textrm{Emb}(\mathbb{R}^2)$ be a map with symmetry group $\Gamma$ with $\kappa \in \Gamma$ such that $\textrm{Fix}(f)=\{0\}$. Suppose in addition that one of the following holds:
\begin{itemize} \item[$a)$] $f\in \textrm{Emb}^ {+}(\mathbb{R}^2)$ and $f$ does not interchange connected components of $\mathbb{R}^2 \setminus \textrm{Fix}\langle\kappa\rangle$. \item[$b)$] There exist no $2-$periodic orbits. \end{itemize} If $0$ is a local repellor, then $0$ is a global repellor.
\end{theorem}
\begin{proof} Follows from Proposition \ref{proprayoS}, since a local repellor excludes $\omega(p)=\{0\}$. \end{proof}
\begin{corollary} Suppose the assumptions of Theorem \ref{proprayoS2} are verified and $f$ is differentiable at $0$. If every eigenvalue of $Df(0)$ has norm strictly greater than one, then $0$ is a global asymptotic repellor. \end{corollary}
\begin{proof} Follows from Theorem \ref{proprayoS2} and Proposition \ref{Liapunov}. \end{proof}
\section{Discussion: From Local to Global Symmetric Dynamics}\label{Discussion}
In this section we discuss all possible groups of symmetries for a planar topological embedding, and how this may be used to obtain global dynamics from local information near a unique fixed point. In our discussion of symmetries we need only be concerned with compact subgroups of the orthogonal group $O(2)$, since every compact Lie group in $GL(2)$ can be identified with one of them. For the convenience of the reader less familiar with symmetry, these are listed in Appendix \ref{subgroups}, together with their action on $\mathbb{R}^2$. Finally we give a brief description of what happens for $SO(2)$ and $\mathbb{Z}_n$, the groups that do not contain a flip, where the information about the local dynamics cannot be extended to global dynamics.
Since most of our results depend on the existence of a unique fixed point for $f$, we assume also that the group action is {\em faithful}: for every $\gamma\in\Gamma$, different from the identity, there exists $x\in\mathbb{R}^2$ such that $\gamma x\ne x$. Therefore $\textrm{Fix} (\Gamma) = \{ 0 \}$ and hence, if $f$ is $\Gamma-$equivariant then $f(0)=0$.
The Jacobian matrix of an equivariant map $f$ at the origin depends on its symmetries as follows:
\begin{lemma}\label{LemaDerivada} Let $f: \; V \rightarrow V$ be a $\Gamma$-equivariant map differentiable at the origin. Then $Df(0)$, the Jacobian of $f$ at the origin, commutes with $\Gamma$. \end{lemma}
Straightforward calculations, Lemma~\ref{LemaDerivada} and Proposition \ref{Liapunov} allow us to describe the Jacobian matrix at the origin and, from it, the local dynamics for maps equivariant under each of the subgroups of $O(2)$.
\begin{proposition} \label{proptable} Let $f$ be a planar map differentiable at the origin. The admissible forms for the Jacobian matrix of $f$ at the origin are those given in the third column of the Table in Appendix~\ref{EquivTable} depending on the symmetry group of $f$. The fourth column of this table describes how the dynamics near the origin depends on the symmetry of $f$. \end{proposition}
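The linear algebra behind Proposition \ref{proptable} reduces to computing commutants; as an illustration (our own sketch, assuming Python with SymPy and the arbitrary choice $n = 5$), a real $2\times 2$ matrix commuting with the rotation by $2\pi/n$, $n\geq 3$, is forced to be a scaled rotation, with complex conjugate eigenvalues $p \pm iq$.
\begin{verbatim}
import sympy as sp

# Solve J*R = R*J for a generic real 2x2 matrix J and the rotation R
# by 2*pi/5; the solution forces r = -q and s = p, i.e. J is the
# scaled rotation [[p, q], [-q, p]] with eigenvalues p +/- iq.
p, q, r, s = sp.symbols('p q r s', real=True)
J = sp.Matrix([[p, q], [r, s]])
th = 2 * sp.pi / 5
R = sp.Matrix([[sp.cos(th), -sp.sin(th)], [sp.sin(th), sp.cos(th)]])

sol = sp.solve(list(J * R - R * J), [r, s], dict=True)[0]
assert sol == {r: -q, s: p}
\end{verbatim}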
The groups that do not possess a flip are $SO(2)$ and $\mathbb{Z}_n$. For these groups, local dynamics does not determine global dynamics. In order to complete our analysis we add some remarks about each case.
The dynamics of an $SO(2)$-symmetric embedding is mostly determined by its radial component, as can be seen by writing $f$ in polar coordinates as $f(\rho, \theta)=(R(\rho, \theta), T(\rho, \theta))$. Since $f$ is $SO(2)-$equivariant, for all $\alpha \in \mathbb{R}$ we have $f(\rho, \theta + \alpha)=(R(\rho, \theta), T(\rho, \theta)+\alpha)$. Taking $\alpha = -\theta$ gives $ f(\rho, 0)=(R(\rho, 0), T(\rho, 0))=(R(\rho, \theta), T(\rho, \theta)-\theta). $ So $R(\rho, \theta)$ only depends on $\rho$ and $R\in Emb(\mathbb{R}^+)$.
Suppose $0$ is a local attractor. If $\textrm{Fix}(R)=\{0\}$ then the origin is a global attractor by Lemma \ref{lemgdim1}. Otherwise, the fixed points of the radial component are invariant circles for $f$ and therefore, knowledge about local dynamics does not contribute to the description of global dynamics\footnote{We are grateful to a referee for pointing this out.}.
In general if $f$ is an $SO(2)$-symmetric embedding, then $\mathbb{R}^2$ may be decomposed into $f$-invariant annuli with centre at the origin. Each annulus is either the union of $f$-invariant circles with $f$ acting as a rotation in each circle (corresponding to fixed points of $R$) or, for all points $p$ in the annulus, $\omega(p)$ is contained in the same connected component of the boundary of the annulus.
For the group $\mathbb{Z}_n$ we may have a local attractor or repellor if $n\geq 3$ and a local attractor, repellor or saddle in the case of $\mathbb{Z}_2$. For a local attractor, almost any global dynamics may be realised. Examples of dissipative $\mathbb{Z}_n$-equivariant diffeomorphisms with a periodic orbit of period $n$ are given in \cite{SofisbeSzlenk} for all $n\ge 2$. Alarc\'on \cite{AlarconDenjoy} proves the existence of a Denjoy map of the circle with symmetry group $\mathbb{Z}_n$. This is used to construct orientation preserving homeomorphisms of the plane with symmetry group $\mathbb{Z}_n$ having the origin, their unique fixed point, as an asymptotic local attractor. Moreover, for these homeomorphisms there exists a global attractor containing the origin that is a compact and connected subset of $\mathbb{R}^2$ with zero Lebesgue measure.
Summarising, we have obtained conditions on planar $\Gamma$-equivariant maps $f$ under which a local attractor/repellor is always a global attractor/repellor. This concerns only subgroups $\Gamma$ containing a reflection, the only subgroups where this is possible.
\paragraph{Acknowledgements:} The research of all authors at Centro de Matem\'atica da Universidade do Porto (CMUP)
had financial support from
the European Regional Development Fund through the programme COMPETE and
from the Portuguese Government through the Funda\c c\~ao para a Ci\^encia e a Tecnologia (FCT) under the project PEst-C/MAT/UI0144/2011. B. Alarc\'on was also supported from EX2009-0283 of the Ministerio de Educaci\'on (Spain) and grants MICINN-08-MTM2008-06065 and MICINN-12-MTM2011-22956 of the Ministerio de Ciencia e Innovaci\'on (Spain).
\bigbreak \noindent{Email addresses:} \smallbreak
\noindent{B. Alarc\'on --- [email protected]}\\ \noindent{ S.B.S.D. Castro --- [email protected]}\\ \noindent{ I.S. Labouriau --- [email protected]}
\appendix
\section{Proof of Proposition \ref{Liapunov}} \label{ApendiceEstabilidade}
In order to prove Proposition~\ref{Liapunov} we show that the hypotheses guarantee that $f$ is either a uniform contraction or expansion in a neighbourhood of the origin. For this we need the following standard result, similar to a Lemma in Chapter 9 \S 1 of Hirsch and Smale \cite{HirschSmale}.
\begin{lemma}\label{lemNorm} Let $A: \; \mathbb{R}^N \rightarrow \mathbb{R}^N$ be linear and let $\alpha$, $\beta \in \mathbb{R}$ satisfy $$
0<\alpha < \left| \lambda\right| < \beta $$ for all eigenvalues $\lambda$ of $A$. Then there exists a basis for $\mathbb{R}^N$ such that, in the corresponding inner product and norm, \begin{equation}\label{NormaQuadrado} \alpha \norm{x} \leq \; \norm{Ax} \; \leq \beta \norm{x} \qquad\forall \; x \in \mathbb{R}^N. \end{equation} and moreover, \begin{equation}\label{MajoraProduto}
\left|\langle Ax,y\rangle\right|\; \leq \beta \norm{x} \norm{y} \qquad \forall \; x,y \in \mathbb{R}^N. \end{equation} \end{lemma}
\begin{proof}[Proof of Proposition \ref{Liapunov}] Write $f(x)=Df(0).x +r(x)$. If $f$ is linear then $r(x) \equiv 0$. Otherwise, using the norm of the previous lemma, since $f$ is differentiable at the origin, we know that $$ \lim_{x \rightarrow 0} \frac{\norm{r(x)}}{\norm{x}}=0, $$ that is, $$ \forall \; \varepsilon >0 \quad\exists \; \delta >0 : \quad \norm{x} < \delta \Rightarrow \norm{r(x)} < \varepsilon \norm{x}. $$
\bigbreak
Assume the eigenvalues of $Df(0)=A$ have absolute value smaller than $\beta<1$. Then, we have, if $\norm{x} < \delta$: \begin{eqnarray*}
\norm{f(x)}^2 & = & \langle f(x),f(x)\rangle=\langle Ax+r(x), Ax+r(x)\rangle \\
& = & \langle Ax,Ax\rangle + 2 \langle Ax, r(x)\rangle +\langle r(x) , r(x) \rangle \\
& \leq & \beta^2 \norm{x}^2 + 2 \beta \norm{x}\norm{r(x)} + \norm{r(x)}^2\\
& \leq & \beta^2 \norm{x}^2 + 2 \varepsilon\beta \norm{x}^2 + \varepsilon^2 \norm{x}^2 \\
& \leq &\left( \beta^2 + 2 \varepsilon\beta + \varepsilon^2\right) \norm{x}^2 = (\beta + \varepsilon)^2 \norm{x}^2 \end{eqnarray*} showing that $f$ is a uniform contraction in a ball of radius $\delta$ around the origin provided that $(\beta + \varepsilon)^2 <1$, i.e. $\varepsilon < 1 - \beta$. Since $\beta$ is fixed and smaller than one, such an $\varepsilon$ always exists.
\bigbreak Assume now that $$
1 < \alpha <\left|\lambda\right| < \beta, $$ for all eigenvalues $\lambda$ of $Df(0)=A$. Then \begin{eqnarray*} \norm{f(x)}^2& = & \langle f(x),f(x)\rangle = \langle Ax+r(x), Ax+r(x)\rangle \\
& = & \langle Ax,Ax \rangle + 2 \langle Ax, r(x) \rangle + \langle r(x) , r(x) \rangle \\
& \geq & \alpha^2 \norm{x}^2 + 2 \langle Ax, r(x) \rangle \end{eqnarray*} since $\norm{r(x)}^2 > 0$. It then follows that \begin{eqnarray*}
\norm{f(x)}^2 & \geq & \alpha^2 \norm{x}^2 + 2 \langle Ax, r(x) \rangle \\
& \geq &\alpha^2 \norm{x}^2 - 2\left| \langle Ax, r(x)\rangle \right| \\
& \geq & (\alpha^2 - 2\beta \varepsilon) \norm{x}^2, \end{eqnarray*} showing that $f$ is a uniform expansion in a ball of radius $\delta$ around the origin provided that $$ \varepsilon < \frac{\alpha^2 -1}{2\beta}. $$ Since $\alpha^2 -1>0$ such an $\varepsilon$ always exists.
\end{proof}
\section{Compact subgroups of $O(2)$} \label{subgroups}
\begin{itemize}
\item $O(2)$, acting on $\mathbb{R}^2 \simeq \mathbb{C}$ as the group generated by $\theta$ and $\kappa$ given by
$$
\theta . z = e^{i\theta} z, \quad \theta \in S^1 \qquad\mbox{ and } \qquad \kappa . z=\bar{z}.
$$
\item $SO(2)$, acting on $\mathbb{R}^2 \simeq \mathbb{C}$ as the group generated by $\theta$ given by
$$
\theta . z = e^{i\theta} z, \quad \theta \in S^1.
$$
\item $D_n$, $n \geq 2$, acting on $\mathbb{R}^2 \simeq \mathbb{C}$ as the finite group generated by $\zeta$ and $\kappa$ given by
$$
\zeta . z = e^{\frac{2\pi i}{n}} z \qquad\mbox{ and } \qquad \kappa . z=\bar{z}.
$$
\item $\mathbb{Z}_n$, $n \geq 2$, acting on $\mathbb{R}^2 \simeq \mathbb{C}$ as the finite group generated by $\zeta$ given by
$$
\zeta . z = e^{\frac{2\pi i}{n}} z.
$$
\item $\mathbb{Z}_2(\langle\kappa\rangle)$, acting on $\mathbb{R}^2$ as
$$
\kappa . (x,y) = (x, -y).
$$
\end{itemize}
\section{The equivariant table}\label{EquivTable}
This appendix contains a table summarizing the bulk of results in the paper. The first column concerns the group of symmetry. The second column provides information about the existence of an invariant ray. The third and fourth columns concern the dynamics by providing the form of the jacobian matrix at the origin and a list of possible local dynamics. Finally, the last column lists hypotheses required to go from local to global dynamics.
No conditions are provided when the fixed point is a local saddle since this case is not addressed here.
\thispagestyle{empty} \begin{table}[ht] \label{Apendiceequvtable} \label{ApendiceTabela} \centering
\begin{sideways}
\begin{tabular}{|c|c|c|c|c|}
\hline \multicolumn{5}{|c|}{EQUIVARIANT TABLE} \\
\hline Symmetry Group & Contains $\kappa$?& $Df(0)$ & \multicolumn{1}{|p{3cm}|}{Hyperbolic \par Local Stability} & \multicolumn{1}{|p{5cm}|}{Hypothesis for Hyperbolic \par Global Stability.} \\
\hline $O(2)$ & yes & $\left( \begin{matrix} \alpha & 0 \\ 0 & \alpha \end{matrix} \right)$ $\alpha \in \mathbb{R}$
& \multicolumn{1}{|p{3cm}|}{attractor \par $ $ \par repellor} & \multicolumn{1}{|p{5cm}|}{$\textrm{Emb}(\mathbb{R}^2)$ differentiable and dissipative. \par $\textrm{Emb}(\mathbb{R}^2)$ differentiable.} \\
\hline $SO(2)$ & no & $\left( \begin{matrix} \alpha & -\beta \\ \beta & \alpha \end{matrix} \right)$ $\alpha, \beta \in \mathbb{R}$
& attractor / repellor & \multicolumn{1}{|p{5cm}|}{ Other configurations. }\\
\hline $D_n, \;n\geq 3,$ & yes & $\left( \begin{matrix} \alpha & 0 \\ 0 & \alpha \end{matrix} \right)$ $\alpha \in \mathbb{R}$
& \multicolumn{1}{|p{3cm}|}{attractor \par $ $ \par repellor} & \multicolumn{1}{|p{5cm}|}{$\textrm{Emb}(\mathbb{R}^2)$ differentiable and dissipative. \par $\textrm{Emb}(\mathbb{R}^2)$ differentiable.}\\
\hline $\mathbb{Z}_n, \;n\geq 3$ & no & $\left( \begin{matrix} \alpha & -\beta \\ \beta & \alpha \end{matrix} \right)$ $\alpha, \beta \in \mathbb{R}$ & attractor / repellor & \multicolumn{1}{|p{5cm}|}{ Other configurations. }\\
\hline $\mathbb{Z}_2(\langle\kappa\rangle)$ & yes & $\left( \begin{matrix} \alpha & 0 \\ 0 & \beta \end{matrix} \right)$ $\alpha, \beta \in \mathbb{R}$ & \multicolumn{1}{|p{3cm}|}{attractor \par $ $ \par repellor \par saddle} & \multicolumn{1}{|p{5cm}|}{$\textrm{Emb}(\mathbb{R}^2)$ differentiable and dissipative. \par $\textrm{Emb}(\mathbb{R}^2)$ differentiable. \par ---}\\
\hline $\mathbb{Z}_{2}$ & no & any matrix & \multicolumn{1}{|p{3.5cm}|}{attractor / repellor \par saddle} & \multicolumn{1}{|p{5cm}|}{ Other configurations. \par ---}\\
\hline $D_{2}$ & yes & $\left( \begin{matrix} \alpha & 0 \\ 0 & \beta \end{matrix} \right)$ $\alpha, \beta \in \mathbb{R}$
& \multicolumn{1}{|p{3cm}|}{attractor \par $ $ \par repellor \par saddle} & \multicolumn{1}{|p{5cm}|}{$\textrm{Emb}(\mathbb{R}^2)$ differentiable and dissipative. \par $\textrm{Emb}(\mathbb{R}^2)$ differentiable. \par ---}\\
\hline
\end{tabular} \end{sideways} \end{table}
\end{document} | arXiv |
Operator Algebras
Multi Variable Operator Theory with Relations
Ken Davidson
PIMS, University of Victoria
Canadian Operator Symposium 2011 (COSY)
Aperiodicity: Lessons from Various Generalizations
C. Radin
Thu, Aug 1, 2002
University of Victoria, Victoria, Canada
Aperiodic Order, Dynamical Systems, Operator Algebras and Topology
This is a slightly expanded version of a talk given at the Workshop on Aperiodic Order, held in Victoria, B.C. in August, 2002. The general subject of the talk was the densest packings of simple bodies, for instance spheres or polyhedra, in Euclidean or hyperbolic spaces, and describes recent joint work with Lewis Bowen. One of the main points was to report on our solution of the old problem of treating optimally dense packings of bodies in hyperbolic spaces. The other was to describe the general connection between aperiodicity and nonuniqueness in problems of optimal density.
Projective Modules in Classical and Quantum Functional Analysis
A. Ya. Helemskii
University of Alberta, Edmonton, Canada
PIMS Distinguished Chair Lectures
Homological theory of the "algebras in analysis" exists in at least three different versions. First of all, there is the homological theory of Banach and more general locally convex algebras. This is about 40 years old. However, in the last decade of the previous century, a "homological section" appeared in a new branch of analysis, the so-called quantized functional analysis or, more prosaically, the theory of operator spaces. One of the principal features of this theory, as is now widely realized, is the existence of different approaches to the proper quantum version of a bounded bilinear operator. In fact, two such versions are now thought to be most important; each of them has its own relevant tensor product with an appropriate universal property. Accordingly, there are two principal versions of quantized algebras and quantized modules, and this leads to two principal versions of quantized homology.
Thus we have now, in the first decade of the 21st century, three species of topological homology: the traditional (or "classical") one, and two "quantized" ones.
In these lectures, we shall restrict ourselves to studying, in the framework of these three theories, the fundamental concept of a projective module. This concept is "primus inter pares" among the three recognized pillars of the science of homology: projectivity, injectivity, and flatness. It is this notion that is the cornerstone for every sufficiently developed homological theory, be it in algebra, topology, or, as in these notes, in functional analysis.
Our initial definitions of projectivity do not go far away from their prototypes in abstract algebra. However, the principal results concern essentially functional-analytic objects. As we shall see, they have, as a rule, no purely algebraic analogues. Moreover,
some phenomena are strikingly different from what algebraists could expect, based on their experience.
Quantum Algebra and Geometry
Actions of Z^k associated to higher rank graphs
A. Kumjian, D. Pask
We construct an action of $\mathbb Z^k$ on a compact zero-dimensional space obtained from a higher rank graph $\Lambda$ satisfying a mild assumption, generalizing the construction of the Markov shift associated to a nonnegative integer matrix. The stable Ruelle algebra $R_s(\Lambda)$ is shown to be strongly Morita equivalent to $C^*(\Lambda)$. Hence $R_s(\Lambda)$ is simple, stable and purely infinite, if $\Lambda$ satisfies the aperiodicity condition.
Published in: Ergodic Theory Dynam. Systems 23 (2003), no. 4, 1153-1172.
$ABCD$ is a rectangular sheet of paper. $E$ and $F$ are points on $AB$ and $CD$ respectively such that $BE < CF$. If $BCFE$ is folded over $EF$, $C$ maps to $C'$ on $AD$ and $B$ maps to $B'$ such that $\angle{AB'C'} \cong \angle{B'EA}$. If $AB' = 5$ and $BE = 23$, then the area of $ABCD$ can be expressed as $a + b\sqrt{c}$ square units, where $a, b,$ and $c$ are integers and $c$ is not divisible by the square of any prime. Compute $a + b + c$.
Let $\angle{AB'C'} = \theta$. By some angle chasing in $\triangle{AB'E}$, we find that $\angle{EAB'} = 90^{\circ} - 2 \theta$. Before we apply the law of sines, we're going to want to get everything in terms of $\sin \theta$, so note that $\sin \angle{EAB'} = \sin(90^{\circ} - 2 \theta) = \cos 2 \theta = 1 - 2 \sin^2 \theta$. Now, we use law of sines, which gives us the following:
$\frac{\sin \theta}{5}=\frac{1 - 2 \sin^2 \theta}{23} \implies 10\sin^2\theta + 23\sin\theta - 5 = 0 \implies \sin \theta = \frac{-23 \pm 27}{20}$, but since $0^{\circ} < \theta < 180^{\circ}$ forces $\sin\theta > 0$, we take the positive root. Thus, $\sin \theta = \frac15$.
Denote the intersection of $B'C'$ and $AE$ with $G$. By another application of the law of sines, $B'G = \frac{23}{\sqrt{24}}$ and $AE = 10\sqrt{6}$. Since $\sin \theta = \frac15, GE = \frac{115}{\sqrt{24}}$, and $AG = AE - GE = 10\sqrt{6} - \frac{115}{\sqrt{24}} = \frac{5}{\sqrt{24}}$. Note that $\triangle{EB'G} \sim \triangle{C'AG}$, so $\frac{EG}{B'G}=\frac{C'G}{AG} \implies C'G = \frac{25}{\sqrt{24}}$.
Now we have that $AB = AE + EB = 10\sqrt{6} + 23$, and $B'C' = BC = B'G + C'G = \frac{23}{\sqrt{24}} + \frac{25}{\sqrt{24}} = \frac{48}{\sqrt{24}}=4\sqrt{6}$. Thus, the area of $ABCD$ is $(10\sqrt{6} + 23)(4\sqrt{6}) = 92\sqrt{6} + 240$, and our final answer is $92 + 6 + 240 = \boxed{338}$. | Math Dataset |
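As a quick numerical sanity check of the arithmetic above (this SymPy snippet is illustrative only, with hypothetical variable names, and is not part of the solution):

```python
import sympy as sp

s = sp.Rational(1, 5)        # sin(theta), the positive root of 10s^2 + 23s - 5 = 0
c = sp.sqrt(1 - s**2)        # cos(theta) = sqrt(24)/5
AE = 5 * c / s               # law of sines in triangle AB'E
GE = 23 / c                  # from right triangle B'GE
BpG = 23 * s / c             # B'G
AG = AE - GE
CpG = AG * GE / BpG          # similar triangles EB'G ~ C'AG
area = sp.expand((AE + 23) * (BpG + CpG))   # AB * BC
print(area)                  # 240 + 92*sqrt(6), so a + b + c = 338
```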
How does VHF/UHF propagate beyond the expected (radio) horizon?
I am not asking about the fairly well-known effect of the earth "appearing less curved to radio waves" that are otherwise still essentially line-of-sight, but a deeper arcanum:
In the ARRL Antenna Book, 17th edition (1994) there is a discussion of "Reliable VHF coverage" starting on page 23-7 in the Radio Wave Propagation chapter. The claim is made,
Because of age-old ideas, misconceptions about the coverage obtainable in our VHF bands persist. This reflects the thought that VHF waves travel only in straight lines, […] However, let us survey the picture in the light of modern wave-propagation knowledge and see what the bands above 50 MHz are good for on a day-to-day basis, ignoring the anomalies [presumably referring to the tropospheric ducting of the previous section] that may result in extensions of normal coverage.
It goes on, after mentioning an article by D.W. Bray, K2LMG in the November 1961 QST magazine, to present two graphs that plot "tropospheric path loss" against distance. The curves therein rise steeply from 120 dB of loss at a distance of 0 miles [?!] to around 180 dB near 50 miles, then level off slightly so that at 500 miles there is a path loss around 240 dB. (That's reading the 50% reliability chart roughly; there are actually 4 lines plotted for 144/50, 220, 432, and 1296 MHz, as well as a second separate chart showing 99% reliability; the 99% reliability chart is very approximately 10–20 dB worse than the 50% one at any given point.)
UPDATE: thanks to W0BTU Mike, here are the actual charts scanned from an earlier edition:
What "modern wave-propagation knowledge" is this referring to? What mechanism(s) would allow VHF signals to be 99% reliably received 500 miles away, albeit with more than 250 dB of path loss, or 50%-of-the-time reliability with a little less loss? (These path-loss charts do NOT assume any antenna-height gain.)
propagation vhf uhf
natevw - AF7TB
natevw - AF7TBnatevw - AF7TB
$\begingroup$ Troposcatter? EME? $\endgroup$ – Phil Frost - W8II Feb 24 '17 at 14:09
$\begingroup$ Here are some nomographs and accompanying text from an earlier version of the ARRL VHF Manual. Go to w0btu.com/files/vhf and download VHF_distance_coverage_nomographs.zip. I've found them to be a good predictor of VHF coverage back when I was operating SSB and CW on the low end of 2m back in the 1980s. And somewhat related is this webpage: w0btu.com/VHF-UHF_vertical_antenna_stacking.html $\endgroup$ – Mike Waters♦ Mar 7 '17 at 15:14
$\begingroup$ @MikeWaters Thanks, those are very similar to my edition! I've grabbed the chart I tried to describe and added it to my question, hope you don't mind. (Was on your site just the other day while researching beverage antennas and glad to meet you on this site now too!) $\endgroup$ – natevw - AF7TB Mar 7 '17 at 20:03
$\begingroup$ @natevw-AF7TB I don't mind at all! In August 2018 I converted the TIFF files to four PNG files there. Feel free to add those. $\endgroup$ – Mike Waters♦ Oct 5 '18 at 18:38
Turns out that, after turning to discussion of HF propagation for a number of intervening pages, this Antenna Book ends up getting back around to its own answer for this question!
From the "Scatter Modes" section on page 23-30 of the same 17th edition:
The wave energy of VHF stations is not gone after it reaches the radio horizon, described early in this chapter. It is scattered, but it can be heard to some degree for hundreds of miles. Everything on Earth, and in the regions of space up to at least 100 miles, is a potential scattering agent.
Tropospheric scatter is always with us […] this is what produces that nearly flat portion of the curves given in an earlier section on reliable VHF coverage. … As long ago as the early 1950s, VHF enthusiasts found that VHF contests could be won with high power, big antennas and a good ear for signals deep in the noise. … Ionospheric scatter works much the same as the tropo version, [… and] can fill in the skip zone with marginally readable signals scattered from ionized trails of meteors, small areas of random ionization, cosmic dust, satellites and whatever may come into the antenna patterns at 50 to 150 miles or so above the Earth. […]
[bold added for emphasis]
It goes on similarly to discuss "backscatter" and "transequatorial scatter" before going on to a different section on "auroral propagation" (which can also affect VHF but is probably not related to the reliable propagation graphs).
So in short, "scatter" (in many forms) is claimed as the mechanism that allows VHF signals to be heard hundreds of miles beyond the primary "radio horizon".
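For a rough sense of scale, here is a back-of-the-envelope sketch using the standard 4/3-earth radio-horizon rule and the free-space path loss formula (the 50-foot antenna heights are just an illustrative assumption):

```python
import math

def radio_horizon_miles(h_feet):
    # 4/3-earth model: d ≈ 1.415 * sqrt(h), h in feet, d in statute miles
    return 1.415 * math.sqrt(h_feet)

def fspl_db(d_miles, f_mhz):
    # free-space path loss in dB, distance in miles, frequency in MHz
    return 36.6 + 20 * math.log10(f_mhz) + 20 * math.log10(d_miles)

d = 2 * radio_horizon_miles(50)     # two stations with 50 ft antennas
print(round(d))                     # ~20 miles of combined radio horizon
print(round(fspl_db(500, 144)))     # ~134 dB free-space loss at 500 mi on 2 m
```

A 500-mile path is thus far beyond the combined radio horizons, and the ~240 dB on the chart is roughly 100 dB worse than free space — that excess is the scatter loss.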
I also believe the ARRL editors consider the experimental discoveries of the various scatter modes to be the "modern wave-propagation knowledge" referred to earlier — in this "Scatter Modes" section there are a couple historical references around the same dates as the QST article, including the "early 1950s" one quoted above as well as Transequitorial scatter as "an amateur 50-MHz discovery in the years 1946–1947".
Terrestrial, point-point propagation paths can exist due to diffraction of the radiated e-m fields over terrain peaks and man-made structures, with each diffraction adding loss at the receive antenna to the normal inverse-distance field loss for an LOS path of that total length.
The graphic below illustrates this for an FM broadcast station, where the line-of-sight path is severely obstructed, but the signal can be received well beyond those obstructions.
The real propagation path would consist of several straight-line segments over terrain peaks, joined together to connect the transmit and receive antennas.
In this example, the additional loss due to diffractions compared to an LOS path of that total length is shown to be 76.59 dB.
The received field will vary over time depending on atmospheric K-factor and other conditions.
Richard FryRichard Fry
$\begingroup$ Older ARRL publications I have refer to what you describe in the first paragraph as knife edge diffraction, IIRC. But they don't mention man-made structures (clusters of tall buildings in cities?), only jagged mountain peaks. That's intriguing! Do you happen to know of any examples? $\endgroup$ – Mike Waters♦ Oct 5 '18 at 18:48
$\begingroup$ Here is a quote with a short reference about this from Engineering Considerations for Microwave Communications Systems (GTE Lenkurt, Inc, 1970): "The effect of man-made obstacles depends entirely upon their shape and position, Microwave- transparent objects, which are few, are ignored. A large round container such as a gas storage reservoir, if partially in the path, causes both diffraction and dispersion as well as some blocking.". $\endgroup$ – Richard Fry Oct 5 '18 at 19:51
I really don't know – in a review, I'd have marked that sentence as far too vague – what the author means by "modern wave-propagation knowledge". If I need to benchmark that knowledge against a 31-year-old ham article, well… I don't really think it's a great advocate. "Modern knowledge", to me, is probably something that research on an academic level has yielded and is now dissipating into technological areas such as amateur radio, and thus, an article in a ham mag might, by definition, not be usable as a description of the state of the sciences in 1961¹.
So, I dispute your "expected radio horizon"; just because a ham expert in 1961 modeled something is really no great reason to expect the same to be accurate in 1994.
Anyway, having minimal knowledge of earth observation and radio propagation modelling, I'd go for:
having the ability to actually simulate non-trivial situations, including:
clearer idea of atmospheric properties such as permittivity ($\epsilon_r$) and magnetic permeability ($\mu_r$) as well as charge density ($\rho$),
non-linear gradients of above properties,
non-perfectly-spherical atmosphere,
actually looking into things that are far, far more granular than just saying "ok, here's the troposphere that we model as a thing with the following refractive index and a constant attenuation $\alpha$", including effects like weather-based inversion of conductivities etc.
being able to model effects of ground conductivity etc.
actually having data on how charged, conductive, vapor-saturated and shaped strata of the atmosphere are, based on a lot of satellite and radioastronomy experiments
having a far better understanding of the interaction between large antennas and their surroundings
¹I know that article is cited in many ham places. I've never read it.
However, it seems to me that modeling VHF transmission's reach in 1961 should basically be equivalent to asking a couple of WWII and cold war radio engineers; radio reach was a very important strategic factor, and I'm very certain that all involved parties had very accurate recordings of how far they could reach, and will have worked to improve their models to match that, long before 1961.
These models might not have been public back in the day, but really, they also shouldn't be that much of rocket science to recreate. In 1994, there should be no "surprise" in how VHF propagates terrestrially – I really think it's very worth writing articles that bring research-level theory, models and experimentation to the amateur masses (which, by the beard of Hertz, are pretty good at such things), but you must then compare these to state-of-back then in academics, not in ham mags. That's just unfair – many countries simply had restrictions on radio usage during WWII, so the amateur radio community just needed a decade or two to catch up. That catching-up phase was an especially fruitful one, what with all the semiconductor technology emerging at the same time.
Downside of that being a phase is that if you look online, you still find a lot of people trying to build the 1960's kits nowadays, even trying to get the same diodes and transistors of the day – there's really no good reason you'd want a Germanium noise gen– err transistor amplifier if you can have a silicon one for cheap, if you just don't stick to material from the "golden era of rediscovering possibilities". I attribute that to a lot of Kit manufacturers and mag publishers just copying articles from back then, until the original source and its restrictions got lost. Enough ranting for today.
Marcus MüllerMarcus Müller
$\begingroup$ Hi Marcus, I don't begrudge your rant; I hear what you're saying but would appreciate a bit more balance towards explaining "ϵ, μ gradients" (you mean EM fields?) and your other ideas for the mechanisms there. $\endgroup$ – natevw - AF7TB Feb 23 '17 at 18:11
$\begingroup$ You're right – though I must admit that I'm looking at this a bit from an "aftermath" position. It's kind of hard for me to know to what state to compare – for example, 1961's models definitely had tropospheric diffraction models, but I don't know how well they actually modelled things like large-scale "waveguiding" due to relatively strong changes of atmospheric properties within a couple m of height difference (i.e. weather), or whether the models assumed effects of ionized upper atmosphere and so on. $\endgroup$ – Marcus Müller Feb 23 '17 at 18:24
$\begingroup$ Thanks! And re. "expected radio horizon" I am not sure what you are disputing? My intent was to draw a distinction between the notion of a basically "hard" radio horizon 15% further away than the visual horizon (e.g. en.wikipedia.org/w/… expectations) versus the notion that signals propagate weakly beyond that — how? $\endgroup$ – natevw - AF7TB Feb 23 '17 at 18:25
$\begingroup$ I must admit I wasn't even aware of this 115% Line-Of-Sight model! $\endgroup$ – Marcus Müller Feb 23 '17 at 18:26
$\begingroup$ I'd have to get a large stack of paper out and get going,but the idea is: As light entering water,beams always bend towards the normal at a medium interface if they enter a medium with higher refractive index $n$ (optical density),and away from it when going into one of lower refractive index.Electromagnetically,this is an effect of how the poynting vector shifts if you increase $\epsilon$ or $\mu$ ($n=\sqrt{\epsilon\mu}$, by the way).You can now model the atmosphere as a ball with decreasing $n$ the further you go from the center;thus,you can,analytically,find an equation for EM "beam-bend". $\endgroup$ – Marcus Müller Feb 23 '17 at 18:33
Historically, this is the best time of year (late September through early December) in the northern hemisphere to enjoy VHF and UHF propagation enhancements ("band openings") caused by sharp air temperature differences between two (or more) distinct layers of air. These occur far below the ionosphere.
However, band openings can occur in any season. The two types --somewhat related-- are described below.
Temperature Inversions
By far, the most common type. Just two layers in the troposphere are involved.
Often incorrectly referred to as "ducting", these occur where there is a sudden change in air temperature vs. height, and can even cover an area encompassing many hundreds of square miles. They usually occur along a cold occlusion or occluded front.
While most common in the fall and spring, I know a ham in Ohio who observed a spectacular band opening years ago in January, where the temperature dropped from above freezing to -20° F in a matter of hours. (It would have been even more spectacular had other hams known about it, but there was no Internet or VHF DX clusters in those days.)
Tropospheric Ducting
A genuine duct consists of three layers of air.
They can be spectacular and extend over a greater distance, but are rare.
A true tropo duct almost never covers an area anywhere near as large as a temperature inversion. By "waveguiding" I assume that Marcus means "tropospheric ducting".
Below is what a duct looks like. More details can be found here.
Features common to both inversions and ducts
Both inversions and ducts usually occur when the air is relatively calm. Once it gets windy, the air masses start to mix and the band opening gradually disappears.
Experienced VHF/UHF enthusiasts understand that whenever a hot day followed by a rapid and large temperature drop in the evening occurs, there might be a good opening. The larger the drop, the better the DX.
You can tell if you have a band opening in your area by transmitting on 146.94 (or another very common FM repeater frequency) followed by receiving countless beeps, heterodynes, and distant repeater IDs when you unkey and listen. During the better band openings, all it takes is an HT to experience that.
Depending on who you ask, that can either be a blessing or a nuisance. ;-) They can and do cause interference to both local and distant repeater groups. Hams operating SSB or CW below ~144.250 or on simplex FM frequencies within that area are treated to a special delight.
Mike Waters♦Mike Waters
$\begingroup$ I just discovered a site that claims to forecast when a "duct" might occur. Select your region in the drop-down list. It looks to me like when he says "duct", he actually means two-layer temperature inversions. Regardless, it may very well be useful, assuming it is at least somewhat accurate. $\endgroup$ – Mike Waters♦ Oct 6 '18 at 2:42
$\begingroup$ I'd just like to point out that the refraction of electromagnetic waves by a temperature inversion, is exactly why you see the 'mirage' effect on a hot road (which has very hot air in contact with the road, with cooler air above it) $\endgroup$ – Scott Earle♦ Oct 7 '18 at 12:54
Not the answer you're looking for? Browse other questions tagged propagation vhf uhf or ask your own question.
Can I talk 2 Meter VHF across the United States
Effect of night on VHF/UHF propagation
Why can't VHF / UHF be used with ionosphere reflection?
VHF / UHF - Terrestrial Circular Polarized Antenna
UHF Into VHF Power Amplifier
How to learn about the physics behind propagation?
Troposcatter: really all that bad?
Why is VHF better than UHF in this situation?
Better explanation for beyond-line-of-sight VHF signal reach than "lower curvature to radio waves"
Weakly directive VHF/UHF antenna
Does a UHF/VHF handheld transceiver kit exist? | CommonCrawl |
\begin{document}
\sloppy
\title{A geometric construction of Tango bundle on $\p^5$}
\author{Daniele Faenzi} \email{[email protected]} \address{Dipartimento di Matematica ``U.~Dini'', Universit\`a di
Firenze, Viale Morgagni 67/a, I-50134, Florence, Italy} \urladdr{http://www.math.unifi.it/\~{}faenzi/}
\thanks{{\em 2000 Mathematics Subject Classification} Primary 14F05; Secondary 14-04,14J60} \thanks{The author was partially supported by Italian Ministery funds and the EAGER contract HPRN-CT-2000-00099.}
\begin{abstract} The Tango bundle $T$ over $\p ^5$ is proved to be the pull-back of the twisted Cayley bundle $C(1)$ via a map $f \colon \p ^5 \rightarrow Q_5$ existing only in characteristic 2. The Frobenius morphism $\varphi$ factorizes via such $f$. \\ Using $f$ the cohomology of $T$ is computed in terms of $S \otimes C$, $\varphi^*(C)$, $\Sym^2(C)$ and $C$, while these are computed by applying Borel-Bott-Weil theorem. \\ By machine-aided computation the mimimal resolutions of $C$ and $T$ are given; incidentally the matrix presenting the spinor bundle $S$ over $Q_5$ is shown.
\end{abstract}
\maketitle
\section{Introduction}
The well-known Hartshorne conjecture states, in particular, that there are no indecomposable rank-2 vector bundles on $\p^n$, when $n$ is greater than 5. However, one of the few rank-2 bundles on $\p^5$ up to twist and pull-back by finite morphisms is the Tango bundle $T$ first given in \cite{tan-morphisms-I}. Later Horrocks in \cite{horrocks} and Decker, Manolache and Schreyer in \cite{decker} discovered that it can be obtained starting from Horrocks' rank-3 bundle: in any case it only exists in characteristic 2.
Here we prove that $T$ is the pull-back of the twisted Cayley bundle $C(1)$ (defined over any field) via a map $f \colon \p ^5 \rightarrow Q_5$ existing only in characteristic 2. This allows us to compute its cohomology by the Borel-Bott-Weil theorem (see \ref{bott-cohomology}). We observe non-standard cohomology of $\Sym ^2C$ in characteristic 2, although $C \otimes S$ behaves standardly (i.e. as if $\ch (k)=0$).
Different ways to prove that Tango's equations actually give $C(1)$ are shown in section (\ref{sec-2}). Lastly in section (\ref{sec-3}) we give the minimal resolution of $T$ and some more computational remarks. We make extensive use of the {\tt Macaulay2} computer algebra system: computation-related material can be found at the URL: {\tt http://www.math.unifi.it/\~{}faenzi/tango/} \\ I would like to thank Wolfram Decker for the observations on the Horrocks bundle and his help with machine computation, and Edoardo Ballico for his careful remarks about Ekedahl's work and representations in positive characteristic. I am also indebted to Giorgio Ottaviani, who posed this problem to me.
So let
$$Q_5 = \{ z_0^2+ z_1z_2+z_3z_4+z_5z_6 = 0 \} \subset \p^6$$ be the 5-dimensional smooth quadric over an algebraically closed field $k$. We denote by $\xi$ (respectively $\eta$, $\zeta$) the generator of $A^1(\p ^5)$ (respectively of $A^1(Q_5)$, $A^3(Q_5)$), so that
\begin{align*}
A(\p^5) = \Z[\xi] / (\xi ^6) \quad A(Q_5) = \Z[\eta , \zeta] / (\eta ^3 - 2 \zeta, \eta^6)
\end{align*} On the coordinate ring $S(\p^5)$ we use variables $x_i$'s while on $S(Q_5)$ we use $z_j$'s.
\section{The bundle on $\p ^5$} \label{sec-1}
Let $k$ be an algebraically closed field. On $Q_5=G_2 / P(\alpha_1) $ we have the Cayley bundle $C$, coming from the standard representation of the semisimple part of the parabolic group $P(\alpha_1)$, where $\alpha_1$ is the shortest root in the Lie algebra of $G_2$, the exceptional Lie group. $C$ is irreducible $G_2-$homogeneous with maximal weight $\lambda_2 - 2\lambda_1$. $C(2)$ (weight $\lambda _2$) is globally generated and $\hh^0(C(2))=14$.\\ $C$ is the cohomology of a monad:
$$\OO(-1) \longrightarrow S \longrightarrow \OO $$ where $S$ is the spinor bundle. $C$ has rank 2 and Chern classes $(-1,1)$. The only non-vanishing intermediate cohomology groups are $\HH^1(C)=\HH^4(C(-4)) =k$. All this is done in \cite{ott-cayley} and follows easily from \cite[Proposition 5.4]{jantzen} in any characteristic. \\ When $\ch (k)=2$ we have a map $f \colon \p ^5 \rightarrow Q_5$ having the expression
$$f : (x_0:\ldots:x_5) \mapsto (x_0x_1 + x_2x_3 + x_4x_5,x_0^2,x_1^2,x_2^2,x_3^2,x_4^2,x_5^2)$$ and, letting $\varphi$ be the Frobenius and $\pi$ the projection from $(1:0:0:0:0:0:0)$ we have:
\begin{equation}\label{pull-back}
\xymatrix @C+3ex{
& Q_5 \ar^-{\pi}[d] & \varphi^*(\OO_{\p^5}(1)) = \OO_{\p^5}(2) \\
\p^5 \ar^-{f}[ur] \ar_-{\varphi}[r] & \p^5 & \pi^*(\xi) = \eta \qquad \pi^*(\xi^2) = \eta ^2 \qquad \pi^*(\xi^3) = \eta ^3 = 2 \zeta
}
\end{equation} We define $T = f^*(C(1))$. The rank$-2$ vector bundle $T$ is the main subject of this paper.
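Note that the Chern classes of $T$ can be read off directly from (\ref{pull-back}): since $C$ has Chern classes $(-1,1)$, twisting gives $c_1(C(1))=\eta$ and $c_2(C(1))=\eta^2$, and since $f^*(\eta)=2\xi$ we get $c_1(T)=2\xi$ and $c_2(T)=4\xi^2$.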
By Hirzebruch-Riemann-Roch theorem we have the following easy observation \begin{obs} $T$ has Chern classes $(2,4)$ and
$$\chi(T(t)) = \frac{1}{60} {t}^{5}+\frac{1}{3} {t}^{4}+\frac{25}{
12} {t}^{3}+\frac{11}{ 3} {t}^{2}-\frac{51}{ 10} t-14$$ \end{obs} The proof of the following theorem on the cohomology of $T$ is left at the end of this section. \begin{thm} \label{bott-cohomology} The non-zero cohomology of $T$ is the following:
\begin{align*}
& \hh^0(T(t)) = \chi(T(t)) \qquad \mbox{for $t\geq 2$} \\
& \hh^1(T(-2)) = 1 \quad \hh^1(T(-1)) = 7 \quad \hh^1(T) = 14 \quad \hh^1(T(1)) = 13 \quad \hh^1(T(2)) = 1 \quad \\
& \hh^2(T(-3)) = 1
\end{align*} and their Serre-dual $\hh^{5-i}(T(t)) = \hh^i(T(-t-8))$. $T'=T(-4)$ is the only twist for which all cohomology groups vanish. \end{thm}
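Note that the theorem is consistent with the observation above: for instance $\chi(T)=-\hh^1(T)=-14$, which is indeed the constant term of the polynomial $\chi(T(t))$.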
For this we first need an observation on the map $f$. \begin{obs}\label{fstar} For the map $f$ above we have
\begin{align}
& f_{*}(\OO_{\p^5}) = \OO \oplus \OO(-1)^{14} \oplus \OO(-2)
\nonumber \\
\label{fstar1}
& f_{*}(\OO_{\p^5}(1)) = S \oplus \OO^6 \oplus \OO(-1)^6
\end{align} \end{obs} \begin{proof} Let $\FF = f_{*}(\OO_{\p^5}), \GG = f_{*}(\OO_{\p^5}(1))$. The map $f$ is a $16\colon 1$ cover, because the Frobenius is $32 \colon 1$ and the projection $\pi$ is $2\colon1$. Then $\FF$ and $\GG$ are rank-16 vector bundles, whose cohomology one can read from the Leray degenerate spectral sequence. Indeed since $R^i(f_*) = 0$ (for $i>0$) we have
\begin{align*}
& \HH^i(Q_5,\FF(t)) = \HH^i ( \p^5, f^* (\OO_{Q_5}(t))) = \HH^i (\p^5 , \OO_{\p^5} (2t)) \\
& \HH^i(Q_5,\GG(t)) = \HH^i ( \p^5, f^* (\OO_{Q_5}(t+1))) = \HH^i (\p^5 , \OO_{\p^5} (2t+1))
\end{align*} for $0\leq i\leq 5$ and every $t$. This says that $\FF$ and $\GG$ have no intermediate cohomology, hence by \cite{kapranov} or \cite{buchweitz-greuel-schreyer} they must decompose as a sum of spinor bundles $S$ and line bundles, up to twist (although actually Kapranov's setting is over $\C$). For $\GG$ this implies, by a computation on the Euler characteristic, that the only choice is the one stated, while for $\FF$ we have a priori two possibilities: the one stated above or $S \oplus S(-1) \oplus \OO \oplus \OO(-1)^6 \oplus \OO(-2)$. \\ Now the above formula says that the polynomial ring $S(\p^5)$ decomposes as a module over $S(Q_5)$ (under the action given by $f$) as $S(\p^5)_{\even} \oplus S(\p^5)_{\odd}$ where:
$$S(\p^5)_{\even} = \Oplus_{t \in \Z} \HH^0(\p^5,\OO(2t)) \qquad
S(\p^5)_{\odd} = \Oplus_{t \in \Z} \HH^0(\p^5,\OO(2t+1))$$ For $\FF$ we have to compute explicitly a presentation of the $S(Q_5)-$module $S(\p^5)_{\even}$. We need $e_0$ to generate $k=S(\p^5)_{0}$ and $e_{ij}$ to generate the monomial $x_ix_j$ ($i\neq j$) in $S(\p^5)_{2}$, thus obtaining a map
$$\Phi \colon S(Q_5) \oplus S(Q_5)(-1) ^{15}
\longrightarrow S(\p^5)_0 \oplus S(\p^5)_{2}$$ But the coordinate $e_{45}$ is redundant, since $z_0 \Phi(e_0) + \Phi(e_{01}) + \Phi(e_{23}) = \Phi(e_{45})$. Now for $S(\p^5)_{4}$: the terms containing $x_i^2$ already lie in the image (got by the action of $z_{i-1}$), and in fact we just have to fix $x_0x_1x_2x_3$ because, e.g. $x_0x_1x_2x_4 = z_3\Phi(e_{34})+z_5\Phi(e_{25})+z_0\Phi(e_{24})$ and $x_0x_1x_4x_5=x_0x_1x_2x_3+z_1z_2\Phi(e_0)+z_0\Phi(e_{01})$. Thus we get a generator in degree $2$ (and no syzygy); moreover $x_0x_1x_2x_3x_4x_5 = z_1z_2\Phi(e_{23})+z_3z_4\Phi(e_{01})+z_0(x_0x_1x_2x_3)$, so that $S(\p^5)_{6}$, and in fact all $S(\p^5)_{\even}$, is also covered.
The presentation can also be computed for $S(\p^5)_{\odd}$, where one finds as the syzygy of a map $S(Q_5)^{6} \oplus S(Q_5)(-1)^{14} \rightarrow S(\p^5)_{\odd}$ the matrix giving the spinor bundle described later in (\ref{spinmatrix}), thus getting again (\ref{fstar1}). \end{proof}
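The relations used in the proof are also easy to check by machine; the following {\tt SymPy} snippet (illustrative only, with variable names of our own choosing) verifies two of them modulo 2:
\begin{verbatim}
import sympy as sp

x0, x1, x2, x3, x4, x5 = sp.symbols('x0:6')
z0 = x0*x1 + x2*x3 + x4*x5       # first coordinate of f
z3, z5 = x2**2, x4**2            # z_i = x_{i-1}^2 for i >= 1

def zero_mod2(p):
    return all(c % 2 == 0 for c in
               sp.expand(p).as_coefficients_dict().values())

# e_45 is redundant: z0*Phi(e_0) + Phi(e_01) + Phi(e_23) = Phi(e_45)
print(zero_mod2(z0 + x0*x1 + x2*x3 - x4*x5))                        # True

# x0*x1*x2*x4 lies in the image:
print(zero_mod2(x0*x1*x2*x4 - (z3*x3*x4 + z5*x2*x5 + z0*x2*x4)))    # True
\end{verbatim}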
\begin{rmk}
It seems likely that the observation (\ref{fstar}) can be extended to any odd dimension. Here we only mention that for $f \colon \p^3 \longrightarrow Q_3$ we get $f_*(\OO_{\p^3}) = S \oplus \OO_{Q_3} \oplus \OO_{Q_3} (-1)$ and $f_*(\OO_{\p^3}(1))= \OO_{Q_3}^4$ which can be computed by the presentation or by Euler characteristic.
Lastly, one may notice that $\pi_*(S)=\OO_{\p^5}(-1)^8$ and clearly
$\pi_*(\OO_{\p^5}(t))=\OO(t) \oplus \OO(t-1)$ so that the extension $0 \rightarrow S(-1) \rightarrow \OO(-1)^8 \rightarrow S \rightarrow 0$ becomes split after $\pi_*$ (actually this holds in any characteristic). This agrees with the formula
\begin{align*}
& \varphi_*(\OO_{\p^5}) = \OO \oplus \OO(-1)^{15} \oplus \OO(-2)^{15} \oplus \OO(-3) \\
& \varphi_*(\OO_{\p^5}(1)) = \OO^6 \oplus \OO(-1)^{20} \oplus \OO(-2)^{6}
\end{align*} \end{rmk}
Next we compute the cohomology of $\Sym ^2 C$, $C \otimes C$, $S \otimes C$. First notice that if $V$ is the $SL(2)-$representation giving $C$, when $\ch(k)=2$ the representation $\Sym ^2 V$ (having weight $2\lambda_2 - 4\lambda_1 $) will not be irreducible
(recall that in positive characteristic $SL(2)$ is not linearly reductive, see \cite{nagata}); on the contrary, letting $C^{[2]} = \varphi ^* (C)$, we have the non-split exact sequence
\begin{equation} \label{C2}
0 \longrightarrow C^{[2]} \longrightarrow \Sym ^2 C
\longrightarrow \OO(-1) \longrightarrow 0
\end{equation} We also have
\begin{align} \label{phiQ5}
& \varphi_*(\OO_{Q_5}) = \OO \oplus \OO(-1)^{20} \oplus \OO(-2)^{7} \oplus S(-1) \\
& \varphi_*(\OO_{Q_5}(1)) = \OO^7 \oplus \OO(-1)^{20} \oplus \OO(-2) \oplus S \nonumber
\end{align} Now again by Borel-Bott-Weil theorem (\cite[Proposition 5.4]{jantzen}) we know $\hh^i(\Sym^2 C(3)) = 0$, $\Sym^2 C(4)$ is globally generated and:
\begin{align*}
& \hh^0(\Sym^2 C(t)) = \chi(\Sym ^2 C(t)) = \frac{1}{20}\,{t}^{5}+\frac{3}{8}\,{t}^{4}-{\frac {27}{8}}\,{t}^{2}-{\frac{81}{20}}\,t \quad \text{for $t\geq 4$} \\
& \quad \hh^1(\Sym^2 C(2)) = 14 \quad \hh^1(\Sym^2 C(1)) = 7
\quad \hh^2(\Sym^2 C(-1)) = 1
\end{align*} where in the above twists these are the only non-vanishing $\HH^i$'s. By Serre duality we only miss $\HH^i(\Sym ^2 C)$. Now since $R^i( \varphi _*) = 0$, by Leray spectral sequence, (\ref{C2}) and (\ref{phiQ5}) we get:
\begin{equation*}\begin{split}
\HH^2(\Sym^2 C) = & \HH^2(C ^{[2]}) \overset{\text{Leray}}{=} \HH^2(C \otimes S(-1)) \overset{\text{Leray}}{=} \\
& = \HH^2(C ^{[2]}(-1)) = \HH^2(\Sym^2 C(-1)) \overset{\text{Bott}}{=} k
\end{split}\end{equation*} and by the same argument $\hh^1(\Sym^2 C) = 1$, while the remaining $\hh^i$ are zero. The same procedure yields the values of $\hh^i(S \otimes C (t))$
\begin{align} \label{cohomology-SC}
& \hh^1(S\otimes C )=1 & \hh^1(S\otimes C (1))=6 \\
& \hh^2(S\otimes C (-1))=1 & \hh^3(S\otimes C (-2))=1 \nonumber\\
& \hh^4(S\otimes C (-4))=6 & \hh^4(S\otimes C (-5))=1 \nonumber
\end{align} Now we can prove theorem (\ref{bott-cohomology}) \begin{proof}[Proof of theorem (\ref{bott-cohomology})] Since $R^i(f_*) = 0$ for $i>0$ we use the Leray degenerate spectral sequence that here reads:
\begin{align} \label{leray}
\HH ^i(\p^5,T(2t)) = & \HH^i(Q_5,f_*(\OO_{\p^5}) \otimes C(1+t)) \nonumber \\
\HH ^i(\p^5,T(2t+1)) = & \HH^i(Q_5,f_*(\OO_{\p^5}(1)) \otimes C(1+t))
\end{align} The even part gives $\HH ^i(\p^5,T(2t))=\HH^i(Q_5, C(1+t)) \oplus \HH^i(Q_5,C(t))^{14} \oplus \HH^i(Q_5,C(t-1))$, while the odd part gives
$\HH^i(\p^5,T(2t+1)) = \HH^i(Q_5, C(1+t))^6 \oplus \HH^i(Q_5,
C(t))^6 \oplus \HH^i(Q_5, S \otimes C(1+t))$ and these are known by the
above formulas. \end{proof}
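For instance, taking $t=0$ in the even part of (\ref{leray}) and recalling that the only non-vanishing intermediate cohomology groups of the twists of $C$ are $\HH^1(C)=\HH^4(C(-4))=k$, we get $\hh^1(T)=\hh^1(C(1))+14\,\hh^1(C)+\hh^1(C(-1))=0+14+0=14$, as stated.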
Now we can write the Beilinson table of the normalized $T(-1)$ (rather than of $T$) that reads:
$$\left(\begin{array}{cccccc}
0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 7 \\
0 & 0 & 0 & 0 & 0 & 0
\end{array} \right)$$ Then we write $T(-1)$ as cohomology of a monad:
$$ \OO(-1) \oplus \Omega ^4 (4) \longrightarrow \Omega ^2 (2)
\oplus \Omega ^1 (1) \longrightarrow \OO ^7 $$
The following remark shows that our situation is more ``rigid'' than one may expect. \begin{rmk} We notice that we have the diagram: $$\xymatrix@C+2ex{
& Q_5 \ar[d]_-{\pi} \ar^-{\varphi}[r] & Q_5\\
\p^5 \ar_{\varphi}[r] \ar^f[ur] & \p^5 \ar_f[ur] &
}$$ Then, as Edoardo Ballico pointed out to us, we may relate this
framework to a description given by Ekedahl in \cite[proposition 2.5]{eke-foliations} of finite morphisms $\psi$ (with $Y$ smooth) that factor through the Frobenius $\varphi$ in characteristic $p>0$:
$$\xymatrix{
Y \ar[d]_-{\psi} \ar^-{\varphi}[r] & Y\\
\p^n \ar[ur] &
}$$ Ekedahl shows that in such a situation $Y$ is $Q_n$, $n$ is odd, $\psi$ is the projection from a point external to $Q_n$ and the characteristic is 2. That is, precisely our setup. \end{rmk} \begin{rmk} The values $\hh^2(\Sym^2 C)=\hh^1(\Sym^2 C)=1$ exhibit non-standard cohomology for the representation $\Sym^2 V$. Indeed $3\lambda_2 - 3 \lambda_1$ is singular ($(3\lambda_2 - 3 \lambda_1,3 \alpha_1 + \alpha _2)=0$) so the standard Borel-Bott-Weil theorem (i.e. in characteristic 0) would give $\hh^i(\Sym^2 C)=0$. \\ Of course, we would have no such sequence as (\ref{C2}). Still, by tensoring the monad defining $C$ by $C(t)$ we get
\begin{gather*}
0 \longrightarrow G^{\vee} \otimes C(t) \longrightarrow S \otimes
C(t) \longrightarrow C(t) \longrightarrow 0
\\
0 \longrightarrow C(t-1) \longrightarrow G^{\vee} \otimes C(t)
\longrightarrow C \otimes C(t) \longrightarrow 0
\end{gather*} whence we derive the values of $\hh^i(S \otimes C(t))$ from those of $\Sym^2 C$, since if $\ch (k) \neq 2$ then $C \otimes C = \Sym^2 C \oplus \OO (-1)$. This way we would get the same values as in (\ref{cohomology-SC}) if $\ch (k)=0$. \\ Finally, one can compute in {\tt Macaulay2} the values of the cohomology of $C^{[2]}$ and check the correctness of the above result. \end{rmk}
\section{The Cayley bundle and Tango's equations} \label{sec-2}
In this section we work over any algebraically closed field $k$ and prove that Tango's equations in \cite{tan-morphisms-I} give $C(1)$. \\ First we complete Tango's $3 \times 6$ matrix by stacking 9 rows to get $A$ having everywhere rank 3 over the quotient ring $S(Q_5) = k[z_0,\ldots,z_6]/(z_0^2+z_1z_2+z_3z_4+z_5z_6)$
\begin{equation} \label{matrix-A}
A = {\scriptsize
\left(
\begin{array}{cccccc}
z_0^2 & 0 & 0 & z_1^2 & z_1z_3+z_0z_6 & -z_0z_4+z_1z_5 \\
0 & z_0^2 & 0 & z_1z_3-z_0z_6 & z_3^2 & z_0z_2+z_3z_5 \\
0 & 0 & z_0^2 & z_0z_4+z_1z_5 & -z_0z_2+z_3z_5 & z_5^2 \\
0 & z_0z_2-z_3z_5 & z_3^2 & -z_0z_3-z_2z_6 & 0 & z_2^2 \\
z_5^2 & 0 & z_0z_4-z_1z_5 & z_4^2 & -z_2z_4-z_0z_5 & 0 \\
-z_3^2 & z_1z_3+z_0z_6 & 0 & -z_6^2 & 0 & -z_0z_3+z_2z_6 \\
0 & z_5^2 & -z_0z_2-z_3z_5 & -z_2z_4+z_0z_5 & z_2^2 & 0 \\
z_1z_3-z_0z_6 & -z_1^2 & 0 & 0 & -z_6^2 & z_0z_1+z_4z_6 \\
z_0z_4+z_1z_5 & 0 & -z_1^2 & 0 & -z_0z_1+z_4z_6 & -z_4^2 \\
-z_2^2 & -z_2z_4-z_0z_5 & z_0z_3-z_2z_6 & z_0^2 & 0 & 0 \\
z_2z_4-z_0z_5 & z_4^2 & z_0z_1+z_4z_6 & 0 & z_0^2 &0 \\
-z_0z_3-z_2z_6 & z_0z_1-z_4z_6 & -z_6^2 & 0 & 0 & -z_0^2
\end{array}
\right)}
\end{equation} indeed we have:
\begin{align*}
& \sat (A) = \sat ((\wedge ^2 A)) = \sat ((\wedge ^3 A)) = (1) \\
& \wedge ^4 A = (0)
\end{align*}
Now let $H$ be the sheaf $(\im (A) \widetilde{\; \;})^{\vee}(-1)$. Tango's bundle $F$ is defined as the cokernel $0 \rightarrow \OO \rightarrow H \rightarrow F \rightarrow 0$.
One can ask {\tt Macaulay2} for the cohomology groups $\HH^i(H)$ and the result is:
\begin{align*}
& \hh^i(Q_5,H(t)) = 0 \qquad \forall \hspace{1mm} t, \hspace{3mm} 0<i<5, \mbox{ except:} \\
& \hh^1(Q_5,H(-1)) = 1
\end{align*} Therefore we have:
$$\Ext^1(\OO,H(-1)) = k$$ the associated extension is:
$$ 0 \longrightarrow H(-1) \longrightarrow W \longrightarrow \OO \longrightarrow 0 $$ where $W$ is a rank-4 bundle whose intermediate cohomology is forced to be zero. Then by \cite{buchweitz-greuel-schreyer} or \cite{kapranov} and the Euler characteristic we get $W = S$ and $H(-1)=G^{\vee}$, so that $F = C(1)$.
A different method would be proving that $C(1)$ and $F$ have the same resolution. The resolution of $C(1)$ can be obtained on the computer in the following way. We first need a matrix for the spinor bundle $S$ and we derive it from \cite{beauville-determinantal}. \\ We know that $S$ is a rank$-4$ bundle with no intermediate cohomology, $S^{\vee}= S(1)$, and $\hh^0(S(1))=\hh^5(S(-5))=8$. \\ Then by Beilinson theorem $S(1)$ extended by zero to $\p^6$ has the resolution:
$$ 0 \longrightarrow \OO_{\p^6}(-1)^8 \overset{B}{\longrightarrow} \OO_{\p^6}^8
\longrightarrow S(1) \longrightarrow 0 $$ where now $B$, by the observations in \cite{beauville-determinantal}, is an antisymmetric matrix whose determinant is the equation of the quadric, to the power 4. This is done by the matrix
\begin{equation} \label{spinmatrix}
B =
\begin{pmatrix}
0 & 0 & 0 & -z_3 & 0 & -z_1 & z_5 & -z_0 \\
0 & 0 & z_3 & 0 & z_1 & 0 & -z_0 & -z_6 \\
0 & -z_3 & 0 & 0 & -z_5 & z_0 & 0 & -z_2 \\
z_3 & 0 & 0 & 0 & z_0 & z_6 & z_2 & 0 \\
0 & -z_1 & z_5 & -z_0 & 0 & 0 & 0 & z_4 \\
z_1 & 0 & -z_0 & -z_6 & 0 & 0 & -z_4 & 0 \\
-z_5 & z_0 & 0 & -z_2 & 0 & z_4 & 0 & 0 \\
z_0 & z_6 & z_2 & 0 & -z_4 & 0 & 0 & 0
\end{pmatrix}
\end{equation}
Incidentally, we mention that by matrices written in such a fashion one can obtain the spinor bundles over the quadric of any dimension. Here by a standard mapping cone construction we get an infinite 2-periodic resolution of $C(1)$ of the form:
$$ \textstyle {R^{\bullet}
\overset{\delta}{\longrightarrow} \OO_{Q_5}(-4)^{55}
\rightarrow \OO_{Q_5}(-3)^{49}
\rightarrow \OO_{Q_5}(-2)^{34}
\rightarrow \OO_{Q_5}(-1)^{14}
\rightarrow C(1) \rightarrow 0}
$$ where
$$\textstyle{R^{\bullet} =
\cdots \overset{d}{\longrightarrow} \OO_{Q_5}(-i-3)^{56}
\overset{e}{\longrightarrow} \OO_{Q_5}(-i-2)^{56}
\overset{d}{\longrightarrow} \OO_{Q_5}(-i-1)^{56}
\overset{e}{\longrightarrow}
\cdots}$$ and this coincides with what we get by Tango's original construction.
We remark that the periodicity here is a standard behaviour, by \cite{eisenbud}. \\ The rank of the kernel of $\delta$ is 28 and again it must have no intermediate cohomology. Then Euler characteristic shows that it must be $S(-5)^{\oplus 7}$, i.e. the resolution actually reads:
$$ \textstyle {0 \rightarrow S(-5)^{\oplus 7}
\rightarrow \OO_{Q_5}(-4)^{55}
\rightarrow \OO_{Q_5}(-3)^{49}
\rightarrow \OO_{Q_5}(-2)^{34}
\rightarrow \OO_{Q_5}(-1)^{14}
\rightarrow C(1) \rightarrow 0}
$$ This agrees with \cite[p.~197]{ottaviani-szurek} (here we consider the dual spectral sequence), i.e. the above is a {\em Kapranov sequence} for $C(1)$.
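As a consistency check, the alternating sum of the ranks in this resolution gives back $\rk C(1)$: $14-34+49-55+7\cdot 4 = 2$.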
Yet another method is the following. First one proves the analogue for $Q_5$ of \cite[lemma 1]{tan-morphisms-I}, i.e.
\begin{lem}\label{enumerative} Let $\rho \colon Q_n \longrightarrow \G(\p^k,\p^n)$ be a
non-constant morphism, with $n$ odd and $k$ even, and let $\EE$ be the
pull-back on $Q_n$ of the dual universal sub-bundle $U^{\vee}$ on $\G(\p^k,\p^n)$.
Then $k=\frac{n-1}{2}$, and:
$$c_i (\EE)= 2a^i$$ for some positive integer $a$. \end{lem}
Here $\rho^*(U)(1)=H$ defined above, where $\rho$ is given by the matrix $A$ (\ref{matrix-A}): a row of $A(x)$ is a point in $\p ^6$ and $\rk (A(x))=3$, hence $A(x)$ represents a $\p^2 \subset \p^6$. \\ But we know that $\rho^*(U)$ contains the sub-line-bundle $\OO(-1)$, so that $a=1$. So $H_{\norm} = H(-1)$. Then {\tt Macaulay2} gives $\HH^0(H_{\norm}) = \HH^0(\wedge^2 H_{\norm}) =0$ thus $H$ is stable by Hoppe's criterion. Hence we can conclude by \cite{ott-spinor} that $\rho^*(U)=G^{\vee}$ and $F=C(1)$.
\begin{proof}[Proof of lemma (\ref{enumerative})] The proof is almost identical to the case of $\p^n$, the only difference being that we have to work in $H^*(Q_n)$, where $\eta^{\frac{n+1}{2}}=2\zeta$. \\ We can suppose $k$ even and $k\leq\frac{n-1}{2}$ because $\G(\p^k,\p^n) \cong \G(\p^{n-k-1},\p^n)$, and we put $a_i = c_i(\rho^*(\EE))$, $b_i = c_i(\rho ^*(Q))$ where $Q$ is the universal quotient bundle. We have, in the ring $A(Q_n)[t]$, the relation on Chern polynomials
\begin{equation} \label{chern-1}
c _{\EE^{\vee}} (t) \cdot c_{\rho^*(Q)} (t) = 1
\end{equation} Now we can think of the coefficients in (\ref{chern-1}) as integers times some $\eta^r$, taking care to replace $\zeta$ by $\frac{1}{2}\eta^{\frac{n+1}{2}}$, that is, replacing $a_i$ (and $b_i$) by $a_i'=\frac{1}{2}a_i$ (by $b_i'=\frac{1}{2}b_i$) whenever $i\geq\frac{n+1}{2}$. \\ Then one proceeds exactly as in \cite[lemma 1]{tan-morphisms-I} and \cite[lemma 3.3]{tan-projective}, and finds:
\begin{align*}
& k = \frac{n-1}{2} \\
& a'_i = 2 a^i \qquad \mbox{for $i=1,\ldots,\frac{n-1}{2}=k$} \\
& a'_{k+1} = a^{k+1}
\end{align*} and so $a_i = 2a^i$, for all $i$'s, as only for $i=k+1$ we have to substitute $a_{k+1} = 2a'_{k+1}$. \end{proof}
\section{further remarks} \label{sec-3}
Now we turn back to $\p^5$ and $\ch(k)=2$. As we have the equations for $T$, {\tt Macaulay2} provides the following resolution
$$
\xymatrix@C-2ex{
0 \ar[r]
& *\txt{$\OO^7(-7)$ \\ $\oplus$ \\ $\OO(-8)$} \ar[r]
& \OO^{49}(-6) \ar[r]
& \OO^{98}(-5) \ar[r]
& \OO^{76}(-4)\ar[r]
& *\txt{$\OO^{7}(-3)$ \\ $ \oplus $ \\ $\OO^{14}(-2)$} \ar[r]
& T \ar[r] & 0
}$$ One may check that this gives back the Chern classes $2,4$. \\ As Wolfram Decker pointed out to me, another way to get Tango's bundle is via the Horrocks bundle $\HHH$ in characteristic 2. Concretely, $\HHH$ becomes the middle term of a non-split extension $0 \rightarrow T(-1) \rightarrow \HHH
\rightarrow \OO \rightarrow 0$. \\ This allows us to compute the cohomology of $\HHH$ in terms of $T$.
Moreover in \cite{decker} one finds an explicit description of the maps in the Beilinson monad. This also provides a check of the computations. Denoting by $e_0,\ldots,e_5$ the canonical basis of $E$ (the exterior algebra over $V$), and using the natural isomorphism $\Hom(\Omega^i(i),\Omega^j(j)) = \wedge ^{i-j} V = E_{j-i}$, $T$ is the cohomology of the maps $\alpha$ and $\beta$:
$$\begin{array}{c}
\beta = \left( \begin{array}{cc}
e_0 & e_4e_5 \\
e_1 & e_3e_5 \\
e_2 & e_3e_4 \\
e_3 & e_1e_2 \\
e_4 & e_0e_2 \\
e_5 & e_0e_1 \\
0 & e_0e_3+e_1e_4+e_2e_5
\end{array}\right)
\\ \\
\alpha = \left( \begin{array}{ccc}
e_0e_1e_2+e_3e_4e_5 & e_0e_1e_3e_4+e_0e_2e_3e_5+e_1e_2e_4e_5 & 0 \\
e_0e_3+e_1e_4+e_2e_5 & e_0e_1e_2+e_3e_4e_5 & e_1e_2e_4e_5
\end{array}\right)
\end{array}
$$ If we compute the resolution we get back the same result, thus restating what is said in \cite[proposition 1.8.]{decker}.
Applying cohomology algorithms in {\tt Macaulay2} developed by Decker, Eisenbud and Schreyer, one may also obtain a full table of the cohomology, which I write in {\tt Macaulay2} notation:
\begin{equation} \label{tango-cohomology}
\vtop{
\hbox{{\tt {}total:\ 573\ 260\ 92\ 27\ 14\ 7\ 2\ 2\ 7\ 14\ 27\ 92\ 260\ 573}}
\hbox{{\tt {}\ \ \ -6:\ 573\ 260\ 91\ 14\ \ .\ .\ .\ .\ .\ \ .\ \ .\ \ .\ \ \ .\ \ \ .}}
\hbox{{\tt {}\ \ \ -5:\ \ \ .\ \ \ .\ \ 1\ 13\ 14\ 7\ 1\ .\ .\ \ .\ \ .\ \ .\ \ \ .\ \ \ .}}
\hbox{{\tt {}\ \ \ -4:\ \ \ .\ \ \ .\ \ .\ \ .\ \ .\ .\ 1\ .\ .\ \ .\ \ .\ \ .\ \ \ .\ \ \ .}}
\hbox{{\tt {}\ \ \ -3:\ \ \ .\ \ \ .\ \ .\ \ .\ \ .\ .\ .\ 1\ .\ \ .\ \ .\ \ .\ \ \ .\ \ \ .}}
\hbox{{\tt {}\ \ \ -2:\ \ \ .\ \ \ .\ \ .\ \ .\ \ .\ .\ .\ 1\ 7\ 14\ 13\ \ 1\ \ \ .\ \ \ .}}
\hbox{{\tt {}\ \ \ -1:\ \ \ .\ \ \ .\ \ .\ \ .\ \ .\ .\ .\ .\ .\ \ .\ 14\ 91\ 260\ 573}}
}
\end{equation} Reading the table along one antidiagonal gives the list of cohomology groups of a single twist. Here the list for $T$ starts from the up-right corner, while starting from a shift to the left means reading the list for a $(-1)-$twist. \\ One can check that table (\ref{tango-cohomology}) agrees with theorem \ref{bott-cohomology}.
\end{document} | arXiv |
\begin{document}
\maketitle
\begin{abstract} This note is concerned with the unipotent characters of the Ree groups of type $G_2$. We determine the roots of unity associated by Lusztig and Digne-Michel to unipotent characters of $^2G_2(3^{2n+1})$ and we prove that the Fourier matrix of $^2G_2(3^{2n+1})$ defined by Geck and Malle satisfies a conjecture of Digne-Michel. Our main tool is the Shintani descent of Ree groups of type $G_2$. \end{abstract}
\section{Introduction}\label{intro}
Let $\mathbf{G}$ be a connected reductive group defined over the finite field $\mathbb{F}_q$ with $q$ elements of characteristic $p>0$, and let $F$ be the corresponding Frobenius map. Let $\mathbf{T}$ be a maximal rational torus of $\mathbf{G}$, contained in a rational Borel subgroup $\mathbf{B}$ of $\mathbf{G}$. We denote by $W=\operatorname{N}_{\mathbf{G}}(\mathbf{T})/\mathbf{T}$ the Weyl group of $\mathbf{G}$. For $w\in W$, there is the corresponding Deligne-Lusztig character $R_w$ of the finite fixed-point subgroup $\mathbf{G}^F$. We refer to~\cite[\S7.7]{Carter2} for a precise construction. Then we define the set of unipotent characters of $\mathbf{G}^F$ by
$$\mathcal{U}(\mathbf{G}^F)=\{\chi\in\operatorname{Irr}(\mathbf{G}^F)\ |\ \cyc{\chi,R_w}_{\mathbf{G}^F}\neq 0\ \textrm{for some }w\in W\}.$$
Lusztig~\cite{LusBook} and Digne-Michel~\cite{DMShin} associated to every $\chi\in\mathcal{U}(\mathbf{G}^F)$ a root of unity~$\omega_{\chi}\in\overline{\mathbb{Q}}_{\ell}$ (where $\overline{\mathbb{Q}}_{\ell}$ is an algebraic closure of the field of $\ell$-adic numbers, for a prime $\ell\neq p$) as follows.
We denote by $\delta$
the order of the automorphism of $W$ induced by $F$. For $w\in W$, we denote by $n_w\in\operatorname{N}_{\mathbf{G}}(\mathbf{T})$ an element such that $n_w\mathbf{T}=w$. We set $X_w=\{x\mathbf{B}\ |\ x^{-1}F(x)\in \mathbf{B} n_w\mathbf{B}\}$, the corresponding Deligne-Lusztig variety. For every integer $j$ we denote by $H_c^j(X_w,\overline{\mathbb{Q}}_{\ell})$ the $j$-th $\ell$-adic cohomology space with compact support over $\overline{\mathbb{Q}}_{\ell}$, associated to $X_w$.
The groups $\cyc{F^\delta}$ and $\mathbf{G}^F$ act on $X_w$, which induces linear operations on $H_c^j(X_w,\overline{\mathbb{Q}}_{\ell})$. Hence $H_c^j(X_w,\overline{\mathbb{Q}}_{\ell})$ is a $\overline{\mathbb{Q}}_{\ell}\mathbf{G}^F$-module, and $F^\delta$ acts on this space as a linear endomorphism. We fix an eigenvalue $\lambda$ of $F^\delta$, and we denote by $F_{\lambda,j}$ its generalized eigenspace. Since the actions of $\cyc{F^\delta}$ and $\mathbf{G}^F$ commute, the space $F_{\lambda,j}$ is a $\overline{\mathbb{Q}}_{\ell}\mathbf{G}^F$-module. Moreover, the irreducible characters of $\mathbf{G}^F$ occurring in this module are unipotent. Conversely, for every unipotent character $\chi$ of $\mathbf{G}^F$, there are $w\in W$, $\lambda\in\overline{\mathbb{Q}}_{\ell}^{\times}$ and $j\in\mathbb{N}$ such that $\chi$ occurs in the character associated to $F_{\lambda,j}$. Lusztig~\cite{LusBook} and Digne-Michel~\cite{DMShin} have shown that $\lambda=q^{s/2}\omega_{\chi}$ for some non-negative integer $s$ and a root of unity $\omega_{\chi}\in\overline{\mathbb{Q}}_{\ell}^\times$ which depends only on $\chi$.
The roots of unity associated as above to the unipotent characters have been computed by Lusztig in~\cite{LusBook} for finite reductive groups when
$F$ is split. Moreover, Lusztig computed in~\cite{LusCox} the roots for the unipotent characters appearing in $ H_c^j(X_{w_{\operatorname{cox}}},\overline{\mathbb{Q}}_{\ell})$, where $w_{\operatorname{cox}}$ denotes the Coxeter element of $W$, with no condition on~$F$. This work was completed for the cases where $F$ is a non-split Frobenius map by Geck and Malle in~\cite{GM}. However, in a few cases, the methods of Lusztig and Geck-Malle determine the roots only up to complex conjugation. This is for example the case for the Suzuki and the Ree groups. In~\cite{Br3} we removed this indeterminacy for the unipotent characters of the Suzuki groups.
Moreover, we recall that Lusztig~\cite{LusBook} associated to most of the sets $\mathcal{U}(\mathbf{G}^F)$ some non-abelian Fourier matrices, which involve the decomposition into unipotent characters of $R_w$ for $w\in W$.
This note is concerned with the Fourier matrices and the roots of unity associated as above to the unipotent characters of the Ree group~$^2G_2(q)$ for $q=3^{2n+1}$. For these groups, the method in~\cite{LusBook} does not allow one to define Fourier matrices. However, using the theory of character sheaves, Geck and Malle gave a more general definition for these matrices~\cite[5.1]{GM}. For the Ree groups of type $G_2$, they obtained the following matrix~\cite[5.4]{GM} $$ \frac{\sqrt{3}}{6} \begin{bmatrix} 1&1&1&1&2&2\\ \sqrt{3}&-\sqrt{3}&\sqrt{3}&-\sqrt{3}&0&0\\ 0&0&2&2&0&-2\\ 2&2&0&0&-2&0\\ 1&1&-1&-1&2&-2\\ \sqrt{3}&-\sqrt{3}&-\sqrt{3}&\sqrt{3}&0&0\\ \end{bmatrix}. $$
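As a quick numerical sanity check, one may verify that this matrix is orthogonal, i.e.\ that its rows form an orthonormal family; for instance, in {\tt Python}:
\begin{verbatim}
import numpy as np

s = np.sqrt(3)
M = (s / 6) * np.array([
    [1,  1,  1,  1,  2,  2],
    [s, -s,  s, -s,  0,  0],
    [0,  0,  2,  2,  0, -2],
    [2,  2,  0,  0, -2,  0],
    [1,  1, -1, -1,  2, -2],
    [s, -s, -s,  s,  0,  0],
])
assert np.allclose(M @ M.T, np.eye(6))  # the rows are orthonormal
\end{verbatim}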
We set $I=\{1,3,5,6,7,8,9,10\}$. The Ree group $^2G_2(q)$ has $8$ unipotent characters, denoted in~\cite{Ward} by $\xi_k$ for $k\in I$. In~\cite{LusCox}, Lusztig shows that $\omega_{\xi_1}=1$, $\omega_{\xi_3}=1$, and $$\{\omega_{\xi_5},\omega_{\xi_{7}}\}=\{\pm i\}\quad
\quad\textrm{and}\quad \{\omega_{\xi_9},\omega_{\xi_{10}}\}=\left\{\frac{\pm i-\sqrt{3}}{2}\right\},$$ where $i\in\overline{\mathbb{Q}}_{\ell}$ is a square root of $-1$. This work was completed in~\cite{GM} by Geck-Malle, who proved that $\{\omega_{\xi_6},\omega_{\xi_{8}}\}=\{\pm i\}$.
The aim of this note is to compute the roots $\omega_{\xi_k}$ for $k\in I$.
Moreover, we will also show that the Fourier matrices of the Ree groups of type $G_2$ satisfy a conjecture of Digne-Michel~\cite{DMShin} that we recall in~\S\ref{part4}. These are new results, which complete the work of Lusztig~\cite{LusCox} and of Geck-Malle~\cite{GM} for the Ree groups of type $G_2$.
The paper is organized as follows. In Section~\ref{part1} we fix some notation and we give preliminary results. In Section~\ref{part2} we give results on the Shintani descents from $\mathbf{G}^{F^2}\semi F$ to $\mathbf{G}^F$ that we need in order to apply the same method as in~\cite{Br3}. In Section~\ref{part3} we compute the roots of unity associated as above to the unipotent characters of the Ree groups. Finally, in Section~\ref{part4} we show that the Fourier matrices for the Ree groups defined by Malle and Geck satisfy the Digne-Michel conjecture. \section{Notation and preliminary results}\label{part1}
\subsection{Notation} Let $\mathbf{G}$ be a simple algebraic group of type $G_2$ over an algebraic closure $\overline{\mathbb{F}}_3$ of the finite field $\mathbb{F}_3$ with $3$ elements.
We denote by $\Sigma$ the root system of type $G_2$, and by $\Pi=\{a,b\}$ a fundamental system of roots. We choose $a$ for the short root, and $b$ for the long one. We denote by $\Sigma^+=\{a,b,a+b,2a+b,3a+b,3a+2b\}$
the set of positive roots with respect to $\Pi$. For $r\in\Sigma$ and $t\in\overline{\mathbb{F}}_3$, there is the corresponding Chevalley element $x_r(t)\in\mathbf{G}$. We recall that $\mathbf{G}=\cyc{x_r(t)\ |\ r\in\Sigma, t\in\overline{\mathbb{F}}_3^{\times}}$. We set
$$\mathbf{U}=\cyc{x_r(t)\ |\ r\in\Sigma^+,\,t\in\overline{\mathbb{F}}_3}\quad
\textrm{and}\quad\mathbf{T}=\cyc{h_r(t)\ |\ r\in\Sigma^+,\,t\in\overline{\mathbb{F}}_3^{\times}},$$ where $h_r(t)=x_{-r}(t^{-1}-1)x_r(1)x_{-r}(t-1)x_r(-t^{-1})$. The subgroup $\mathbf{T}$ is a maximal torus of $\mathbf{G}$, contained in the Borel subgroup $\mathbf{B}=\mathbf{T}\mathbf{U}$ of $\mathbf{G}$. The Weyl group of $\mathbf{G}$ is $W=\operatorname{N}_{\mathbf{G}}(\mathbf{T})/\mathbf{T}$.
For every positive integer $m$, the Frobenius map $F_m$ on $\mathbf{G}$ is defined on the Chevalley generators by setting $F_m(x_r(t))=x_r(t^{3^m})$. As in~\cite[\S12.4]{Carter1}, we define an automorphism $\alpha$ of $\mathbf{G}$ by setting $\alpha(x_{r}(t))=x_{\rho(r)}(t)$ if $r$ is a long root and $\alpha(x_{r}(t))=x_{\rho(r)}(t^3)$ if $r$ is short, where $\rho$ is the unique angle-preserving and length-changing bijection of $\Sigma$ which preserves $\Pi$.
Throughout this paper, we fix a positive integer $n$. We set $\theta=3^n$ and $q=3\theta^2$. We write $F=\alpha\circ F_n$. We then have $F^2=F_{2n+1}$. The fixed-point subgroups $\mathbf{G}^F$ and $\mathbf{G}^{F^2}$ are the Ree group $^2G_2(q)$ and the finite Chevalley group $G_2(q)$ respectively. The subgroups $\mathbf{T}$, $\mathbf{U}$ and $\mathbf{B}$ are $F$-stable. We notice that the automorphism of $W$ induced by $F$ has order $2$.
Moreover, the Chevalley relations are, for $t,u\in\overline{\mathbb{F}}_3$: $$\begin{array}{lll} x_a(t)x_b(u)&=&x_b(u)x_a(t)x_{a+b}(-tu)x_{3a+b}(t^3u)x_{2a+b}(-t^2u)x_{3a+2b}(t^3u^2)\\ x_{a}(t)x_{a+b}(u)&=&x_{a+b}(u)x_a(t)x_{2a+b}(tu)\\ x_{b}(t)x_{3a+b}(u)&=&x_{3a+b}(u)x_b(t)x_{3a+2b}(tu)\\ x_{a+b}(t)x_{3a+b}(u)&=&x_{3a+b}(u)x_{a+b}(t)\\ x_{a+b}(t)x_{2a+b}(u)&=&x_{2a+b}(u)x_{a+b}(t)\\ x_{a+b}(t)x_{3a+2b}(u)&=&x_{3a+2b}(u)x_{a+b}(t)\\ x_{2a+b}(t)x_{3a+b}(u)&=&x_{3a+b}(u)x_{2a+b}(t)\\ x_{2a+b}(t)x_{3a+2b}(u)&=&x_{3a+2b}(u)x_{2a+b}(t) \end{array}$$
We fix a root $\alpha_0\in\overline{\mathbb{F}}_3^\times$ of $X^q-X+1$, and we set $\xi=\alpha_0^3-\alpha_0$. We have $$\xi^q=\alpha_0^{3q}-\alpha_0^q=\alpha_0^3-1-\alpha_0+1=\xi.$$ Therefore $\xi\in\mathbb{F}_q$. Moreover, $X^3-X-\xi\in\mathbb{F}_q[X]$ is irreducible over $\mathbb{F}_q$. Otherwise there is a $t\in\mathbb{F}_q$ with $t^3-t-\xi=0$, implying $(t-\alpha_0)^3=(t-\alpha_0)$. However, $t\neq\alpha_0$ (because $\alpha_0\not\in\mathbb{F}_q$). Thus $(t-\alpha_0)^2=1$. It follows that $\alpha_0=t\pm 1\in\mathbb{F}_q$. This is a contradiction.
The character table of $\mathbf{G}^{F^2}$ was computed by Enomoto in~\cite{Enomoto}. The description of this table depends on an element $\xi_0\in\mathbb{F}_q$ such that $X^3-X-\xi_0$ is an irreducible polynomial over $\mathbb{F}_q$. In the following we choose $\xi_0=\xi$.
We recall that the regular unipotent class $u_{\textrm{reg}}$ of $\mathbf{G}$ splits into $3$ classes $A_{51}$, $A_{52}$ and $A_{53}$ in $\mathbf{G}^{F^2}$, with representatives $x_a(1)x_b(1)$, $x_a(1)x_b(1)x_{3a+b}(\xi)$ and $x_a(1)x_b(1)x_{3a+b}(-\xi)$ respectively. Moreover, $u_{\textrm{reg}}$ also splits into $3$ classes in $\mathbf{G}^F$, whose representatives are not conjugate in $\mathbf{G}^{F^2}$. We denote by $Y_1$, $Y_2$ and $Y_3$ representatives with $Y_1\in A_{51}$, $Y_2\in A_{52}$ and $Y_3\in A_{53}$.
The group $\mathbf{G}^F$ has $q+8$ conjugacy classes. More precisely, we recall
that the system of representatives of classes of $\mathbf{G}^F$ given in~\cite[4.1]{Br5} is described as follows.
\begin{itemize} \item The trivial element $1$. \item The element $J=h_{a+b}(-1)$ which has a centralizer of order $q(q-1)(q+1)$. \item The element $X=x_{2a+b}(1)x_{3a+2b}(1)$ which has centralizer order $q^3$. \item The elements $T=x_{a+b}(1)x_{3a+b}(1)$ and $T^{-1}$ whose centralizers have order $2q^2$. \item The elements $Y_1$, $Y_2$ and $Y_3$ described as above whose centralizers have order $3q$. \item The elements $TJ$ and $T^{-1}J$ whose centralizers have order $2q$. \item A family $R$ of $(q-3)/2$ semisimple regular elements with centralizer order $q-1$. \item A family $S$ of $(q-3)/6$ semisimple regular elements with centralizer order $q+1$. \item A family $V$ of $(q-3\theta)/6$ semisimple regular elements with centralizer order $q-3\theta+1$. \item A family $M$ of $(q+3\theta)/2$ semisimple regular elements with centralizer order $q+3\theta+1$. \end{itemize}
In~\cite[4.5]{Br5} we give the class fusion between $\mathbf{G}^F$ and $\mathbf{G}^{F^2}$. The character table of $\mathbf{G}^F$ with respect to this parametrization is given in~\cite[7.2]{Br5}. We notice that there are some misprints in~\cite[7.2]{Br5} for the values of $\xi_9$ and $\xi_{10}$ on $Y_2$ and $Y_3$. Indeed we have $$\xi_9(Y_2)=\xi_{10}(Y_3)=\theta(1+i\sqrt{3})/2\quad\textrm{and}\quad \xi_9(Y_3)=\xi_{10}(Y_2)=\theta(1-i\sqrt{3})/2.$$
A system of representatives of the conjugacy classes of $\mathbf{G}^{F^2}\semi F$ is computed in~\cite[4.2]{Br5}. It is shown that the following elements are representatives of the outer classes of $\mathbf{G}^{F^2}\semi F$ (i.e. the classes of $\mathbf{G}^{F^2}\semi F$ lying in the coset $\mathbf{G}^{F^2}.F$):
\begin{itemize} \item The element $F$, which has centralizer order $2q^3(q^2-1)(q^2-q+1)$. \item The element $h_0.F$ with $h_0=h_{a}(-1)$,
which has centralizer order $2q(q-1)(q+1)$. \item The element $X.F$, with centralizer order $2q^3$. \item The elements $T.F$ and $T^{-1}.F$, whose centralizers have order $6q^2$. \item The elements $Y_1.F$, $Y_2.F$ and $Y_3.F$, whose centralizers have order $6q$. \item The elements $\eta h_0.F$ and $\eta^{-1} h_0.F$ with $\eta=x_{a+b}(1)x_{3a+b}(-1)$, whose centralizers have order $2q$. \item A family $R'$ of $(q-3)/2$ elements with centralizer order $q-1$. \item A family $S'$ of $(q-3)/6$ elements with centralizer order $q+1$. \item A family $V'$ of $(q-3\theta)/6$ elements with centralizer order $q-3\theta+1$. \item A family $M'$ of $(q+3\theta)/2$ elements with centralizer order $q+3\theta+1$. \end{itemize}
Finally, we recall that the character table of $\mathbf{G}^{F^2}\semi F$ is computed in~\cite[1.1]{Br5}.
\subsection{Some results}
We will need the following results.
\begin{lemma}\label{conjU}
We set $E=\{t^3-t\ |\ t\in\mathbb{F}_q\}$. Then every element $x\in\mathbb{F}_q$ can be written uniquely as $x=\eta_x\xi+y_x$ with $y_x\in E$ and $\eta_x\in\mathbb{F}_3$. Moreover let $u\in\mathbf{U}^{F^2}$ be such that $$u=x_a(1)x_b(1)x_{a+b}(t_{a+b})x_{3a+b}(t_{3a+b})x_{2a+b}(t_{2a+b}) x_{3a+2b}(t_{3a+2b}),$$ for $t_{a+b}$, $t_{3a+b}$, $t_{2a+b}$, $t_{3a+2b}\in\mathbb{F}_q$. For $u$ as above, we set $p(u)=t_{a+b}+t_{3a+b}$. Then $u\in A_{51}$ if $\eta_{p(u)}=0$, $u\in A_{52}$ if $\eta_{p(u)}=1$, and $u\in A_{53}$ if $\eta_{p(u)}=-1$. \end{lemma} \begin{proof}This is a consequence of the Chevalley relations. \end{proof}
We remark that the map $\mathbb{F}_q\rightarrow\mathbb{F}_3,\,x\mapsto\eta_x$ is an additive morphism. We now describe the elements $Y_1$, $Y_2$ and $Y_3$ more precisely.
\begin{lemma}\label{conjRep} For $u=\pm\xi$, we set $$\alpha(1)=x_a(1)x_b(1)x_{a+b}(1)x_{2a+b}(1)\ \textrm{and } \beta(u)=x_{a+b}(u^\theta)x_{3a+b}(u).$$ As previously, we denote by $\eta_1\in\mathbb{F}_3$ the unique element such that $1=\eta_1\xi+t^3-t$ for some $t\in\mathbb{F}_q$. \begin{itemize} \item If $n\equiv 1\mod 3$, then $\eta_1=0$. We choose $Y_1=\alpha(1)$, $Y_2=\alpha(1)\beta(-\xi)$ and $Y_3=\alpha(1)\beta(\xi)$. \item If $n\equiv 0\mod 3$, then $\eta_1=-1$. We choose $Y_1=\alpha(1)\beta(-\xi)$, $Y_2=\alpha(1)\beta(\xi)$ and $Y_3=\alpha(1)$. \item If $n\equiv -1\mod 3$, then $\eta_1=1$. We choose $Y_1=\alpha(1)\beta(\xi)$, $Y_2=\alpha(1)$ and $Y_3=\alpha(1)\beta(-\xi)$. \end{itemize} \end{lemma}
\begin{proof}We have $$p(\alpha(1))=1,\quad p(\alpha(1)\beta(\xi))=1+\xi+\xi^{\theta}\quad\textrm{and}\quad p(\alpha(1)\beta(-\xi))=1-\xi-\xi^{\theta}.$$ We now determine $\eta_1$. There is an element $t\in\mathbb{F}_q$ such that $$1=\eta_1\xi+t^3-t.$$ For $0\leq j\leq 2n$, we raise the last relation to the power $3^j$, and sum all of the relations so obtained. Thus we obtain $$2n+1=\eta_1(\xi+\xi^3+\cdots+\xi^{3^{2n}}).$$ However, $\xi=\alpha_0^3-\alpha_0$ with $\alpha_0^q=\alpha_0-1$. Hence $\xi+\cdots+\xi^{3^{2n}}=\alpha_0^q-\alpha_0=-1$. It follows that $$2n+1+\eta_1\equiv 0\mod 3.$$ Moreover, we remark that $\xi^\theta=(\xi+\cdots+\xi^{\theta/3})^{3}-(\xi+\cdots+\xi^{\theta/3}) +\xi$. We deduce that $\eta_{\xi^{\theta}}=1$. \begin{itemize} \item If $n\equiv 1\mod 3$, then $\eta_1=0$. Therefore $\eta_{1+\xi+\xi^{\theta}}=-1$ and $\eta_{1-\xi-\xi^{\theta}}=1$. \item If $n\equiv 0\mod 3$, then $\eta_1=-1$. Therefore $\eta_{1+\xi+\xi^{\theta}}=1$ and $\eta_{1-\xi-\xi^{\theta}}=0$. \item If $n\equiv -1\mod 3$, then $\eta_1=1$. Therefore $\eta_{1+\xi+\xi^{\theta}}=0$ and $\eta_{1-\xi-\xi^{\theta}}=-1$. \end{itemize} The result follows. \end{proof}
\begin{lemma}\label{systeme}Let $\alpha$ and $\beta$ be two elements of $\mathbb{F}_q$. We consider the following system of equations (S) $$\left\{\begin{array}{lcl} y^\theta-x&=&1\\ x^{3\theta}-y&=&1\\ t^{\theta}-z+1+x^{3\theta+1}&=&\alpha\\ z^{3\theta}-t-1-x^{3\theta+3}&=&\beta \end{array}\right.$$ If $x_0$ is a root in $\overline{\mathbb{F}}_3$ of $X^q-X+1$ and $t_0$ is a root in $\overline{\mathbb{F}}_3$ of $X^q-X-x_0^{3\theta}-\beta-\alpha^{3\theta}$, then the tuple $(x_0,x_0^{3\theta}-1,t_0,x_0^{3\theta+1}+t_0^\theta+1-\alpha)$ is a solution of (S) in $\overline{\mathbb{F}}_3$. \end{lemma}
\begin{proof}If $y_0=x_0^{3\theta}-1$, then $y_0^\theta=x_0^q-1=x_0+1$. If $z_0=x_0^{3\theta+1}+t_0^\theta+1-\alpha$, then $$\begin{array}{lcl} z_0^{3\theta}&=&x_0^{3q}x_0^{3\theta}+t_0^q+1-\alpha^{3\theta}\\ &=&(x_0^3-1)x_0^{3\theta}+t_0+x_0^{3\theta}+\beta+\alpha^{3\theta}+1-\alpha^{3\theta}\\ &=&x_0^{3\theta+3}+t_0+\beta+1 \end{array}$$ The result follows. \end{proof} \begin{remark}\label{remarque1} For the solution of the system $(S)$ given in Lemma~\ref{systeme}, we remark that we can choose $x_0$ independently of $\alpha$ and $\beta$. \end{remark} \section{Shintani descents}\label{part2}
\subsection{Definition}
A reference for this section is for example~\cite{DMShin}. We keep the notation as above. We denote by $L_F:\mathbf{G}\rightarrow\mathbf{G},\,x\mapsto x^{-1}F(x)$ the Lang map associated to $F$. Since $\mathbf{G}$ is a simple algebraic group, it follows that $\mathbf{G}$ is connected. Hence $L_F$ and $L_{F^2}$ are surjective. Moreover, $L_{F}(x)\in\mathbf{G}^{F^2}$ for some $x\in\mathbf{G}$ if and only if $L_{F^2}(x^{-1})\in\mathbf{G}^F$, and the correspondence $$L_{F^2}(x^{-1})\in\mathbf{G}^{F}\longleftrightarrow L_F(x).F\in\mathbf{G}^{F^2}\semi F\quad\textrm{for }x\in\mathbf{G},$$ induces a bijection between the classes of $\mathbf{G}^F$ and the outer classes of the group $\mathbf{G}^{F^2}\semi F$. We denote this correspondence by $N_{F/F^2}$. Furthermore, we have~\cite[\S I.7]{DMShin} \begin{equation}\label{eqcent}
|\operatorname{C}_{\mathbf{G}^{F^2}\semi F}\left( L_{F}(x).F
\right)|=2|\operatorname{C}_{\mathbf{G}^F}(L_{F^2}(x^{-1}))|\quad \textrm{for } x\in\mathbf{G}. \end{equation}
For every class function $\psi$ on $\mathbf{G}^{F^2}\semi F$, we define the Shintani descent $\operatorname{Sh}_{F^2/F}(\psi)$ of $\psi$ by $\operatorname{Sh}_{F^2/F}(\psi)=\psi\circ N_{F/F^2}$.
\subsection{Shintani correspondence from $\mathbf{G}^{F}$ to $\mathbf{G}^{F^2}\semi F$ in type $G_2$}
\begin{lemma}\label{shinT} We keep the notation as above. We set $T=\beta(1)$. We have $$N_{F/F^2}([T]_{\mathbf{G}^F})=[T.F]_{\mathbf{G}^{F^2}\semi F}\quad \textrm{and}\ N_{F/F^2}([T^{-1}]_{\mathbf{G}^F})=[T^{-1}.F]_{\mathbf{G}^{F^2}\semi F}.$$ Here $[x]_G$ denotes the conjugacy class of $x$ in $G$. \end{lemma}
\begin{proof} We set $x=x_{a+b}(\alpha_0)x_{3a+b}(\alpha_0^{3\theta}-1)$. Then we have $$\begin{array}{lcl} L_F(x)&=&x_{a+b}( (\alpha_0^{3\theta}-1)^{\theta}-\alpha_0)x_{3a+b}(\alpha_0^{3\theta} -(\alpha_0^{3\theta}-1))\\ &=&x_{a+b}( \alpha_0-2-\alpha_0)x_{3a+b}(1)\\ &=&\beta(1) \end{array} $$ and $L_{F^2}(x^{-1})=x_{a+b}(\alpha_0-\alpha_0^q) x_{3a+b}(\alpha_0-\alpha_0^q)=\beta(1)$. \end{proof}
\begin{lemma}\label{shinY} We keep the notation as above. Then we have
$$\begin{array}{lcl} N_{F/F^2}([Y_1]_{\mathbf{G}^F})&=&[Y_3.F]_{\mathbf{G}^{F^2}\semi F}\\ N_{F/F^2}([Y_2]_{\mathbf{G}^F})&=&[Y_1.F]_{\mathbf{G}^{F^2}\semi F}\\ N_{F/F^2}([Y_3]_{\mathbf{G}^F})&=&[Y_2.F]_{\mathbf{G}^{F^2}\semi F}. \end{array}$$
\end{lemma}
\begin{proof}Let $u=x_a(1)x_b(1)x_{a+b}(\alpha)x_{3a+b}(\beta)y\in\mathbf{U}^{F^2}$ with $y\in \operatorname{Z}(\mathbf{U}^{F^2})$. Since $\mathbf{U}$ is connected, there is $x\in\mathbf{U}$ such that $L_F(x)=u$. Then there are $z\in\operatorname{Z}(\mathbf{U}^{F^2})$ and $t_a$, $t_b$, $t_{a+b}$ and $t_{3a+b}$ in $\overline{\mathbb{F}}_3$ such that $$x=x_a(t_a)x_b(t_b)x_{a+b}(t_{a+b})x_{3a+b}(t_{3a+b})z.$$ Using the Chevalley relations, we have $$\begin{array}{lcl}L_F(x)&=&x_a(t_b^\theta-t_a)x_b(t_a^{3\theta}-t_b) x_{a+b}(t_at_b-t_{a+b}+ t_b^\theta t_a^{3\theta}+t_{3a+b}^{\theta}-t_b^{\theta+1})\\ && x_{3a+b}(t_{a+b}^{3\theta}-t_b^{3\theta}t_a^{3\theta} -t_a^3t_b-t_{3a+b}+t_b^{3\theta+1})z', \end{array} $$ for some $z'\in\operatorname{Z}(\mathbf{U}^{F^2})$. Now, using the uniqueness of the Chevalley decomposition, we have the system of equations $(S')$ $$\left\{ \begin{array}{lcl} t_b^\theta-t_a&=&1\\ t_a^{3\theta}-t_b&=&1\\ t_at_b-t_{a+b}+ t_b^\theta t_a^{3\theta}+t_{3a+b}^{\theta}-t_b^{\theta+1}&=&\alpha\\ t_{a+b}^{3\theta}-t_b^{3\theta}t_a^{3\theta} -t_a^3t_b-t_{3a+b}+t_b^{3\theta+1}&=&\beta \end{array} \right.$$ We deduce from the first two equations that $t_a^q-t_a=-1$ and $t_b^q-t_b=-1$.
Moreover, there is $z''\in\operatorname{Z}(\mathbf{U}^{F^2})$ such that $$ \begin{array}{lcl}L_{F^{2}}(x^{-1})&=& x_a(t_a-t_a^q)x_b(t_b-t_b^q) x_{a+b}(t_a^q(t_b^q-t_b) + t_{a+b} - t_{a+b}^{q})\\ && x_{3a+b}(t_{3a+b}-t_{3a+b}^q-t_a^{3q}(t_b-t_b^q))z''\\ &=&x_a(1)x_b(1)x_{a+b}(t_{a+b}-t_{a+b}^q -t_a^q)x_{3a+b}(t_{3a+b}-t_{3a+b}^q+t_a^{3q})z''. \end{array} $$ Hence using Lemma~\ref{conjRep} in order to find the $\mathbf{G}^F$-class of $L_{F^{2}}(x^{-1})\in\mathbf{G}^F$, it is sufficient to find the $\mathbf{G}^{F^2}$-class of $L_{F^{2}}(x^{-1})$. However using Lemma~\ref{conjU}, we have to evaluate $\eta_{p(L_{F^2}(x^{-1}))}$ where $p$ is defined as in Lemma~\ref{conjU}. We have $$\begin{array}{lcl} p(L_{F^2}(x^{-1}))&=&t_{3a+b}-t_{3a+b}^q+t_a^{3q}+t_{a+b}-t_{a+b}^q -t_a^q\\ &=&t_{3a+b}-t_{3a+b}^q+t_{a+b}-t_{a+b}^q+t_a^3-t_a. \end{array}$$ Using the equations of the system $(S')$, we deduce that $$\left\{ \begin{array}{lcl} t_a^{3\theta+1}+t_{3a+b}^{\theta}-t_{a+b}+1&=&\alpha\\ -t_a^{3\theta+3}+t_{a+b}^{3\theta}-t_{3a+b}-1&=&\beta \end{array}\right. $$ It follows that $$\left\{ \begin{array}{lcl} t_a^{(3\theta+1)3\theta}-t_a^{3\theta+3}+t_{3a+b}^q-t_{3a+b}&=&\alpha^{3\theta}+\beta\\ t_a^{3\theta+1}-t_a^{(3\theta+3)\theta}+t_{a+b}^q-t_{a+b}&=&\alpha+\beta^{\theta} \end{array}\right. $$ Adding these two equations, we obtain $$t_{a+b}^q-t_{a+b}+t_{3a+b}^q-t_{3a+b}=\alpha+\alpha^{3\theta}+ \beta+\beta^{\theta}.$$ Hence we have $$p(L_{F^2}(x^{-1}))=-(\alpha+\alpha^{3\theta}+ \beta+\beta^{\theta})+t_a^3-t_a.$$ Moreover, we remark that the system $(S')$ is equivalent to the system $(S)$. We use for $(t_a,t_b,t_{a+b},t_{3a+b})$ the solution described in Lemma~\ref{systeme}, choosing $t_a=\alpha_0$, which is possible as we have seen in Remark~\ref{remarque1}. Thus $$p(L_{F^2}(x^{-1}))=-(\alpha+\alpha^{3\theta}+ \beta+\beta^{\theta})+\xi.$$ We suppose now that $n\equiv 1\mod 3$. Using Lemma~\ref{conjRep}, we have $$ \begin{array}{lcl} Y_1&=&x_a(1)x_b(1)x_{a+b}(1)x_{2a+b}(1)\\ Y_2&=&x_a(1)x_b(1)x_{a+b}(1-\xi^\theta)x_{3a+b}(-\xi)x_{2a+b}(1)\\ Y_3&=&x_a(1)x_b(1)x_{a+b}(1+\xi^\theta)x_{3a+b}(\xi)x_{2a+b}(1) \end{array}$$ For $(\alpha,\beta)=(1,0)$, $(\alpha,\beta)=(1-\xi^{\theta},-\xi)$ and $(\alpha,\beta)=(1+\xi^{\theta},\xi)$ respectively, we deduce that $\eta_{p(L_{F^2}(x^{-1}))}$ is equal to $1$, $-1$, and~$0$ respectively, because $\eta_1=0$. Thus $$\begin{array}{lcl} N_{F/F^2}([Y_1]_{\mathbf{G}^F})&=&[Y_3.F]_{\mathbf{G}^{F^2}\semi F}\\ N_{F/F^2}([Y_2]_{\mathbf{G}^F})&=&[Y_1.F]_{\mathbf{G}^{F^2}\semi F}\\ N_{F/F^2}([Y_3]_{\mathbf{G}^F})&=&[Y_2.F]_{\mathbf{G}^{F^2}\semi F} \end{array}$$ We proceed similarly when $n\equiv 0\mod 3$ and $n\equiv -1\mod 3$. The result follows.\end{proof}
\begin{proposition}\label{corrshin} We keep the same notation as above. Then we have $$\begin{array}{lcl} N_{F/F^2}([1]_{\mathbf{G}^F})&=&[F]_{\mathbf{G}^{F^2}\semi F}\\ N_{F/F^2}([X]_{\mathbf{G}^F})&=&[X.F]_{\mathbf{G}^{F^2}\semi F}\\ N_{F/F^2}([J]_{\mathbf{G}^F})&=&[h_0.F]_{\mathbf{G}^{F^2}\semi F}\\ N_{F/F^2}([T]_{\mathbf{G}^F})&=&[T.F]_{\mathbf{G}^{F^2}\semi F}\\ N_{F/F^2}([T^{-1}]_{\mathbf{G}^F})&=&[T^{-1}.F]_{\mathbf{G}^{F^2}\semi F}\\ N_{F/F^2}([Y_1]_{\mathbf{G}^F})&=&[Y_3.F]_{\mathbf{G}^{F^2}\semi F}\\ N_{F/F^2}([Y_2]_{\mathbf{G}^F})&=&[Y_1.F]_{\mathbf{G}^{F^2}\semi F}\\ N_{F/F^2}([Y_3]_{\mathbf{G}^F})&=&[Y_2.F]_{\mathbf{G}^{F^2}\semi F}\\ \{N_{F/F^2}([JT]_{\mathbf{G}^F}),N_{F/F^2}([JT^{-1}]_{\mathbf{G}^F})\}&=& \{[\eta h_0.F]_{\mathbf{G}^{F^2}\semi F},[\eta^{-1}h_0.F]_{\mathbf{G}^{F^2}\semi F}\}\\
\{N_{F/F^2}([t]_{\mathbf{G}^F})\ |\ t\in R\}&=&\{[t]_{\mathbf{G}^{F^2}\semi F}\ |\ t\in R'\}\\
\{N_{F/F^2}([t]_{\mathbf{G}^F})\ |\ t\in S\}&=&\{[t]_{\mathbf{G}^{F^2}\semi F}\ |\ t\in S'\}\\
\{N_{F/F^2}([t]_{\mathbf{G}^F})\ |\ t\in V\}&=&\{[t]_{\mathbf{G}^{F^2}\semi F}\ |\ t\in V'\}\\
\{N_{F/F^2}([t]_{\mathbf{G}^F})\ |\ t\in M\}&=&\{[t]_{\mathbf{G}^{F^2}\semi F}\ |\ t\in M'\}\\ \end{array}$$
\end{proposition} \begin{proof}To prove this result, we essentially use Relation~(\ref{eqcent}) comparing the orders of centralizers of the representatives of classes of $\mathbf{G}^F$, and of the representatives of the outer classes of $\mathbf{G}^{F^2}\semi F$ recalled in \S\ref{part1}. Moreover, for the classes $[T]_{\mathbf{G}^F}$, $[T^{-1}]_{\mathbf{G}^F}$, and $[Y_1]_{\mathbf{G}^F}$, $[Y_2]_{\mathbf{G}^F}$, $[Y_3]_{\mathbf{G}^F}$, we use Lemma~\ref{shinT} and Lemma~\ref{shinY} respectively. \end{proof}
\subsection{Shintani descents of the unipotent characters in type $G_2$}
The $F$-stable unipotent characters of $\mathbf{G}^{F^2}$ are denoted by $1_{\mathbf{G}^{F^2}}$, $\theta_1$, $\theta_2$, $\theta_5$, $\theta_{10}$, $\theta_{11}$, $\theta_{12}[1]$ and $\theta_{12}[-1]$ in~\cite{Enomoto}. Their degrees are $1$, $\frac{1}{6}q(q+1)^2(q^2+q+1)$, $\frac{1}{2}q(q+1)(q^3+1)$, $q^6$, $\frac{1}{6}q(q-1)^2(q^2-q+1)$, $\frac{1}{2}q(q-1)(q^3-1)$, $\frac{1}{3}q(q^2-1)^2$ and $\frac{1}{3}q(q^2-1)^2$ respectively. These characters extend to $\mathbf{G}^{F^2}\semi F$. Let $\chi$ be such a character. If $\chi$ has an extension such that its value on $F$ is not zero, then we denote by $\widetilde{\chi}$ the extension of $\chi$ such that $\widetilde{\chi}(F)>0$. In \cite[5.6]{Br5} we have seen that this is always the case except for $\theta_2$ and $\theta_{10}$. Then we choose for $\widetilde{\theta}_2$ and $\widetilde{\theta}_{10}$ the extensions such that $$\widetilde{\theta}_2(\eta h_0.F)=\sqrt{q}\quad\textrm{and}\quad\widetilde{\theta}_{10}(T.F)= \theta^2\sqrt{3}i.$$ Moreover, there is a misprint in~\cite[5.6]{Br5}. Indeed we have $\widetilde{\theta}_{10}(Y_2)=-\theta\sqrt{3}i$ and $\widetilde{\theta}_{10}(Y_3)=\theta\sqrt{3}i$.
\begin{proposition}\label{desShin} We keep the same notation as above. Then we have $$\begin{array}{lcl} \operatorname{Sh}_{F^2/F}(1_{\mathbf{G}^{F^2}\semi F})&=&1_{\mathbf{G}^F}\\ \operatorname{Sh}_{F^2/F}(\widetilde{\theta}_1)&=&\dfrac{\sqrt{3}}{6}\left(i\xi_5 +i\xi_6-i\xi_7-i\xi_8+(\sqrt{3}-i)\xi_9+(\sqrt{3}+i)\xi_{10}\right)\\ \operatorname{Sh}_{F^2/F}(\widetilde{\theta}_2)&=&\pm\dfrac{1}{2}(-i\xi_5+i\xi_6+i\xi_7-i\xi_8)\\ \operatorname{Sh}_{F^2/F}(\widetilde{\theta}_{5})&=&\xi_3\\ \operatorname{Sh}_{F^2/F}(\widetilde{\theta}_{10})&=&\dfrac{\sqrt{3}}{6}\left(i\xi_5 +i\xi_6+i\xi_7+i\xi_8+(\sqrt{3}-i)\xi_9-(i+\sqrt{3})\xi_{10}\right)\\ \operatorname{Sh}_{F^2/F}(\widetilde{\theta}_{11})&=&\dfrac{1}{2}(\xi_5-\xi_6+\xi_7 -\xi_8)\\ \operatorname{Sh}_{F^2/F}(\widetilde{\theta}_{12}[1])&=&\dfrac{\sqrt{3}}{6}\left(
(\sqrt{3}-i)\xi_5+ (\sqrt{3}-i)\xi_6+(\sqrt{3}+i)\xi_9\right)\\ \operatorname{Sh}_{F^2/F}(\widetilde{\theta}_{12}[-1])&=&\dfrac{\sqrt{3}}{6}\left(
(\sqrt{3}+i)\xi_7+ (\sqrt{3}+i)\xi_8+(\sqrt{3}-i)\xi_{10}\right)\\ \end{array}$$ \end{proposition} \begin{proof} We use Proposition~\ref{corrshin} and \cite[5.6]{Br5} to obtain the values of the Shintani descents as class functions on $\mathbf{G}^F$. Using the character table of $\mathbf{G}^F$ in~\cite{Ward}, we decompose them in the basis of irreducible characters of $\mathbf{G}^F$. The result follows. \end{proof}
\section{Eigenvalues of the Frobenius for the Ree groups of type $G_2$} \label{part3}
As an application of Proposition~\ref{desShin} we compute in this section the roots of unity associated to the unipotent characters of the Ree groups of type $G_2$, as explained in Section~\ref{intro}.
\begin{theorem}We keep the same notation as above. The roots of unity associated to the unipotent characters of $^2G_2(q)$ are
$$\renewcommand{\arraystretch}{1.4}\begin{array}{l|l|l|l|l|l|l|c|c} \chi&\xi_1&\xi_3&\xi_5&\xi_6&\xi_7&\xi_8&\xi_9&\xi_{10}\\ \hline \omega_{\chi}&1&1&-i&-i&i&i&\frac{-\sqrt{3}+i}{2}&\frac{-\sqrt{3}-i}{2} \end{array}$$
\end{theorem} \begin{proof}We first recall a result of Digne-Michel.
Let $\rho$ be an irreducible character of $W$. Using the Harish-Chandra theory, we can associate to $\rho$ an irreducible character $\chi_{\rho}$ of the principal series of $\mathbf{G}^{F^2}$, that is, the set of irreducible constituents of $$\Phi=\operatorname{Ind}_{\mathbf{B}^{F^2}}^{\mathbf{G}^{F^2}}(1_{\mathbf{B}^{F^2}}).$$ The group $\cyc F$ acts on $\operatorname{Irr}(W)$. If $\rho$ is $F$-stable, then $\rho$ extends to $W\semi F$, and has exactly $2$ extensions denoted by $\widetilde{\rho}$ and $\varepsilon\widetilde{\rho}$, where $\varepsilon$ is the non-trivial character of $W\semi F$ which has $W$ as its kernel.
Moreover, Malle shows in~\cite[1.5]{MalleHarish} that the irreducible characters of $W\semi F$ are in 1-1 correspondence with the constituents of $\widetilde{\Phi}=\operatorname{Ind}_{\mathbf{G}^{F^2}}^{\mathbf{G}^{F^2}\semi F}(\Phi)$. In particular, if $\rho$ is $F$-stable, then so is $\chi_{\rho}$. Hence, the characters $\chi_{\widetilde{\rho}}$ and $\chi_{\varepsilon\widetilde{\rho}}$ of $\widetilde{\Phi}$ corresponding to $\widetilde{\rho}$ and $\varepsilon\widetilde{\rho}$ respectively are the two extensions of $\chi_{\rho}$ to $\mathbf{G}^{F^2}\semi F$. Moreover, we recall that the almost character of $\mathbf{G}^F$ corresponding to $\widetilde{\rho}$ is defined by $$\mathcal{R}_{\widetilde{\rho}}=\sum_{w\in W}\widetilde{\rho}(w.F)R_w.$$ The main theorem~\cite[2.3]{DMShin} asserts that \begin{equation} \operatorname{Sh}_{F^2/F}(\chi_{\widetilde\rho})=\sum_{V\in \mathcal{U}(\mathbf{G}^F)}\cyc{\mathcal{R}_{\widetilde{\rho}},V}_{\mathbf{G}^F} \,\omega_V\, V. \label{DMtheo} \end{equation} Furthermore, the almost characters of the Ree groups are computed by Geck and Malle in~\cite[2.2]{GM}. More precisely, the $F$-stable characters of $W$ are $1_W$, $\operatorname{sgn}$, and the two characters of degree $2$ of $W$, denoted by $2_1$ and $2_2$. The character $2_1$ is chosen such that it takes the value $-2$ on the Coxeter element of $W$. Then for the extensions of these characters chosen in~\cite{GM}, we have $$\begin{array}{lcl} \mathcal{R}_{1_{W\semi F}}&=&1_{\mathbf{G}^F}\\ \mathcal{R}_{\widetilde{\operatorname{sgn}}}&=&\xi_3\\ \mathcal{R}_{\widetilde{2}_1}&=& \dfrac{\sqrt{3}}{6}(\xi_5+\xi_6+\xi_7+\xi_8+2\xi_9+2\xi_{10})\\ \mathcal{R}_{\widetilde{2}_2}&=&\dfrac{1}{2}(\xi_5-\xi_6+\xi_7-\xi_8) \end{array}$$ Moreover, using~\cite[p.112,p.150]{Carter2}, we deduce that $$\operatorname{Ind}_{\mathbf{B}^{F^2}}^{\mathbf{G}^{F^2}}(1_{\mathbf{B}^{F^2}})= 1_{\mathbf{G}^{F^2}}+\theta_5+\theta_3+\theta_4+2\theta_1+2\theta_2,$$ and $1_{\mathbf{G}^{F^2}}=\chi_{1_W}$, $\theta_5=\chi_{\operatorname{sgn}}$, $\theta_1=\chi_{2_1}$ and $\theta_2=\chi_{2_2}$. Hence we have $$\chi_{\widetilde{2}_1}\in\{\widetilde{\theta}_1,\varepsilon\widetilde{\theta}_1\},$$ where $\varepsilon$ now denotes the non-trivial character of $\mathbf{G}^{F^2}\semi F$ containing $\mathbf{G}^{F^2}$ in its kernel. Therefore, using Relation~(\ref{DMtheo}) we deduce that $$\pm\operatorname{Sh}_{F^2/F}(\widetilde{\theta}_1)=\frac{\sqrt{3}}{6}\left( \omega_{\xi_5}\xi_5+\omega_{\xi_6}\xi_6+\omega_{\xi_7}\xi_7+ \omega_{\xi_8}\xi_8+ 2\omega_{\xi_9}\xi_9+2\omega_{\xi_{10}}\xi_{10}\right).$$ Hence we deduce using Proposition~\ref{desShin} that $$\omega_{\xi_9}=\pm\sqrt{3}\cyc{\operatorname{Sh}_{F^2/F}(\widetilde{\theta}_1),\xi_9}_{\mathbf{G}^F} =\pm\frac{\sqrt{3}-i}{2}.$$ However, using the result of Lusztig~\cite{LusCox}, we know that $\omega_{\xi_9}=(\pm i-\sqrt{3})/2$. We then deduce that $\chi_{\widetilde{2}_1}=\varepsilon\widetilde{\theta}_1$ and that $$\omega_{\xi_9}=\frac{-\sqrt{3}+i}{2}.$$ We immediately obtain the other roots using Proposition~\ref{desShin}. \end{proof}
\begin{remark}To determine the roots of unity of the unipotent characters of the Ree groups of type $G_2$, the preceding proof shows that we only need to know the Shintani descent of $\widetilde{\theta}_1$.
\end{remark}
\section{Conjecture of Digne-Michel for the Ree groups of type $G_2$} \label{part4}
In~\cite{DMShin} Digne and Michel state a conjecture on the decomposition in irreducible constituents of the Shintani descents. We recall this conjecture in the special case that $ F\in\operatorname{Aut}(W)$ has order $2$. If $\chi$ is an $F$-stable irreducible character of $\mathbf{G}^{F^2}$, and if $\widetilde{\chi}$ denotes an extension of $\chi$ to $\mathbf{G}^{F^2}\semi F$, then it is conjectured that: \begin{itemize} \item The irreducible constituents of $\operatorname{Sh}_{F^2/F}(\widetilde{\chi})$ are unipotent. \item Up to a normalization, the coefficients of $\operatorname{Sh}_{F^2/F}$ in the basis~$\operatorname{Irr}(\mathbf{G}^F)$ only depend on the coefficients of the Fourier matrix, and on the roots of unity attached to the unipotent characters of $\mathbf{G}^F$ as above. More precisely, there is a root of unity $u$ such that $$\pm u\operatorname{Sh}_{F^2/F}(\widetilde{\chi})=\sum_{V\in\mathcal{U}(\mathbf{G}^F)}f_V\omega_V\,V,$$ where $(f_V,\ V\in\mathcal{U}(\mathbf{G}^F))$ is, up to a sign, a row of the Fourier matrix. \end{itemize}
\begin{theorem}\label{conDM} The conjecture of Digne-Michel on the decomposition of the Shintani descents of unipotent characters holds in type $G_2$ for the Frobenius map $F$ that defines the Ree group $^2G_2(q)$. \end{theorem}
\begin{proof}We set $u_{12[1]}=\frac{1}{2}(-1+\sqrt{3}i)$ and $u_{12[-1]}=\frac{1}{2}(1+\sqrt{3}i)$. We remark that $$\begin{array}{ll} u_{12[1]}(\sqrt{3}-i)=2i,& u_{12[1]}(\sqrt{3}+i)=-\sqrt{3}+i,\\ u_{12[-1]}(\sqrt{3}+i)=2i,& u_{12[-1]}(\sqrt{3}-i)=\sqrt{3}+i \end{array}$$ Therefore, using Proposition~\ref{desShin} we have $$ \begin{array}{lcl} u_{12[1]}\operatorname{Sh}_{F^2/F}(\widetilde{\theta}_{12[1]})&=& \displaystyle{\frac{\sqrt{3}}{6} \left(2i\xi_5+2i\xi_6+(i-\sqrt{3})\xi_9\right)}\\ &=& \displaystyle{\frac{\sqrt{3}}{6} \left(-2(-i)\xi_5-2(-i)\xi_6+2\frac{1}{2}(i-\sqrt{3})\xi_9\right).} \end{array}$$ Similarly, we obtain $$u_{12[-1]}\operatorname{Sh}_{F^2/F}(\widetilde{\theta}_{12[-1]})=\frac{\sqrt{3}}{6} \left(2i\xi_7+2i\xi_8-2\frac{1}{2}(-i-\sqrt{3})\xi_{10}\right).$$ Moreover, we have $$\begin{array}{lcl} \operatorname{Sh}_{F^2/F}(\widetilde{\theta}_2)&=&\pm\dfrac{\sqrt{3}}{6}( \sqrt{3}(-i)\xi_5-\sqrt{3}(-i)\xi_6+ \sqrt{3}i\xi_7-\sqrt{3}i\xi_8),\\ \operatorname{Sh}_{F^2/F}(\widetilde{\theta}_{10})&=&\dfrac{\sqrt{3}}{6}\left(-(-i)\xi_5 -(-i)\xi_6+i\xi_7+i\xi_8-2\dfrac{1}{2}(-\sqrt{3}+i)\xi_9\right.\\ &&\left.+2\dfrac{1}{2}(-i-\sqrt{3})\xi_{10}\right),\\
i\operatorname{Sh}_{F^2/F}(\widetilde{\theta}_{11})&=&\dfrac{\sqrt{3}}{6} \left(-\sqrt{3}(-i)\xi_5+\sqrt{3}(-i)\xi_6 +\sqrt{3}i\xi_7-\sqrt{3}i\xi_8\right). \end{array}$$ We set $u_{1}=u_2=u_{10}=1$ and $u_{11}=i$. If we compare the coefficients in the preceding computations with the coefficients of the Fourier matrix of Geck-Malle recalled in~\S\ref{intro}, we obtain up to a sign a row of the Fourier matrix. Therefore the conjecture holds. \end{proof}
\begin{remark}\label{rem} We now discuss the roots $u_i$ for $i\in \{1,2,10,11,12[1],12[-1]\}$ appearing in the proof of Theorem~\ref{conDM}.
For $g\in\mathbf{G}^F$, Lang's theorem says that there is $x\in\mathbf{G}$ such that $g=x^{-1}F(x)$. Therefore we have $xgx^{-1}\in\mathbf{G}^F$. Moreover, if $x'\in\mathbf{G}$ is such that $x'^{-1}F(x')=g$, then $x'$ lies in the coset $\mathbf{G}^F.x$, hence the $\mathbf{G}^F$-class of $xgx^{-1}$ does not depend on the choice of $x$. For a class function $f$ on $\mathbf{G}^F$, we can then define the Asai-twisting operator $t^*$ by $$t^*(f)(g)=f(xgx^{-1})\quad\textrm{for }g\in\mathbf{G}^F,\ \textrm{and }x\in\mathbf{G}\ \textrm{such that }x^{-1}F(x)=g.$$
For a pair $(g,\psi)$ with $g\in\mathbf{G}^F$ and $\psi$ an $F$-stable irreducible character of the component group $A(g)=\operatorname{C}_{\mathbf{G}}(g)/\operatorname{C}_{\mathbf{G}}(g)^{\circ}$, we can associate a class function~$\Psi_{(g,\psi)}$ which depends on the choice of an extension of $\psi$ to $A(g)\semi F$. Then $\Psi_{(g,\psi)}$ is an eigenvector of $t^*$, and the corresponding eigenvalue $\lambda_{(g,\psi)}$ is equal to $\psi(\overline{g})/\psi(1)$, where $\overline{g}$ denotes the image of $g$ in $A(g)$.
Let $(c_i,\ i\in I)$ be a row of a Fourier matrix. Then the construction of the Fourier matrices given by Geck-Malle shows that there is a pair $(g,\psi)$ as above such that $$\Psi_{(g,\psi)}=\sum_{i\in I}c_i\xi_i.$$
To an $F$-stable unipotent character $\chi$ of $\mathbf{G}^{F^2}$, we associate the pair $(g,\psi)$ attached as above to the row of the Fourier matrix corresponding to $u_{\chi}\operatorname{Sh}_{F^2/F}(\widetilde{\chi})$. We have
$$\begin{array}{l|c} \chi&\lambda_{(g,\psi)}\\ \hline \theta_{1}&1\\ \theta_{2}&1\\ \theta_{10}&1\\ \theta_{11}&-1\\ \theta_{12[1]}&\frac{1}{2}(-1-\sqrt{3}i)\\ \theta_{12[-1]}&\frac{1}{2}(-1+\sqrt{3}i) \end{array}$$
We observe that the element $u_{\chi}$ chosen in the proof of Theorem~\ref{conDM} is a root of the polynomial $$X^2-\lambda_{(g,\psi)}.$$ Moreover, we remark that we can choose for $u_{\chi}$ an arbitrary root of this polynomial, because a row of a Fourier matrix is defined up to a sign.
Finally, we notice that in the situation of type $B_2$ with $F$ the Frobenius map defining the Suzuki groups, this observation also holds~\cite[4.2]{Br3}. \end{remark} \begin{remark} Let $\mathbf{G}$ be a simple algebraic group of type $F_4$ over $\overline{\mathbb{F}}_2$, and let $F$ be the Frobenius map on $\mathbf{G}$ that defines the Ree group $^2F_4(2^{2n+1})$. In this situation, the character table of $\mathbf{G}^{F^2}\semi F$ is currently unknown. Suppose that \begin{itemize} \item The conjecture of Digne-Michel holds. \item We know how to associate to every unipotent character of $^2F_4(q)$ its root of unity as above. \item The observation of Remark~\ref{rem} holds. \end{itemize} Then, using~\cite[5.4(c)]{GM} and \cite{MaF4}, we can describe the values of the unipotent characters of $\mathbf{G}^{F^2}\semi F$ on the coset $\mathbf{G}^{F^2}.F$. \end{remark}
\noindent \textit{Acknowledgments.}\quad I wish to thank Gunter Malle for a helpful and motivating discussion on this work.
\end{document} | arXiv |
If $6a^2 + 5a + 4 = 3,$ then what is the smallest possible value of $2a + 1$?
We proceed as follows: \begin{align*}
6a^2 + 5a + 4 &= 3\\
6a^2 + 5a + 1 &= 0\\
(2a + 1)(3a + 1) &= 0.
\end{align*}This gives us $a = -\frac{1}{2}$ or $a = -\frac{1}{3}.$ Of these, $a = -\frac{1}{2}$ gives the smaller value of $2a + 1 = \boxed{0}.$ | Math Dataset |
Home » Econometrics » What Are The Econometric Models?
What Are The Econometric Models?
What Are The Econometric Models? How do you view and understand your Econometrics? Take time out once you reach the goal to understand Econometrics' model. Abstract For my last three years in business, I practiced the following: the process of simplifying the data structure of a financial database using the popular methods that would be of use to save the data. Model simplification. Simple simplification of a database for managing and managing the data structures. Model-in-consult. In-crediting models in order to communicate with a manager. Model-at-discuss. In-crediting models in order to communicate with a manager. Model-in-desk. In-crediting models in order to communicate with a manager. I work on a number of documents and I want to be able to make detailed changes or just "wish" for a certain document. At the moment I need 5k documents and I've spent 5 years, then, to know what I need to change, I'd use econometrics experts, or even just an average of those experts, will I need to search for the data base I want. My request was to search on the online database web site and if the data isn't there, I'll just click the "Find Database" button (when it is on a business page). All of this being said, let's just say that I was really naive about the standard model that we're using or at least, my naive approach was that my client will choose my solution, it will select my data for analysis or even for discussion and I can look at the data. Of course I want to look at the data which is available from other people and that's where I decided to choose my data. As for the new data in "basic data" data, I get the option to "go with the data when I've covered my story." But it seems like most people would like my data which happens to run the same standard as what the average is and that is my next picture (note this isn't my final picture). Getting to the data All of the above seems like this is the best practice. It's very simple and it can be tedious and that's the point to keep in mind. Model Simple This would instead be an interaction between the manager and the main data collector, this is where I tend to find the data and if it can be edited, there's nothing to it.
Next I could do this using just the names of the data, or name the model to read or use for reference. Summary Of An Interface Management Is Not A Fun Any new and more complex data model comes with a lot of different features and abilities and I'd like to talk about the commonalities and strengths of each of those differences. I want to see whether one user can find only a subset of of such details and I do want to take a look at that view and see what algorithms and their similarity or not. Please first have a look at the full post below: I hate to say what are the most fundamental aspects of the Data ManagementWhat Are The Econometric Models? Aspects of the New Agenda? What are the New Assumptions and What Are Our Expectations? Econometrics; Analytical and Applied Mathematics; MPA, NMR and others 2) Aspects of The New Agenda 2 Assumptions. If we are in need of a framework for more in-depth analysis and the study of complex and dynamical systems of interest then clearly no set of theories of the present status are appropriate. The purposes of this supplement are to perform general statistical analyses, to provide descriptive and critical analyses of the most prominent examples, e.g., from which the basic assumptions, their implications and key questions are derived. 2) Aspects of the New Agenda How can we do better? We follow the standard agenda with some key definitions of the theoretical framework that we refer to below. The framework we refer to is the general framework of quantum physics. The other key definitions are very briefly given below. Section 3 discusses briefly the full set of descriptive and important examples. This section of the appendix presents some key ideas and a presentation of the techniques of the future. See Section 4.3 in this supplement for details. 3. The formalism of mathematical models 2) -Econometric development 4) -Electron dynamics and its effects 5) -Associations of quantities between parties 2) -Estimation of the relative error of different quantities in measurements 2) -Formulation of a model based on classical random matrices, a wide range of micro-statistical possibilities, and a model-independent approach atlas of the theory of interest 2) -Derivation of a detailed theory of the basic assumptions and the new conceptual frameworks 4) -Problem for a model – e.g., estimation of the relative error of elements of the state space of a normalised state 4) -Further overview as regards: the key concepts and notation 4) -The functional equation process 4.1 We comment as regards the methods and models used throughout the issue so far.
According to the current situation most systematic methods are of course based on classical statistical tools such as Monte Carlo or Monte Carlo programmes. The only formal reference is [@BS02 p. 1105]. In the recent section of the appendix we will briefly review these two approaches in more detail and in order to be clear we will apply the strategies of the current section. The mathematical model for the Econometric Model using the Gompertz Framework ========================================================================== The Gompertz framework originated within Xuch sight its first attempt at a concrete mathematical model. According to this proposal the classical (monospin, quark, and quarkonium) model is the field in which modern mathematical physics started out quite a decade ago [@DS01]. From the new conception of the model developed in Sec. 3.2 it was initially sought to impose some abstract conditions on the local variables and their values, and finally to introduce a suitable computational framework [@CDS07; @Csusi02; @Csusi03]. The model of interest in the present paper, being a dynamic and long-standing mathematical object, is an example of a generalized model for one of the major models of social sciences used in the field. In this paper we shall concentrate on the equations of motion governing the dynamics and in particular the differential equations for the specific cases of the three physical systems according to the one of interest (see below). The equations of motion for the three physical systems and their derivatives were formulated in the framework of the Gompertz model [@BS02]. For the corresponding formalism of differential equations one is most interested in them: When defining for any two real functions $f^*,g^*,x^*$ one has to sum up in order to obtain the functional equation [@CDS07; @Csusi02] [**A**$_0^{1-\alpha}$;**G**^*_0^{1-\alpha}$;**A**$_0^{1-\alpha}$;**x**$^{*}$]{}[@Csusi03] $x^*=g^*+2f^*$ with $g^*, g\in {\mathfrak{q}^{2/3}}$. In addition, one has to use the framework introduced in [@KMW09] forWhat Are The Econometric Models? The current economic situation in France is one in which social equality and prosperity are at the core of the problem. How can a country find itself without economic growth? Not exactly impossible, certainly. The financial crisis cost French society after the end of the 1980s. It was the initial price of French life on the world stage—the monetary crisis of the 1930s. The French modernized, they added a tax which reduced the amount of services and invested time. As we had expect in the 1950s and 1960s, the rate of GDP as a percentage of GDP rose from 20 percent to 30 percent. This was enough to push society back from the top.
But then there was the inflationary tax. While these ideas are not yet out of the question, the ideas put out by most economists are very good. This was the first time economists ever had a firm grasp of economic theory. The ideas were developed largely through a search of all the papers that had been submitted to the Social Science Department of the National Social Research Division of the United States (National Social Research Group) during the intervening months. They were devoted mainly to social science theory and its development as a field in modern times such as the 1970s and 1980s. Then there were the paper reviews of economic circles, from those of Stern to Kuhn. The paper reviews concerned the social order. The problems and the methods were taken a step further—and thus far new schooled methods from the field of economics. The paper reviews were in no way an attempt at economic theory but rather a product of the ongoing publication of more recent papers that covered both technical and economic matters. Some of the chapters were entirely in print from the beginning, all going back to 1906. Others were Homework Help by the Social Science Department of the National Social Sciences Division of the United States while they were being pushed to the margins of the daily media. When the publication of publication became available to the public, it was announced that major new academics were coming to the field: Alfred Fittler, Max Stern, Pierre Lapidot, and M. Dutour-Ghelukh: Their foundations today take over the whole of social science as a journal if not as a field. Benny Gerevich: Former president of the Société Nationale de tout message «socie»; former editor of the French Communist Society; M. de la Laveau in chief editor of the journal La Colla; and, in return for his participation, his work on economics and his research. **New Phases** * One thought: * More research on the social psychology is possible * More science is in the making * In the meantime, let me restate a piece that might be omitted. (1980) Also remembered is his work on contemporary political and economic theory. At the same time, the author's work on economic problems has gone out of fashion. Nor is his thesis submitted to the Social Science Department, but he has largely followed his methods in the field. The paper reviews are always reviewed as part of his research.
The only change in the way in which his methods work is noticeable are the new set of rules-based theory writers. The new set of rules has not changed. Rather, his new style is radically
\begin{definition}[Definition:Uniform Equivalence/Metric Spaces]
Let $M_1 = \struct {A_1, d_1}$ and $M_2 = \struct {A_2, d_2}$ be metric spaces.
Then the mapping $f: A_1 \to A_2$ is a '''uniform equivalence''' of $M_1$ with $M_2$ {{iff}} $f$ is a bijection such that $f$ and $f^{-1}$ are both uniformly continuous.
\end{definition} | ProofWiki |
Cash–Karp method
In numerical analysis, the Cash–Karp method is a method for solving ordinary differential equations (ODEs). It was proposed by Professor Jeff R. Cash [1] from Imperial College London and Alan H. Karp from IBM Scientific Center. The method is a member of the Runge–Kutta family of ODE solvers. More specifically, it uses six function evaluations to calculate fourth- and fifth-order accurate solutions. The difference between these solutions is then taken to be the error of the (fourth order) solution. This error estimate is very convenient for adaptive stepsize integration algorithms. Other similar integration methods are Fehlberg (RKF) and Dormand–Prince (RKDP).
The Butcher tableau is:
  0    |
  1/5  |  1/5
  3/10 |  3/40        9/40
  3/5  |  3/10       −9/10        6/5
  1    | −11/54       5/2        −70/27       35/27
  7/8  |  1631/55296  175/512     575/13824   44275/110592  253/4096
 ------+--------------------------------------------------------------------------
       |  37/378      0           250/621     125/594       0            512/1771
       |  2825/27648  0           18575/48384 13525/55296   277/14336    1/4
The first row of b coefficients gives the fifth-order accurate solution, and the second row gives the fourth-order solution.
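The tableau translates directly into code. The following is a minimal sketch in Python (the function name and interface are illustrative, not taken from a standard library) of a single Cash–Karp step for a scalar equation y′ = f(t, y), returning the fifth-order solution together with the embedded error estimate:

A  = [[],
      [1/5],
      [3/40, 9/40],
      [3/10, -9/10, 6/5],
      [-11/54, 5/2, -70/27, 35/27],
      [1631/55296, 175/512, 575/13824, 44275/110592, 253/4096]]
C  = [0, 1/5, 3/10, 3/5, 1, 7/8]
B5 = [37/378, 0, 250/621, 125/594, 0, 512/1771]                 # 5th-order weights
B4 = [2825/27648, 0, 18575/48384, 13525/55296, 277/14336, 1/4]  # 4th-order weights

def cash_karp_step(f, t, y, h):
    # One step of size h; returns (5th-order solution, error estimate).
    k = []
    for i in range(6):
        yi = y + h * sum(a * kj for a, kj in zip(A[i], k))  # stage value
        k.append(f(t + C[i] * h, yi))
    y5 = y + h * sum(b * kj for b, kj in zip(B5, k))
    y4 = y + h * sum(b * kj for b, kj in zip(B4, k))
    return y5, abs(y5 - y4)

# Example: y' = y with y(0) = 1, one step of size 0.1 (exact value is e**0.1).
y5, err = cash_karp_step(lambda t, y: y, 0.0, 1.0, 0.1)

An adaptive integrator would compare err against a tolerance and shrink or grow h accordingly, as in the other embedded Runge–Kutta pairs mentioned above.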
See also
• Adaptive Runge–Kutta methods
• List of Runge–Kutta methods
Notes
1. Jeff R. Cash, Professor of Numerical Analysis, Imperial College London
References
• J. R. Cash, A. H. Karp. "A variable order Runge-Kutta method for initial value problems with rapidly varying right-hand sides", ACM Transactions on Mathematical Software 16: 201-222, 1990. doi:10.1145/79505.79507.
| Wikipedia |
How to prove the continuity of the metric function?
Given a metric space $(X,d)$, how can one prove that the function $d \colon X \times X \to \mathbf{R}$ is continuous?
If we take any two arbitrary real numbers $a$ and $b$ such that $a < b$, then we need to show that the set $d^{-1} (a,b)$ given by
$$ d^{-1} (a,b) := \{ (x, y) \in X \times X | a < d(x,y) < b \} $$
is open in the product topology on $X \times X$.
A basis for this product topology is the collection of all Cartesian products of open balls in $(X,d)$.
metric-spaces
k.stm
Saaqib Mahmood
$\begingroup$ See also: Is the mapping $d : X\times X \mapsto \mathbb {R}$ continuous?, Metric is continuous function, Continuity between metric d with respect to product topology. $\endgroup$ – Martin Sleziak Jul 26 '17 at 7:39
For $a, b ∈ ℝ$, let $(x,y) ∈ d^{-1} (a..b)$, i.e. $a < d(x,y) < b$. Now choose $ε$ such that $U_{2ε} (d(x,y)) ⊂ (a..b)$ and look at $U_ε (x) × U_ε (y)$.
For any tuple of points $(x',y') ∈ U_ε (x) × U_ε (y)$ you have $$d(x',y') ≤ d(x',x) + d(x,y) + d(y,y') < d(x,y) + 2ε$$ as well as $$d(x,y) ≤ d(x,x') + d(x',y') + d(y',y) < d(x',y') + 2ε$$ This means $a < d(x,y) - 2ε < d(x',y') < d(x,y) + 2ε < b$.
Therefore $U_{ε} (x) × U_{ε} (y) ⊂ d^{-1} (a..b)$.
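(In fact, the two displayed inequalities say precisely that $|d(x',y') − d(x,y)| ≤ d(x,x') + d(y,y')$, so $d$ is $1$-Lipschitz for the sum metric on $X × X$ and hence even uniformly continuous.)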
k.stm
$\begingroup$ Sorry, I didn't recognize the notation, but by $(a..b)$ do you mean the set of all real numbers from $a$ to $b$ exclusive (i.e. the open interval)? $\endgroup$ – Jason Nichols Jul 16 '16 at 21:07
$\begingroup$ Yeah, it's a notation introduced by Knuth, I believe. It's quite handy. It avoids the confusion of regarding $(a,b)$ both as a tuple and as an open interval. See the good notations thread on mathoverflow. $\endgroup$ – k.stm Jul 17 '16 at 17:53
Affine action
Let $W$ be the Weyl group of a semisimple Lie algebra ${\mathfrak {g}}$ (associated to a fixed choice of a Cartan subalgebra ${\mathfrak {h}}$). Assume that a set of simple roots in ${\mathfrak {h}}^{*}$ is chosen.
The affine action (also called the dot action) of the Weyl group on the space ${\mathfrak {h}}^{*}$ is
$w\cdot \lambda :=w(\lambda +\delta )-\delta $
where $\delta $ is the sum of all fundamental weights or, equivalently, half the sum of all positive roots.
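For example, for the reflection $s_{\alpha }$ in a simple root $\alpha $ one has $\langle \delta ,\alpha ^{\vee }\rangle =1$, so

$s_{\alpha }\cdot \lambda =s_{\alpha }(\lambda +\delta )-\delta =\lambda -(\langle \lambda ,\alpha ^{\vee }\rangle +1)\alpha .$

Thus the affine action of a simple reflection is the ordinary reflection shifted by the extra term $-\alpha $; in particular $w\cdot (-\delta )=-\delta $ for every $w\in W$, so $-\delta $ is a fixed point of the affine action.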
| Wikipedia |
Effects of ensemble and summary displays on interpretations of geospatial uncertainty data
Lace M. Padilla1,2,
Ian T. Ruginski1 &
Sarah H. Creem-Regehr1
Cognitive Research: Principles and Implications volume 2, Article number: 40 (2017)
Ensemble and summary displays are two widely used methods to represent visual-spatial uncertainty; however, there is disagreement about which is the most effective technique to communicate uncertainty to the general public. Visualization scientists create ensemble displays by plotting multiple data points on the same Cartesian coordinate plane. Despite their use in scientific practice, it is more common in public presentations to use visualizations of summary displays, which scientists create by plotting statistical parameters of the ensemble members. While prior work has demonstrated that viewers make different decisions when viewing summary and ensemble displays, it is unclear what components of the displays lead to diverging judgments. This study aims to compare the salience of visual features – or visual elements that attract bottom-up attention – as one possible source of diverging judgments made with ensemble and summary displays in the context of hurricane track forecasts. We report that salient visual features of both ensemble and summary displays influence participant judgment. Specifically, we find that salient features of summary displays of geospatial uncertainty can be misunderstood as displaying size information. Further, salient features of ensemble displays evoke judgments that are indicative of accurate interpretations of the underlying probability distribution of the ensemble data. However, when participants use ensemble displays to make point-based judgments, they may overweight individual ensemble members in their decision-making process. We propose that ensemble displays are a promising alternative to summary displays in a geospatial context but that decisions about visualization methods should be informed by the viewer's task.
Understanding how to interpret uncertainty in data, specifically in weather forecasts, is a problem that affects visualization scientists, policymakers, and the general public. For example, in the case of hurricane forecasts, visualization scientists are tasked with providing policymakers with visual displays that will inform their decision on when to call for mandatory evacuations and how to allocate emergency management resources. In other circumstances, the general public may view hurricane forecasts to make decisions about when and how to evacuate. Even though these types of decisions are costly and have a high impact on health and safety, the literature provides few recommendations to visualization scientists about the most effective way to display uncertainty in hurricane forecasts to a novice audience. Previous research has shown that novice viewers misinterpret widely used methods to visualize uncertainty in hurricane forecasts. The current work examines how novice users interpret two standard methods to display uncertainty in hurricane forecasts, namely ensemble and summary displays. We demonstrate how salient elements of a display – or elements in a visualization that attract attention – can influence interpretations of visualizations. We also provide specific recommendations based on empirical evidence for best practices with each technique.
Ensemble data is the most commonly used type of forecast data across many scientific domains, including weather prediction and climate modeling (Sanyal et al., 2010). Scientists create ensemble datasets by generating or collecting multiple data values or 'ensemble members' (Brodlie, Osorio, & Lopes, 2012; Potter et al., 2009). Then, scientists plot all, or a subset of, the ensemble members on the same Cartesian coordinate plane, creating an ensemble display (Harris, 2000). Despite ensemble display use in scientific practice, it is more common to utilize summary displays for public presentations (Pang, 2008). Scientists construct summary displays by plotting statistical parameters, such as the mean, median, distribution, standard deviations, confidence intervals (CIs) and, with some advanced techniques, outliers, of the ensemble members (Whitaker, Mirzargar, & Kirby, 2013). Among the studies that have attempted to assess the efficacy of ensemble and summary visualizations, there is disagreement about the best method to communicate uncertainty to the general public. This work aims to test the efficacy of both approaches in the context of hurricane forecasts.
Supporters of ensemble displays suggest that there are benefits to this visualization method, including (1) the ability to depict all or the majority of the ensemble data, making a representative portion of the data visually available (Liu et al., 2016); (2) the fact that ensemble displays depict non-normal relationships in the data such as bimodal distributions, perceived as discrete clusters (Szafir, Haroz, Gleicher, & Franconeri, 2016); (3) the preservation of relevant outlier information (Szafir et al., 2016); and (4) the fact that viewers can, in some cases, accurately report some statistical parameters depicted by ensemble displays such as probability distributions (Cox, House, & Lindell, 2013; Leib et al., 2014; Sweeny, Wurnitsch, Gopnik, & Whitney, 2015; Szafir et al., 2016), trends in central tendency (Szafir et al., 2016), and mean size and orientation (Ariely, 2001) (for comprehensive reviews see, Alvarez, 2011; Whitney, Haberman, & Sweeny, 2014). Sweeny et al. (2015) further showed that children as young as four could accurately judge the relative average size of a group of objects. Researchers argue that viewers perceive the aforementioned data parameters in ensemble displays because they can mentally summarize visual features of ensemble displays by perceiving the gist or integrating ensemble data into rich and quickly accessible information (Correll & Heer, 2017; Leib et al., 2014; Oliva & Torralba, 2006; Rousselet, Joubert, & Fabre-Thorpe, 2005). In relation to this, Szafir et al. (2016) detailed four types of tasks (identification, summarization, segmentation, and structure estimation) that are well suited for ensemble displays because they utilize ensemble coding or the mental summarization of data. In line with this work, Correll and Heer (2017) found that participants were effective at estimating the slope, amplitude, and curvature of bivariate data when displayed with scatter plots. In contrast, researchers found that viewers had a strong bias when estimating correlations from scatter plots but also demonstrated that the laws that viewers followed remained similar across variations of encoding techniques and data parameters such as changes in density, aspect ratio, color, and the underlying data distribution (Rensink, 2014, 2016). In sum, there is evidence that adult novice viewers and children can, in some cases, derive statistical information from ensemble displays and that ensemble displays can preserve potentially useful characteristics in the ensemble data.
While previous research indicates that there are various benefits to ensemble displays, there are also some drawbacks. The primary issue with ensemble displays is that visual crowding may occur, which happens when ensemble members are plotted too closely together and cannot be easily differentiated, increasing difficulty in interpretation. While researchers have developed algorithms to reduce visual crowding (e.g., Liu et al., 2016), visual crowding may still occur when all of the ensemble data is plotted.
Summary displays are an alternative to ensemble displays and are suggested to be easier and more effective for users to understand. Work in cartography argues that choropleth maps, which are color encodings of summary statistics such as the average value over a region, are more comprehensible than displaying all of the individual data values (Harrower & Brewer, 2003; Watson, 2013). Michael Dobson argued that the summarization in choropleth maps decreases mental workload and time to perform tasks while improving control of information presentation and pattern recognition (Dobson, 1973, 1980). Beyond choropleth maps, summarization techniques have been developed that can encode advanced summary statistics, such as quartiles, outlier data, and task-relevant features, in ensemble datasets (Mirzargar, Whitaker, & Kirby, 2014; Whitaker et al., 2013).
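To make the distinction concrete, the following minimal sketch (Python with Matplotlib; the "tracks" are hypothetical random walks, not forecast output) renders the same simulated ensemble both ways: every member plotted on one coordinate plane versus a mean line with a boundary enclosing the central 66% of members.

```python
# A minimal sketch contrasting an ensemble display (all members plotted)
# with a summary display (mean plus a spread boundary). The simulated
# "tracks" are hypothetical random walks, not real forecast output.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)
n_members, n_steps = 40, 50
time = np.arange(n_steps)

# Each ensemble member drifts randomly; uncertainty accumulates over time.
tracks = np.cumsum(rng.normal(0, 1, size=(n_members, n_steps)), axis=1)

fig, (ax_ens, ax_sum) = plt.subplots(1, 2, sharey=True, figsize=(9, 4))

# Ensemble display: plot every member on the same coordinate plane.
for member in tracks:
    ax_ens.plot(time, member, color="steelblue", alpha=0.4, linewidth=1)
ax_ens.set_title("Ensemble display")

# Summary display: plot the mean and a boundary enclosing ~66% of members.
mean = tracks.mean(axis=0)
lo, hi = np.percentile(tracks, [17, 83], axis=0)  # central 66% of members
ax_sum.plot(time, mean, color="black")
ax_sum.fill_between(time, lo, hi, color="lightgray")
ax_sum.set_title("Summary display")

plt.show()
```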
However, researchers have also documented drawbacks to summarization techniques. First, displays of summary statistics, such as median, mean, and standard deviations, can hide important features in the data such as bimodal or skewed distributions and outliers (Whitaker et al., 2013). Second, summary displays that include boundaries, such as line plots of summary statistics, produce more biased decisions than scatter plots of the same data (Correll & Heer, 2017). Finally, studies have demonstrated that even simple summary displays, such as statistical error bars, are widely misinterpreted by students, the public, and even trained experts (Belia, Fidler, Williams, & Cumming, 2005; Newman & Scholl, 2012; Sanyal, Zhang, Bhattacharya, Amburn, & Moorhead, 2009; Savelli & Joslyn, 2013).
In the context of hurricane forecasts, there is evidence that summary displays may result in more misinterpretations than ensemble displays (Ruginski et al., 2016). A notable example is the National Hurricane Center's (NHC) 'cone of uncertainty' (Fig. 1).
An example of a hurricane forecast cone typically presented to end-users by the National Hurricane Center (http://www.nhc.noaa.gov/aboutcone.shtml)
Forecasters create the cone of uncertainty from a 5-year sample of historical hurricane forecast errors, resulting in a border within which the center of the storm is expected to remain approximately 66% of the time (Cox et al., 2013). Even though the cone of uncertainty is used by the NHC, it does not follow well-established cartographic principles (e.g., Dent, 1999; Robinson, Morrison, Muehrcke, Kimerling, & Guptill, 1995), including hierarchical organization, which asserts that the level of salience should correspond to the importance of the information in a display. However, the cone of uncertainty does reflect the general view that simplifying complex ensemble data will make decisions easier for users. Ruginski et al. (2016) compared five different encodings of ensemble data (three summary displays, one display of the mean, and one ensemble display) of hurricane forecast tracks, using a task in which participants predicted the extent of damage that would occur at a given location. The three summary displays included a standard cone of uncertainty, which had a mean line, a cone without the mean line, and a cone in which the color saturation corresponded to the probability distribution of the ensemble data. Results revealed that, with the summary displays, participants believed that locations at the center of the hurricane at a later point in time would receive more damage than at an earlier time point. Strikingly, ensemble displays showed the reverse pattern of responses, with damage rated lower at the later time. Further, we found that participants viewing any of the summary displays were significantly more likely than those viewing the ensemble display to self-report that the display depicted the hurricane growing in size over time. In fact, the cone depicts only a distribution of potential hurricane paths and no information about size (Cox et al., 2013). One consistency across the three summary displays was the growing diameter of the cone boundaries (as illustrated in Fig. 2a). A possible interpretation of this finding is that viewers focused on the increasing size of the cone, rather than mapping increasing uncertainty to the size of the cone.
Examples of the cone (a, c) and ensemble display (b, d) visualization techniques of hurricane one (a, b) and two (c, d)
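The displays used in the present experiments were generated with the code of Cox et al. (2013); purely as a schematic of the quantile logic behind such a fixed-probability boundary (and not the NHC's actual procedure), one could compute, at each forecast lead time, the radius containing 66% of historical track errors:

```python
# Schematic illustration (not the NHC or Cox et al. (2013) algorithm) of how
# a fixed-probability cone boundary can be derived: at each forecast lead
# time, find the radius that contains 66% of historical track errors.
import numpy as np

rng = np.random.default_rng(seed=2)
lead_times_h = [12, 24, 36, 48, 72]

# Hypothetical historical errors (km); spread grows with lead time.
historical_errors = {t: np.abs(rng.normal(0, 0.9 * t, size=500))
                     for t in lead_times_h}

for t in lead_times_h:
    radius = np.percentile(historical_errors[t], 66)
    print(f"{t:>3} h lead time -> cone radius ~ {radius:6.1f} km")
```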
More generally, one potential source of the misinterpretation of both summary and ensemble displays is their salient visual features. Salient visual features are defined as the elements in a visualization that attract bottom-up attention (e.g., Itti, Koch, & Niebur, 1998; Rosenholtz & Jin, 2005). Researchers have argued that salience is also influenced by top-down factors (e.g., training or prior knowledge), particularly for tasks that simulate real world decisions (Fabrikant, Hespanha, & Hegarty, 2010; Hegarty, Canham, & Fabrikant, 2010; Henderson, 2007). Hegarty et al. (2010) demonstrated that, in a map-based task, top-down task demands influenced where participants looked on the page, and then salience influenced what information they attended to in the region of interest. This work suggests that both top-down processing and salience guide attention. As described above, a salient visual feature of the cone of uncertainty is the border, which surrounds the cone shape and which grows in diameter with time (Fig. 2a). A salient feature of ensemble displays is the individual ensemble members and their relationship to one another (Fig. 2b). It is possible that the salient features of both the cone of uncertainty and ensemble displays of the same data attract viewers' attention and bias their decisions (Bonneau et al., 2014).
The motivation for this work was to address both an applied and a theoretical goal. The applied goal was to test whether salient features of summary and ensemble displays contributed to some of the biases reported in prior work (Ruginski et al., 2016), whereas the theoretical goal was to examine whether salient visual features inform how viewers interpret displays. In the case of the cone of uncertainty, viewers may associate the salient increasing diameter of the cone with changes in the physical size of the hurricane. To test this possibility, in the first experiment, we expanded on our previous paradigm by having participants make estimates of the size and intensity of a hurricane with either ensemble or summary displays. In a second experiment, we focused further on the ensemble visualization and judgments of potential damage across the forecast, testing whether the role of the individual lines presented in an ensemble display would be misinterpreted because of their salience in the display. Finally, in a third experiment, we replicate the second experiment and extend the findings beyond a forced choice task.
In line with our prior work (Ruginski et al., 2016), we hypothesized that participants viewing the cone of uncertainty would report that the hurricane was larger at a future time point. It was an open question whether judgments of intensity would also be associated with the depicted size of the cone. We predicted that those viewing the ensemble display would report that the size and intensity of the storm remained the same in the future because the size cue from the cone was not present. On the other hand, for ensemble hurricane track displays (Fig. 2b, d), it is possible that the individual tracks and their relationship to one another are the salient features used to interpret the hurricane forecast. The tracks in the ensemble display employed by Ruginski et al. (2016) became increasingly farther apart as the distance from the center of the storm increased, which could be associated with a decrease in perceived intensity of the storm. We predicted that participants viewing the ensemble display would believe that the storm was less intense where the individual tracks were farther apart (an effect of distance from the center of the storm). However, because the cone of uncertainty lacks this salient spread of tracks, we predicted that judgments of intensity when viewing the cone would not be affected by distance from the center of the storm.
An example of the cone visualization, shown with the 12 possible oil rig locations. Only one location was presented on each trial (and km were not presented)
Participants were 182 undergraduate students currently attending the University of Utah who completed the study for course credit. Three individuals were excluded from final analyses for failing to follow instructions. Of the 179 included in analyses, 83 were male and 96 were female, with a mean age of 21.78 years (SD = 5.72). Each participant completed only one condition, either size task with cone (n = 40), size task with ensemble display (n = 42), intensity task with cone (n = 48), or intensity task with ensemble display (n = 48).
Stimuli were presented online using the Qualtrics web application (Qualtrics [Computer software], 2005). In each trial, participants were presented with a display depicting a hurricane forecast. The hurricane forecast images were generated using prediction advisory data from two historical hurricanes, available on the NHC website (http://www.nhc.noaa.gov/archive). The cone of uncertainty and an ensemble display technique were both used to depict the two hurricanes (Fig. 2).
Custom computer code was written to construct the summary and ensemble displays, using the algorithm described on the NHC website (http://www.nhc.noaa.gov/aboutcone.shtml). The ensemble and summary displays were created using the code of Cox et al. (2013). The resulting displays were a subset of the five visualization techniques used in Ruginski et al. (2016), which depicted two hurricanes and were randomly presented to participants. All were digitally composited over a map of the U.S. Gulf Coast that had been edited to minimize distracting labeling. These images were displayed to the subjects at a pixel resolution of 740 × 550. A single location of an 'oil rig', depicted as a red dot, was superimposed on the image at one of 12 locations defined relative to the centerline of the cone and the cone boundaries. We placed the oil rigs at the following distances from the centerline of the cone: 69, 173, 277, 416, 520, and 659 km (Fig. 3), which correspond to 0.386, 0.97, 1.56, 2.35, 2.94, and 3.72 cm from the centerline of the hurricane on the map.
Relative points with respect to the center and cone boundary were chosen so that three points fell within the cone boundary (69, 173, and 277 km), three points fell outside the cone boundary (416, 520, and 659 km), and no points appeared to touch the visible center line or boundary lines. Underneath the forecast, a scale ranging from A to I was displayed along with visual depictions. For the intensity task, the scale was indicated by gauges, and for the size task the scale was indicated by circles (Fig. 4). Each circle was scaled by 30% from the prior circle. Each gauge was scaled by 1 'tick' from the prior gauge. The starting size and intensity of the hurricane were overlaid on the beginning of the hurricane track forecast for each trial. Three starting sizes and intensities (C, E, G) were presented in a randomized order.
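As a rough consistency check, the paired kilometer and centimeter values listed above imply an approximately constant map scale of roughly 177–179 km per cm; a few lines of Python recover it:

```python
# Back-of-the-envelope check: the km offsets and their on-map cm values
# imply a roughly constant map scale (~177-179 km per cm).
km = [69, 173, 277, 416, 520, 659]
cm = [0.386, 0.97, 1.56, 2.35, 2.94, 3.72]

for k, c in zip(km, cm):
    print(f"{k:>4} km / {c:5.3f} cm = {k / c:6.1f} km per cm")
```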
An example of the visual depiction of the Likert scales, which depicts intensity with gauges (top) and size with the diameter of the circle (bottom)
Salience assessment
To test the previously stated predictions about the salience of features of ensemble and summary displays, we applied the Itti et al. (1998) salience model. Prior research has employed the Itti et al. (1998) salience model to test the salience of cartographic images and found that this model is a reasonable approximation of bottom-up attention (Fabrikant et al., 2010; Hegarty et al., 2010). The Itti et al. (1998) salience model was run in Matlab (2016, Version 9.1.0.441655) using the code provided by Harel et al. (2007). The results of this analysis suggest that the most salient visual features of the cone of uncertainty are the borders of the cone and the centerline (Fig. 5a). Additionally, the salient visual features of the ensemble display are the relative spread of hurricane tracks (Fig. 5b).
Example of the visual output generated using the Itti et al. (1998) salience model, which shows example stimuli used in this experiment. Brighter coloration indicates increased salience. a The summary display. b The ensemble display
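For readers who wish to experiment with salience maps, the sketch below uses OpenCV's spectral-residual model (available in opencv-contrib-python), a different bottom-up model from Itti et al. (1998), but one that likewise produces a map in which brighter regions attract more bottom-up attention; the input filename is hypothetical.

```python
# A minimal bottom-up salience sketch. OpenCV's spectral-residual model
# (opencv-contrib-python) differs from the Itti et al. (1998) model used
# in the paper, but likewise highlights regions that attract bottom-up
# attention. "forecast.png" is a hypothetical stimulus image.
import cv2

image = cv2.imread("forecast.png")
assert image is not None, "image file not found"

saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
success, saliency_map = saliency.computeSaliency(image)

if success:
    # Brighter pixels indicate more salient visual features,
    # analogous to the bright regions in Fig. 5.
    cv2.imwrite("forecast_salience.png", (saliency_map * 255).astype("uint8"))
```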
We utilized a 2 (visualization type) × 2 (hurricane) × 3 (starting size or intensity) × 12 (oil rig location) mixed factorial design for each task (size and intensity). Hurricane starting size or intensity and the oil rig location were within-participant variables, resulting in a total of 72 trials per participant. Participants were randomly assigned to one of two visualization conditions (summary or ensemble display) and one of two tasks (size or intensity) as between-participant factors.
Individuals were first given a simple explanation of the task and visualization. Participants completing the size task were provided with the following instructions:
"Throughout the study you will be presented with an image that represents a hurricane forecast, similar to the image shown above. You will be provided with the initial hurricane size (diameter) at a particular point in time, indicated by the circle shown at the apex (beginning) of the hurricane forecast. An oil rig is located at the red dot. Assume that the hurricane were to hit the oil rig (at the red dot). Your task will be to select the size that best represents what the hurricane's diameter would be when it reaches the location of the oil rig."
Additionally, each trial included the text as a reminder of the task, "Assume that the hurricane were to hit the oil rig (at the red dot). Your task is to select the size that best represents what the hurricane's diameter would be when it reaches the location of the oil rig."
For the intensity task, participants were provided the instructions:
"Throughout the study you will be presented with an image that represents a hurricane forecast, similar to the image shown above. You will be provided with the initial hurricane wind speed at a particular point in time, indicated by the gauge shown at the apex (beginning) of the hurricane forecast. As the arm of the gauge rotates clockwise, the wind speed increases. For example, gauge A represents the lowest wind speed and gauge I the highest wind speed. An oil rig is located at the red dot. Assume that the hurricane were to hit the oil rig (at the red dot). Your task will be to select the gauge that best represents what the hurricane's wind speed would be when it reaches the location of the oil rig."
Each trial also contained the instructions, "Assume that the hurricane were to hit the oil rig (at the red dot). Your task is to select the gauge that best represents what the hurricane's wind speed would be when it reaches the location of the oil rig."
Following the instructions, participants completed all of the trials presented in a different random order for each participant. Finally, participants answered questions related to comprehension of the hurricane forecasts. These included two questions specifically relevant to the current research question: "The display shows the hurricane getting larger over time." and "The display indicates that the forecasters are less certain about the path of the hurricane as time passes." These questions also included a measure of the participants' understanding of the response glyphs used in the experiment by asking them to indicate which of two wind gauges had a higher speed or to match the size of circles. Participants who did not adequately answer these questions were excluded from the analysis (two participants for the wind speed gauges, one for the size circles).
Multilevel models (MLM) were fit to the data using Hierarchical Linear Modeling 7.0 software and restricted maximum likelihood estimation procedures (Raudenbush & Bryk, 2002). Multilevel modeling is a generalized form of linear regression used to analyze variance in experimental outcomes predicted by both individual (within-participants) and group (between-participants) variables. A MLM was appropriate for modeling our data and testing our hypotheses for two major reasons. Firstly, MLM allows for the inclusion of interactions between continuous variables (in our case, distance) and categorical predictors (in our case, the type of visualization). Secondly, MLM uses robust estimation procedures appropriate for partitioning variance and error structures in mixed and nested designs (repeated measures nested within individuals in this case).
We transformed the dependent variable before analysis by calculating the difference between the starting value of the hurricane (either size or intensity) and the participant's judgment. A positive difference score represents an increase in judged size or intensity. In addition, although ordinal by definition, we treated the Likert-scale dependent variable as continuous in the model because it contained more than five response categories (Bauer & Sterba, 2011).
For the distance variable, we analyzed the absolute value of oil rig distances, regardless of which side of the hurricane forecast they were on, as none of our hypotheses related to whether oil rigs were located on a particular side. We divided the distance by 10 before analysis so that the estimated model coefficient would correspond to a 10-km change (rather than a 1-km change). The mixed two-level regression models tested whether the effect of distance from the center of forecasts (level 1) varied as a function of visualization (level 2). Visualization was dummy coded such that the cone visualization was coded as 0 and the ensemble display as 1. We tested separate models for the intensity and size tasks. Self-report measures of experience with hurricanes and hurricane prone regions were also collected. As the participants were students at the University of Utah, so few had experienced a hurricane (3%) or had lived in hurricane-affected regions (7%) that we did not include these measures as covariates.
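A minimal sketch of these transformations in Python with pandas (column names and values are hypothetical):

```python
# Sketch of the dependent-variable and predictor transformations described
# above, using a hypothetical trial-level DataFrame.
import pandas as pd

df = pd.DataFrame({
    "judgment":        [5, 7, 4, 6],
    "start_value":     [5, 5, 3, 7],
    "rig_distance_km": [-173, 416, 69, -659],  # signed: side of the centerline
    "visualization":   ["cone", "ensemble", "cone", "ensemble"],
})

# Difference score: positive values indicate an increase over the start value.
df["change"] = df["judgment"] - df["start_value"]

# Absolute distance (the side of the forecast is irrelevant to the hypotheses),
# divided by 10 so that coefficients correspond to a 10-km change.
df["distance"] = df["rig_distance_km"].abs() / 10

# Dummy code visualization: cone = 0, ensemble = 1.
df["viz"] = (df["visualization"] == "ensemble").astype(int)

print(df[["change", "distance", "viz"]])
```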
Results – Size
Level 1 of our multilevel model is described by:
$$ \mathrm{Change}_{ij} = \beta_{0j} + \beta_{1j}\,\mathrm{Distance}_{ij} + r_{ij} $$
and level 2 by:
$$ \beta_{0j} = \gamma_{00} + \gamma_{01}\,\mathrm{Visualization}_{j} + u_{0j} $$
$$ \beta_{1j} = \gamma_{10} + \gamma_{11}\,\mathrm{Visualization}_{j} + u_{1j} $$
where i indexes trials, j indexes individuals, and the β and γ terms are regression coefficients. The error term rij captures variance in the outcome variable on a per-trial basis, and u0j on a per-person basis. Although people are assumed to differ on average (u0j) in the outcome variable, we also tested whether the effect of distance differed per person (u1j) using a variance-covariance components test. We found that the model including a random effect of distance fit the data better than the model not including this effect, and so the current results reflect that model (χ2 = 955.95, df = 2, P < 0.001). Including this term allowed us to differentiate between the variance in judgments accounted for by the fixed effect of distance and the variance accounted for by random person-level differences.
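The authors fit these models in HLM 7.0; as an illustrative approximation only, the same two-level structure can be expressed in Python with statsmodels, here on simulated stand-in data, including the comparison of models with and without the random slope of distance:

```python
# Sketch of the two-level model in Python via statsmodels. The data are
# simulated stand-ins with the same structure as the experiment, not the
# real data, so the fitted numbers are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=3)
n_subj, n_trials = 40, 72
subj = np.repeat(np.arange(n_subj), n_trials)
viz = np.repeat(rng.integers(0, 2, n_subj), n_trials)   # cone=0, ensemble=1
distance = rng.choice([6.9, 17.3, 27.7, 41.6, 52.0, 65.9], n_subj * n_trials)

# Simulated change scores with per-subject intercepts (u0) and slopes (u1).
u0 = np.repeat(rng.normal(0, 0.5, n_subj), n_trials)
u1 = np.repeat(rng.normal(0, 0.02, n_subj), n_trials)
change = (1.0 - 0.7 * viz + (u1 - 0.02 * viz) * distance + u0
          + rng.normal(0, 0.8, n_subj * n_trials))

df = pd.DataFrame({"subject": subj, "viz": viz,
                   "distance": distance, "change": change})

# Random intercept only (u_0j) versus random intercept + slope (u_0j, u_1j).
m0 = smf.mixedlm("change ~ distance * viz", df,
                 groups=df["subject"]).fit(reml=False)
m1 = smf.mixedlm("change ~ distance * viz", df,
                 groups=df["subject"], re_formula="~distance").fit(reml=False)

# Likelihood-ratio test for the random slope of distance (cf. the
# variance-covariance components test reported above).
print(f"LR statistic for the random slope: {2 * (m1.llf - m0.llf):.2f}")
print(m1.summary())
```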
Our primary hypothesis was that we would see greater size judgments with the cone compared to the ensemble display, reflecting a misinterpretation that the hurricane grows over time. Consistent with this prediction, we found a significant main effect of visualization type on average change in size judgments (γ 01 = −0.69, standard error (SE) = 0.33, t-ratio = −2.08, df = 80, P = 0.04). This effect indicates that, at the center of the hurricane, individuals viewing the cone visualization had a 0.69 greater increase in their original size judgment compared with individuals viewing the ensemble visualization (Fig. 6). However, the oil rig distance from the center of the storm did not significantly alter change in size judgments (γ 10 = 0.01, SE = 0.01, t-ratio = 1.43, df = 80, P = 0.16) and the effect of distance from the center of the storm on change in size judgments did not differ based on visualization type (γ 11 = −0.01, SE = 0.01, t-ratio = −1.32, df = 80, P = 0.19). Further, the main effect of visualization type on the average change in size judgment was also supported by results of the post-test question. A t-test, in which yes was coded as 1 and no as 0, revealed that participants viewing the cone (M = 0.70, SE = 0.04) were significantly more likely to report that the display showed the hurricane getting larger over time compared to the ensemble display (M = 0.39, SE = 0.05), t(176) = 4.436, P < 0.001, 95% CI 0.17–0.45, Cohen's d = 0.66.
The effect of distance from center and visualization type on change in size judgments. Grey shading indicates ± 1 standard error. Accurate interpretation would be indicated by a '0' change score. A one-unit change represents a one-step change in circle size along a 9-point scale (see Fig. 4 for the 9-point scale)
Results – Intensity
The multilevel model used for the intensity data included the exact same variables as the size model. Similar to the first model, we found that the model including a random effect of distance fit the data better than the model not including this effect, and so the current results reflect that model (χ2 = 704.81, df = 2, P < 0.001).
For intensity, we expected to see a greater effect of distance from the center of the storm on judgments with the ensemble display compared to the cone, reflecting participants' attention to the increasing spread of tracks as the distance from the center increases in the ensemble display. First, we found a significant main effect of visualization type on average change in intensity judgments (γ 01 = −0.85, SE = 0.33, t-ratio = −2.58, df = 95, P = 0.01). This indicates that, at the center of the hurricane, individuals viewing the cone visualization increased their intensity judgment by 0.85 (almost a full wind gauge) more than those who viewed the ensemble visualization. Second, we found a significant main effect of distance from the center of the storm (γ 10 = −0.02, SE = 0.01, t-ratio = −3.28, df = 95, P = 0.001), which is qualified by a significant cross-level interaction between distance and visualization type (γ 11 = −0.02, SE = 0.01, t-ratio = −3.33, df = 95, P = 0.001). To decompose the interaction between distance from the center of the storm and visualization type, we computed simple slope tests for the cone and ensemble visualizations (Fig. 7). This revealed that the association between distance from the center of the hurricane and change in intensity judgment differed from zero for each visualization (cone visualization: Estimate = −0.02, SE = 0.01, χ2 = 64.74, P < 0.001; ensemble visualization: Estimate = −0.04, SE = 0.004, χ2 = 10.74, P = 0.001) and was stronger for the ensemble visualization (χ2 = 101.89, P < 0.001). This result suggests that judgments of intensity decreased with distance more for the ensemble display than for the cone, consistent with a focus on the relative spread of hurricane tracks. In addition, using a t-test, a post-test question revealed that participants viewing the ensemble display (M = 0.53, SE = 0.04) were more likely to report that the display indicated the forecasters were less certain about the path of the hurricane over time compared to the cone (M = 0.39, SE = 0.05), t(176) = −1.97, P = 0.04, 95% CI −0.29 to −0.0003, Cohen's d = 0.29.
Simple slopes of the interaction between distance and visualization type on change in intensity judgments. Grey shading indicates ± 1 standard error. Accurate interpretation would be indicated by a '0' change score. A one-unit change represents a one-step change in gauge intensity along a 9-point scale (see Fig. 4 for the 9-point scale)
The results of this experiment showed that novice users interpret the size and intensity of a hurricane represented by ensemble and summary displays differently. Our prior work showed different damage ratings over time with the cone compared to the ensemble display, but it was unclear whether these were being driven by interpretations of size or intensity because a more general concept of 'damage' was used (Ruginski et al., 2016). In the current study, we found a similar pattern of greater increase in both size and intensity reported at the center of the hurricane with the cone, compared to the ensemble display. Furthermore, we found an effect of decreasing intensity judgments with distance from the center of the storm that was greater for the ensemble display than for the cone.
These findings support our hypothesis that a salient feature of the cone is the border that shows the diameter of the cone, which is more likely to influence viewers' beliefs that the storm is growing over time compared to the ensemble display, which does not have this visually salient feature. We saw evidence of the participants' beliefs that the cone represented the storm growing in size with both objective judgments of size (which increased more relative to judgments made using the ensemble display) and self-reported interpretations of the cone of uncertainty. Our second hypothesis that participants viewing the ensemble display would believe that the storm was less intense where the individual tracks were farther apart was supported by results of the intensity task conditions. Here, while intensity ratings were higher for the cone compared to the ensemble display, the rate of decrease in ratings of intensity as distance from the center of the storm increased was greater for the ensemble display than the cone. Together, these findings demonstrate that, in the context of hurricane forecasts, the salient visual features of the display bias viewers' interpretations of the ensemble hurricane tracks.
More generally, we suggest that summary displays will be most effective for cases in which spatial boundaries of variables such as uncertainty cannot be misconstrued as presenting physical boundaries. In contexts like cartography, where spatial layouts inherently represent physical space, ensemble displays provide a promising alternative to summary displays. Although our findings suggest that ensemble displays seem to have some advantages over summary displays to communicate data with uncertainty in a geospatial context, it may also be the case that ensemble displays provoke additional unintended biases. We tested one potential ensemble display bias in Experiment 2.
While the findings of Experiment 1 suggested that viewers of the ensemble visualization are less likely to believe that the hurricane is growing in size, it is possible that ensemble displays also elicit unique biases. One possible bias is that the individual tracks of an ensemble display can lead a viewer to overestimate the impact of the hurricane for locations covered by a path. The storm tracks presented are only a sample of the possible paths the hurricane could take, not an exhaustive list of all routes. It would be a misconception to believe that a hurricane would travel the full extent of any one track. Further, it would also be incorrect to believe that locations not covered by a path have little to no possibility of being hit by the storm. Rather, the relative density of tracks indicates the comparative probability of the hurricane being in a given region at future time points.
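Under this normative reading, the probability that the storm center passes near a location is approximated by the fraction of ensemble members that pass within some radius of it, as in the following sketch (hypothetical random-walk tracks; the function name and radius are illustrative):

```python
# Sketch of the normative reading of an ensemble display: the probability
# that the storm center passes near a location is approximated by the
# fraction of ensemble members passing within some radius of it. The
# tracks here are hypothetical random walks, not real forecasts.
import numpy as np

rng = np.random.default_rng(seed=4)
n_members, n_steps = 50, 60
tracks = np.cumsum(rng.normal(0, 5, size=(n_members, n_steps, 2)), axis=1)

def hit_probability(tracks, location, radius_km):
    """Fraction of members whose track ever passes within radius_km."""
    dists = np.linalg.norm(tracks - np.asarray(location), axis=2)
    return np.mean(dists.min(axis=1) <= radius_km)

# A location lying on one member's path can still have low overall probability.
print(hit_probability(tracks, location=(10.0, -15.0), radius_km=20.0))
```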
To test whether viewers' decisions are biased by the individual paths of the ensemble visualization, we conducted a second experiment in which the locations of the oil rigs were changed so that one oil rig was always superimposed on a hurricane path. We examined whether viewers would maintain the strategy to rate higher damage closer to the center of the storm, as reported in Ruginski et al. (2016) (i.e., selecting the closest rig to the center), or whether the salience of the ensemble track location would decrease the strength of the distance-based strategy (i.e., selecting the rig that was superimposed on a hurricane path, even when located farther away from the center of the storm). In this experiment, participants were presented with two oil rigs, one that was located on a hurricane path and one that was either closer (Fig. 8a) or farther from the center of the storm (Fig. 8b) than the one that was located on the path.
Examples of the stimuli used in Experiment 2 depicting two hurricanes. a Condition in which the farther rig from the center of the storm was located on a hurricane track. b Condition where the closest rig was located on a hurricane track
Participants were then asked to decide which of the two oil rigs would receive the most damage. Our hypothesis was that the likelihood of choosing the rig closer to the center of the storm would decrease if the rig farther from the center fell on a hurricane path, supporting the notion that the individual paths are salient features of the ensemble display that could lead to biased responses. In the rest of the paper, 'close oil rig' refers to the rig closer to the center of the hurricane forecast display, and 'far oil rig' to the rig farther from the center.
Participants were 43 undergraduate students currently attending the University of Utah who completed the study for course credit; 12 participants were male and 31 were female, with a mean age of 23.56 years (SD = 7.43).
Stimuli were presented using the previously detailed approach. On each trial, participants were presented with a display depicting a hurricane forecast and two oil rigs (Fig. 8). The distance between the oil rigs was roughly 100 km and remained constant across all trials. The 16 locations of the rig pairs were chosen selectively so that one rig was always located on a track and the other oil rig was at the same time point but not on a track, with an equal number of locations on each side of the hurricane. The rig on the track was either closer to or farther from the center relative to the rig that was not touching a track. Underneath the forecast, radio buttons allowed participants to indicate which oil rig they believed would receive the most damage. Damage was used as the response measure because participants have been found to be more likely to use a strategy based on distance from the center of the hurricane when making damage judgments. This measure allowed us to determine whether the colocation of an oil rig and a hurricane track modified the types of distance-based damage judgments reported in Ruginski et al. (2016).
We utilized a within-subjects design, 2 (close oil rig on line or far oil rig on line) × 2 (hurricane) × 16 (oil rig pair locations), resulting in a total of 32 trials per participant. Oil rig on line refers to whether the closer or the farther oil rig from the center of the hurricane was located on a hurricane track.
Individuals were first given a simple explanation of the task and visualization.
"Throughout the study you will be presented with an image that represents a hurricane forecast, similar to the image shown above. An oil rig is located at each of the two red dots. Your task is to decide which oil rig will receive more damage based on the depicted forecast of the hurricane path."
Additionally, each trial included the text, "Your task is to decide which oil rig will receive the most damage from the hurricane." Following the instructions, participants checked a box indicating which oil rig they believed would receive the most damage. The trials were presented in a different random order for each participant. Finally, participants answered demographic questions and questions related to hurricane experience.
A multilevel logistic regression model was fit to the data using the lme4 package in R and maximum likelihood Laplace approximation estimation procedures (Bates, Maechler, Bolker, & Walker, 2015). A logistic MLM was appropriate for modeling our data and testing our hypotheses because it uses robust estimation procedures appropriate for partitioning variance and error structures in mixed and nested designs (repeated measures nested within individuals in this case) for binary outcomes (choosing which oil rig would receive more damage in this case).
$$ \mathrm{logit}\left(P\left(\mathrm{CloseStrategy}_{ij} = 1\right)\right) = \beta_{0j} + \beta_{1j}\,\mathrm{FarRigOnLine}_{ij} $$
$$ \beta_{0j} = \gamma_{00} + u_{0j} $$
$$ \beta_{1j} = \gamma_{10} $$
Far Rig On Line was dummy coded such that the farther rig overlapping with a line corresponded to 1, while the closer rig being on the line corresponded to 0. Our outcome variable, Close Strategy, was coded such that selecting the close oil rig to receive more damage corresponded to 1 and selecting the far oil rig to receive more damage corresponded to 0. We found that the model not including a random effect of On Line fit the data better than the model including this effect, and so the current results reflect the former (χ2 = 5.79, df = 1, P = 0.02). This indicates that there was a consistent fixed effect of On Line across people.
The participants had very high odds of deciding that the closer oil rig would receive the most damage when the closer oil rig was on the line (and, by design, the farther oil rig was not on a line) (γ 00 = 5.75, SE = 0.52, odds ratio (OR) = 314.19 (see Footnote 1), z = 11.19, P < 0.001). Expressed in terms of predicted probability, this effect indicates that participants chose the closer oil rig to receive more damage 99.68% of the time when the closer oil rig was on a line (Fig. 9). This very high proportion makes sense, as this condition combined properties of close location to the center and a location falling on the path. Importantly, our model indicated a strong effect of Far Rig On Line, such that the predicted probability of choosing the closer oil rig as receiving the most damage decreased to 64.15% when the farther oil rig was on the line (γ 10 = −5.17, SE = 0.37, OR = 0.006, z = −13.85, P < 0.001; Fig. 9). In this condition, the far oil rig was chosen in 304 of the 688 trials, compared to only 12 of the 688 trials when it was not on the line.
In other words, while participants chose the closer oil rig more often in both conditions, the result that the tendency to choose the farther rig increased by about 35% when the farther rig fell on a visual path strongly supports the use of the individual path as a salient feature influencing decisions (Fig. 9).
Predicted probabilities of choosing the close oil rig to receive more damage. Bars represent 95% confidence intervals. Accurate interpretation would be to choose the close oil rig 100% of the time
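The predicted probabilities reported above follow directly from the fitted log-odds via the inverse-logit function, as this short worked example shows (coefficients taken from the text; small discrepancies reflect rounding):

```python
# Converting the fitted log-odds from the Experiment 2 model into the
# predicted probabilities reported in the text.
import math

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

g00, g10 = 5.75, -5.17  # gamma_00 (intercept), gamma_10 (Far Rig On Line)

print(inv_logit(g00))        # ~0.9968: close rig chosen when it is on a line
print(inv_logit(g00 + g10))  # ~0.64: close rig chosen when far rig is on line
```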
We found that non-experts almost always chose the oil rig closer to the center of the hurricane forecast when that oil rig fell on an individual hurricane track, consistent with prior work showing a strategy of reporting more damage for locations close to the center (Ruginski et al., 2016). However, when the farther oil rig visually overlapped with a single ensemble track, judgments were significantly biased by the individual path, decreasing the likelihood of choosing the close location. The results of the second study suggest that ensemble displays have their own set of interpretation biases, as individual ensemble members can be overweighted in participants' judgments.
In an effort to replicate the prior study and to test whether the findings were robust to a non-forced-choice task, a third study was conducted that was identical to Experiment 2 but with an additional response option of 'Equal Damage'. By adding an 'Equal Damage' response, participants could indicate that neither oil rig would receive more damage than the other. The same methods and data analysis were used as in Experiment 2. Participants were 35 undergraduate students currently attending the University of Utah who completed the study for course credit; 10 participants were male and 24 were female, with a mean age of 22.06 years (SD = 4.5).
As in Experiment 2, we used a multilevel logistic regression model to determine the impact of the colocation of an ensemble track and an oil rig. Prior to analysis, trials for which participants reported 'Equal Damage' (219 trials, 19.55% of the total) were removed. Of the trials where participants reported equal damage, 79 occurred when the close rig was on a line and 140 occurred when the far rig was on a line. Models including fixed effects only and models also including random effects fit the data equally well, and the results detail the more parsimonious model without the random effect (χ2 = 0, df = 1, P = 1.00). This indicates that there was a consistent fixed effect of the oil rig touching an ensemble track across people.
Consistent with Experiment 2, participants had high odds of deciding that the closer oil rig would receive the most damage when it was on the line (γ 00 = 10.94, SE = 1.52, OR = 56387.34 (see Footnote 2), z = 7.2, P < 0.001). In other words, participants indicated that the closer oil rig would receive more damage 99.99% of the time when it was on a line. This finding replicates the results of our prior experiment. Further, our results showed an effect similar to that of Experiment 2 for Far Rig On Line, such that the predicted probability of choosing the closer oil rig as receiving the most damage decreased to 54.59% when the farther oil rig was on the line (γ 10 = −10.76, SE = 1.29, OR = 0.00002, z = −8.36, P < 0.001; Fig. 10). In this condition, the far oil rig was chosen in 238 of 420 trials, compared to only 1 of 481 trials when it was not on the line. In sum, Experiment 3 replicates the take-home points of Experiment 2, but the SE increased in Experiment 3. It is likely that including the response option of 'Equal Damage' increased the variability of the responses by decreasing the sample size (removing trials) and by participants choosing the far rig more often (almost 50–50) among the trials that were not judged as equal damage.
In Experiment 3, we replicated Experiment 2, showing that participants were significantly biased by the colocation of an oil rig and an individual ensemble track. In the third study, in 19.55% of trials, individuals believed that the two oil rigs would receive equal damage, and about twice as many of these trials occurred when the far oil rig was on the line, providing additional evidence that the line competes with proximity to the center in evaluations of damage. For the rest of the trials, where individuals chose either the close or the far oil rig, results were consistent with Experiment 2, showing a decrease in the likelihood of choosing the close location when the far oil rig fell on the line. Together, these studies demonstrate that decisions about ensemble displays of hurricane forecast tracks change when making judgments about specific points that intersect with a track. More broadly, this work suggests that individual members of an ensemble display may be overweighted when an ensemble member happens to overlap with a point of interest. For example, individuals may be more likely to evacuate or take precautionary actions if a hurricane forecast track overlaps with their own town, but feel less concerned if it does not. These results suggest that visualization scientists should consider using ensemble displays in cases where users do not need to make decisions about specific points that may be influenced by an ensemble member. Instead, ensemble displays may be best suited for cases in which viewers are making judgments about patterns in the data or about areas, which is consistent with the tasks proposed for ensemble displays by Szafir et al. (2016).
Our findings may be influenced by the nature of the task in a geospatial context, where asking about a single point biases users towards more of an outlier-identification strategy (Szafir et al., 2016). Future work involving interpretation of geospatial uncertainty may help to disentangle this by implementing tasks that require individuals to make judgments about larger areas of space (such as a county), which may force individuals to summarize the visualization and be less biased by individual tracks. Correll and Heer (2017) provide support for the claim that tasks influence the nature of biases by demonstrating that viewers are not affected by outliers when making judgments about the overall trends in ensemble data.
Our first study demonstrated that novice users interpret the size and intensity of a hurricane represented by an ensemble display and the cone of uncertainty differently, with relative lower size and intensity judgments over time for the ensemble display compared to the cone. These findings support our hypothesis that viewers of the cone of uncertainty are more likely to incorrectly believe that the visualization depicts the hurricane growing over time, consistent with the results of Ruginski et al. (2016). Furthermore, in the intensity task condition, we found a stronger effect of distance from the center of the hurricane for the ensemble display than for the cone. This result is in line with our predictions, providing evidence that a salient feature of the ensemble display is the tracks and their relationship to one another. In sum, these studies suggest that the type of visualization technique used to depict hurricane tracks significantly influences viewers' judgments of size and intensity – these effects are likely driven by the salient features of the displays, consistent with prior work (Correll & Heer, 2017; Newman & Scholl, 2012). Beyond hurricane forecasts, this work proposes that salient visual features in a display can attract viewers' attention and bias their decisions. Attention may bias viewers' judgments by manipulating the relative importance of features. Viewers may overweight the importance of salient features because they are attending to them more or they may devalue other features that they pay less attention to.
Despite their benefits, ensemble displays are not free of biases that negatively affect uncertainty comprehension. Our second and third studies found that, while novice users predominantly make judgments as if ensemble displays are distributions of probable outcomes, they also indicate that locations that are touching an individual ensemble track will receive more damage. However, we speculate that individual ensemble members may only influence judgments of specific points and may not influence users making judgments about areas. This assertion is consistent with work that suggests ensemble displays are well suited to conveying the gist of a scene (Correll & Heer, 2017; Oliva & Torralba, 2006; Rousselet et al., 2005). Further, the types of tasks that Szafir et al. (2016) propose for ensemble displays all include identifying patterns in groups of data that are spatially organized rather than point-based judgments. This suggests that visualization scientists should consider the types of tasks that their users will be completing when selecting the appropriate visualization technique, and that ensemble displays are most appropriate for tasks that do not require judgments about specific points.
Understanding human reasoning with static ensemble displays is a necessary first step to unpacking ensemble cognition; however, many visualization scientists may desire to present ensemble displays as animations or time-varying displays (Liu et al., 2016). Time-varying displays continually update the visualization with simulations, fading simulations out as a function of their time on the screen, which could reduce the salience of individual simulations. Directly manipulating the salience of features with animations, in line with Fabrikant et al. (2010) and Hegarty et al. (2010), is a possible future direction for this work. While animations may reduce biases produced by individual tracks, they may not be entirely beneficial (Tversky, Morrison, & Betrancourt, 2002) and often show little benefit when learning information from visualizations (Hegarty, Kriz, & Cate, 2003). However, the aforementioned work predominantly examined process diagrams and the negative impact of animations may not generalize to decision-making with uncertainty visualizations. Additionally, many animated visualization techniques also include user interaction capabilities. To determine the specific contributions of animation and user interaction to ensemble cognition, a systematic study is needed that tests both area and point-based judgments using these techniques.
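As an illustration only, one simple implementation of such fading is to decay each simulation's opacity exponentially with its time on screen (the time constant and cutoff below are hypothetical):

```python
# Hypothetical sketch of the fading described above: each simulation's
# opacity decays exponentially with its time on screen, de-emphasizing
# (and eventually removing) older ensemble members.
import math

def member_alpha(age_s: float, tau_s: float = 2.0, floor: float = 0.05) -> float:
    """Opacity for a simulation that has been visible for age_s seconds."""
    alpha = math.exp(-age_s / tau_s)
    return alpha if alpha >= floor else 0.0  # drop near-invisible members

for age in (0.0, 1.0, 2.0, 4.0, 8.0):
    print(f"age {age:3.1f} s -> alpha {member_alpha(age):.2f}")
```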
Future work is also needed to address claims of how ensemble and summary displays are used beyond geospatial weather forecasting. Hurricanes are an example of geospatial data forecasting involving movement over space and time. It is possible that interpretations of ensemble versus summary displays differ across data dimensionality (e.g., 1-D bar charts or violin plots, see Correll & Gleicher, 2014) as well as across domains. For example, GPS-location data visualizations elicit top-down influences that can modify viewers' judgments (Hegarty, Friedman, Boone, & Barrett, 2016). However, it is unclear if viewers of weather forecasting data visualizations demonstrate the same top-down influences. Additionally, the current studies provided limited information about the nature of the displays. This may have led viewers to rely more on visually salient features than they would have if provided with more specific instructions highlighting common misconceptions about uncertainty visualizations, including that changes in size of the display can represent other information than physical size changes and that ensemble members are not always an exhaustive representation of all of the data. If we had given participants more information about what the cone or ensemble represents, they might have misinterpreted it less. Future work could add supplemental instruction before display presentation and assess how effectively that information facilitates desired interpretations. Other biases may have resulted from the specific visual information depicted in the display. Perceptual biases and limitations of the visual system, such as simultaneous contrast effect and just noticeable differences, were not controlled for. Prior work shows that perception interacts with visualization techniques (e.g., Cleveland & McGill, 1986; Kosara & Skau, 2016). As such, future work is needed to generalize these findings beyond a geospatial context and to other visualization techniques.
While there is disagreement about the optimal ways to visualize ensemble data, our work argues that both summary and ensemble displays have inherent biases based on their salient visual features. We propose that summary displays of geospatial uncertainty can be misinterpreted as displaying size information, while ensemble displays of the same information are not subject to this bias. On the other hand, when participants use ensemble displays to make point-based judgments, they may overweight individual ensemble members in their decision-making process. Overall, both user expertise and the intended visualization goal should be considered when visualization scientists decide to implement either summary or ensemble displays to communicate uncertainty. Current practice in visualization tends to emphasize the development of visualization methods more than testing usability (Isenberg, Isenberg, Chen, Sedlmair, & Möller, 2013), although there is a growing acknowledgment of the importance of incorporating human cognition and performance in visualization research (Carpendale, 2008; Kinkeldey, MacEachren, Riveiro, & Schiewe, 2015; Plaisant, 2004). As data availability and associated uncertainty visualization techniques continue to expand across the academic, industry, and public spheres, scientists must continue to advance the understanding of end-user interpretations in order for these visualizations to have their desired impact.
Footnote 1. The odds for the γ 00 intercept and the subsequent odds ratio for the γ 10 term are extreme values because the far oil rig was chosen in only 12 out of 688 trials when the close rig was on a line.
Footnote 2. The odds for the γ 00 intercept and the subsequent odds ratio for the γ 10 term are extreme values because the far oil rig was chosen in only 1 out of 481 trials when the close rig was on a line.
MLM: multilevel model
NHC: National Hurricane Center
Alvarez, G. A. (2011). Representing multiple objects as an ensemble enhances visual cognition. Trends in Cognitive Sciences, 15(3), 122–131.
Ariely, D. (2001). Seeing sets: Representation by statistical properties. Psychological Science, 12(2), 157–162.
Bates, D., Maechler, M., Bolker, B., Walker, S., Christensen, R. H. B., & Singmann, H. (2015). lme4: Linear mixed-effects models using Eigen and S4. R package version 1(4).
Bauer, D. J., & Sterba, S. K. (2011). Fitting multilevel models with ordinal outcomes: Performance of alternative specifications and methods of estimation. Psychological Methods, 16(4), 373.
Belia, S., Fidler, F., Williams, J., & Cumming, G. (2005). Researchers misunderstand confidence intervals and standard error bars. Psychological Methods, 10(4), 389.
Bonneau, G.-P., Hege, H.-C., Johnson, C. R., Oliveira, M. M., Potter, K., Rheingans, P., …Schultz, T. (2014). Overview and state-of-the-art of uncertainty visualization. In Scientific Visualization (pp. 3–27). New York: Springer.
Brodlie, K., Osorio, R. A., & Lopes, A. (2012). A review of uncertainty in data visualization. In J. Dill, R. Earnshaw, D. Kasik, J. Vince, & P. C. Wong (Eds.), Expanding the frontiers of visual analytics and visualization (pp. 81–109). New York: Springer.
Carpendale, S. (2008). Evaluating information visualizations. In Information Visualization (pp. 19–45). New York: Springer.
Cleveland, W. S., & McGill, R. (1986). An experiment in graphical perception. International Journal of Man-Machine Studies, 25(5), 491–500.
Correll, M., & Gleicher, M. (2014). Error bars considered harmful: Exploring alternate encodings for mean and error. IEEE Transactions on Visualization and Computer Graphics, 20(12), 2142–2151.
Correll, M., & Heer, J. (2017). Regression by Eye: Estimating Trends in Bivariate Visualizations. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 1387–1396). New York: ACM.
Cox, J., House, D., & Lindell, M. (2013). Visualizing uncertainty in predicted hurricane tracks. International Journal for Uncertainty Quantification, 3(2), 143–156.
Dent, B. D. (1999). Cartography-thematic map design. New York: WCB/McGraw-Hill.
Dobson, M. W. (1973). Choropleth maps without class intervals?: a comment. Geographical Analysis, 5(4), 358–360.
Dobson, M. W. (1980). Perception of continuously shaded maps. Annals of the Association of American Geographers, 70(1), 106–107.
Fabrikant, S. I., Hespanha, S. R., & Hegarty, M. (2010). Cognitively inspired and perceptually salient graphic displays for efficient spatial inference making. Annals of the Association of American Geographers, 100(1), 13–29.
Harel, J., Koch, C., & Perona, P. (2007). Graph-based visual saliency. In Advances in neural information processing systems (pp. 545–552).
Harris, R. L. (2000). Information graphics: A comprehensive illustrated reference. Oxford: Oxford University Press.
Harrower, M., & Brewer, C. A. (2003). ColorBrewer.org: an online tool for selecting colour schemes for maps. The Cartographic Journal, 40(1), 27–37.
Hegarty, M., Canham, M. S., & Fabrikant, S. I. (2010). Thinking about the weather: How display salience and knowledge affect performance in a graphic inference task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(1), 37.
Hegarty, M., Friedman, A., Boone, A. P., & Barrett, T. J. (2016). Where are you? The effect of uncertainty and its visual representation on location judgments in GPS-like displays. Journal of Experimental Psychology: Applied, 22(4), 381.
Hegarty, M., Kriz, S., & Cate, C. (2003). The roles of mental animations and external animations in understanding mechanical systems. Cognition and Instruction, 21(4), 209–249.
Henderson, J. M. (2007). Regarding scenes. Current Directions in Psychological Science, 16(4), 219–222.
Isenberg, T., Isenberg, P., Chen, J., Sedlmair, M., & Möller, T. (2013). A systematic review on the practice of evaluating visualization. IEEE Transactions on Visualization and Computer Graphics, 19(12), 2818–2827.
Itti, L., Koch, C., & Niebur, E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11), 1254–1259.
Kinkeldey, C., MacEachren, A. M., Riveiro, M., & Schiewe, J. (2015). Evaluating the effect of visually represented geodata uncertainty on decision-making: systematic review, lessons learned, and recommendations. Cartography and Geographic Information Science, 44(1), 1–21.
Kosara, R., & Skau, D. (2016). Judgment error in pie chart variations. In Proceedings of the Eurographics/IEEE VGTC Conference on Visualization: Short Papers (pp. 91–95). Geneva: Eurographics Association.
Leib, A. Y., Fischer, J., Liu, Y., Qiu, S., Robertson, L., & Whitney, D. (2014). Ensemble crowd perception: A viewpoint-invariant mechanism to represent average crowd identity. Journal of Vision, 14(8), 26.
Liu, L., Boone, A., Ruginski, I., Padilla, L., Hegarty, M., Creem-Regehr, S. H., …House, D. H. (2016). Uncertainty Visualization by Representative Sampling from Prediction Ensembles. IEEE Transactions on Visualization and Computer Graphics, 23(9), 2165–2178.
Matlab. (2016). Version 9.1.0.441655. Natick: The MathWorks Inc.
Mirzargar, M., Whitaker, R. T., & Kirby, R. M. (2014). Curve boxplot: Generalization of boxplot for ensembles of curves. IEEE Transactions on Visualization and Computer Graphics, 20(12), 2654–2663.
Newman, G. E., & Scholl, B. J. (2012). Bar graphs depicting averages are perceptually misinterpreted: the within-the-bar bias. Psychonomic Bulletin & Review, 19(4), 601–607. https://doi.org/10.3758/s13423-012-0247-5.
Oliva, A., & Torralba, A. (2006). Building the gist of a scene: The role of global image features in recognition. Progress in Brain Research, 155, 23–36.
Pang, A. (2008). Visualizing uncertainty in natural hazards. In Risk Assessment, Modeling and Decision Support (pp. 261–294). New York: Springer.
Plaisant, C. (2004). The challenge of information visualization evaluation. In Proceedings of the working conference on Advanced visual interfaces (pp. 109–116). New York: ACM.
Potter, K., Wilson, A., Bremer, P. T., Williams, D., Doutriaux, C., Pascucci, V., …Johnson, C. R. (2009). Ensemble-vis: A framework for the statistical visualization of ensemble data. In Data Mining Workshops, 2009. ICDMW'09. IEEE International Conference (pp. 233–240). Miami: IEEE.
Qualtrics [Computer software]. (2005). Retrieved from http://www.qualtrics.com.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (Vol. 1). Thousand Oaks: Sage.
Rensink, R. A. (2014). On the prospects for a science of visualization. In Handbook of human centric visualization (pp. 147–175). New York: Springer.
Rensink, R. A. (2016). The nature of correlation perception in scatterplots. Psychonomic Bulletin & Review, 24, 776–797.
Robinson, A. H., Morrison, J. L., Muehrcke, P. C., Kimerling, A. J., & Guptill, S. C. (1995). Elements of cartography. New York: John Wiley & Sons.
Rosenholtz, R., & Jin, Z. (2005). A computational form of the statistical saliency model for visual search. Journal of Vision, 5(8), 777.
Rousselet, G., Joubert, O., & Fabre-Thorpe, M. (2005). How long to get to the "gist" of real-world natural scenes? Visual Cognition, 12(6), 852–877.
Ruginski, I. T., Boone, A. P., Padilla, L. M., Liu, L., Heydari, N., Kramer, H. S., …Creem-Regehr, S. H. (2016). Non-expert interpretations of hurricane forecast uncertainty visualizations. Spatial Cognition & Computation, 16(2), 154–172.
Sanyal, J., Zhang, S., Bhattacharya, G., Amburn, P., & Moorhead, R. (2009). A user study to compare four uncertainty visualization methods for 1d and 2d datasets. IEEE Transactions on Visualization and Computer Graphics, 15(6), 1209–1218.
Sanyal, J., Zhang, S., Dyer, J., Mercer, A., Amburn, P., & Moorhead, R. J. (2010). Noodles: A Tool for Visualization of Numerical Weather Model Ensemble Uncertainty. IEEE Transactions on Visualization and Computer Graphics, 16(6), 1421–1430. https://doi.org/10.1109/TVCG.2010.181.
Savelli, S., & Joslyn, S. (2013). The advantages of predictive interval forecasts for non‐expert users and the impact of visualizations. Applied Cognitive Psychology, 27(4), 527–541.
Sweeny, T. D., Wurnitsch, N., Gopnik, A., & Whitney, D. (2015). Ensemble perception of size in 4–5‐year‐old children. Developmental Science, 18(4), 556–568.
Szafir, D. A., Haroz, S., Gleicher, M., & Franconeri, S. (2016). Four types of ensemble coding in data visualizations. Journal of Vision, 16(5), 11.
Tversky, B., Morrison, J. B., & Betrancourt, M. (2002). Animation: can it facilitate? International Journal of Human-Computer Studies, 57(4), 247–262.
Watson, D. (2013). Contouring: a guide to the analysis and display of spatial data. Oxford: Pergamon.
Whitaker, R. T., Mirzargar, M., & Kirby, R. M. (2013). Contour boxplots: A method for characterizing uncertainty in feature sets from simulation ensembles. Visualization and Computer Graphics, IEEE Transactions, 19(12), 2713–2722.
Whitney, D., Haberman, J., & Sweeny, T. D. (2014). From textures to crowds: multiple levels of summary statistical perception. In J. S. Werner & L. M. Chalupa (Eds.), The new visual neurosciences (pp. 695–710). Boston: MIT Press.
We are thankful to Donald House and Le Liu for their assistance with stimulus generation.
This work was supported by the National Science Foundation under Grant No. 1212806.
All datasets on which the conclusions of the manuscript rely were deposited in a publicly accessible GitHub repository.
University of Utah, Salt Lake City, USA
Department of Psychology, University of Utah, 380 S. 1530 E., Room 502, Salt Lake City, UT, 84112, USA
Lace M. Padilla, Ian T. Ruginski & Sarah H. Creem-Regehr
LMP is the primary author of this study, and she was central to the experimental design, data collection, interpretation of results, and manuscript preparation. ITR also significantly contributed to experimental design, data collection, data analysis, and manuscript preparation. SHC contributed to the theoretical development and manuscript preparation. All authors read and approved the final manuscript.
Correspondence to Lace M. Padilla.
LMP is a Ph.D. student at the University of Utah in the Cognitive Neural Science department. LMP is a member of the Visual Perception and Spatial Cognition Research Group directed by Sarah Creem-Regehr, Ph.D., Jeanine Stefanucci, Ph.D., and William Thompson, Ph.D. Her work focuses on graphical cognition, decision-making with visualizations, and visual perception. She works on large interdisciplinary projects with visualization scientists and anthropologists.

ITR received his B.A. in Cognitive Science and Religious Studies from Vassar College and his M.S. in Psychology from the University of Utah. He is currently a Ph.D. student in the Department of Psychology at the University of Utah. ITR's research interests include applying cognitive theory to uncertainty visualization design and evaluation, as well as the influence of emotional, social, and individual differences factors on perception and performance.

SHC is a Professor in the Psychology Department of the University of Utah. She received her MA and Ph.D. in Psychology from the University of Virginia. Her research serves joint goals of developing theories of perception-action processing mechanisms and applying these theories to relevant real-world problems in order to facilitate observers' understanding of their spatial environments. In particular, her interests are in space perception, spatial cognition, embodied cognition, and virtual environments. She co-authored the book Visual Perception from a Computer Graphics Perspective, and was previously Associate Editor of Psychonomic Bulletin & Review and Experimental Psychology: Human Perception and Performance.
The research reported in this paper was conducted in adherence to the Declaration of Helsinki and received IRB approval from the University of Utah, #IRB_00057678. Participants in the studies freely volunteered to participate and could elect to discontinue the study at any time.
Consent to publish was obtained from all participants in the study.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Padilla, L.M., Ruginski, I.T. & Creem-Regehr, S.H. Effects of ensemble and summary displays on interpretations of geospatial uncertainty data. Cogn. Research 2, 40 (2017). https://doi.org/10.1186/s41235-017-0076-1
Ensemble data
Summary display
Visual salience
Hurricane forecast
Visualization cognition | CommonCrawl |
Chisanbop
Chisanbop or chisenbop (from Korean chi (ji) finger + sanpŏp (sanbeop) calculation [1] 지산법/指算法), sometimes called Fingermath,[2] is an abacus-like finger counting method used to perform basic mathematical operations. According to The Complete Book of Chisanbop[3] by Hang Young Pai, chisanbop was created in the 1940s in Korea by Sung Jin Pai and revised by his son Hang Young Pai, who brought the system to the United States in 1977.
With the chisanbop method it is possible to display all numbers from 0 to 99 on two hands, and to perform the addition, subtraction, multiplication and division of numbers.[4] The system has been described as being easier to use than a physical abacus for students with visual impairments.[5]
Basic concepts
Each finger (but not the thumb) of the right hand has a value of one. Holding both hands above the table, press the index finger of the right hand onto the table to indicate "one". Press the index and middle fingers for "two", the index, middle, and ring fingers for "three", and all four fingers of the right hand to indicate "four".
The thumb of the right hand indicates the value "five". For "six", press the right thumb and index finger onto the table. Thumb plus one finger indicates "five plus one", and 5+1=6.
The left hand represents the tens digit. It works like the right hand, but each value is multiplied by ten. Each finger on the left hand represents "ten", and the left thumb represents "fifty". In this way, all values between zero and ninety-nine can be indicated on two hands.[6]
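Viewed as an encoding, this is a bi-quinary coded decimal: the thumb carries the value five (or fifty on the left hand) and each finger one (or ten). The short Python sketch below, an illustration rather than part of the original method, makes the mapping explicit:

```python
def chisanbop_hand(digit):
    """Return (thumb_pressed, finger_count) encoding one decimal digit 0-9.

    The thumb is worth 5 and each of the four fingers is worth 1,
    so digit = 5 * thumb_pressed + finger_count.
    """
    if not 0 <= digit <= 9:
        raise ValueError("a single hand encodes only 0-9")
    thumb, fingers = divmod(digit, 5)
    return bool(thumb), fingers

def chisanbop(number):
    """Encode 0-99 on two hands; the left hand holds the tens digit."""
    if not 0 <= number <= 99:
        raise ValueError("two hands encode only 0-99")
    tens, ones = divmod(number, 10)
    return {"left": chisanbop_hand(tens), "right": chisanbop_hand(ones)}

# Example: 73 -> left thumb + 2 fingers (50 + 20), right 3 fingers (3)
print(chisanbop(73))  # {'left': (True, 2), 'right': (False, 3)}
```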
Adoption in the United States
A school in Shawnee Mission, Kansas, ran a pilot program with students in 1979. It was found that although they could add large numbers quickly, they could not add them in their heads. The program was dropped. Grace Burton of the University of North Carolina said, "It doesn't teach the basic number facts, only to count faster. Adding and subtracting quickly are only a small part of mathematics."[7]
See also
• Finger binary
• Bi-quinary coded decimal
References
1. chisanbop. (n.d.). Dictionary.com Unabridged (v 1.1). Retrieved June 29, 2007, from Dictionary.com website: http://dictionary.reference.com/browse/chisanbop
2. Lieberthal, Edwin M (1979). The Complete Book of Fingermath: Simple, Accurate, Scientific. London: Souvenir Press. ISBN 0285624385.
3. Pai, Hang Young (1981). The Complete Book of Chisanbop: Original Finger Calculation Method Created by Sung Jin Pai and Hang Young Pai. Van Nostrand Reinhold. ISBN 0-442-27568-4.
4. Casebeer, William D. (2001). Natural Ethical Facts: Evolution, Connectionism and Moral Cognition. University of California, San Diego. Retrieved 7 November 2019.
5. "Education of the Visually Handicapped". 11–12. Association for Education of the Visually Handicapped. 1979. Retrieved 7 November 2019. {{cite journal}}: Cite journal requires |journal= (help)
6. Pai, Hang Young (1981). The Complete book of Chisanbop, 1981 (pages from 2 to 19) (PDF). Van Nostrand Reinhold. ISBN 0-442-27568-4.
7. "What about Chisanbop -- who's using it?". Christian Science Monitor. 1982-05-17.
Further reading
• Lieberthal, Edwin M. (1979). The Complete Book of Fingermath. New York: McGraw-Hill. ISBN 0-07-037680-8.
External links
• Interactive demonstration of Chisenbop
• Instructable: How to count higher than 10 on your fingers, step 3: Chisenbop
Gestures
Friendly gestures
• Air kiss
• Applause
• Cheek kiss
• Dap
• Elbow bump
• Eskimo kiss
• Finger heart
• Fist bump
• Forehead kiss
• Hand heart
• Handshake
• Hand wave
• Hat tip
• High five
• Hongi
• ILY sign
• Kiss
• Liberian snap handshake
• Namaste
• OK
• Pinky swear
• Pound hug
• Shaka
• Thumb signal
Gestures of respect
• Adab
• Añjali Mudrā
• Bow
• Curtsy
• Gadaw
• Genuflection
• Hand-kiss
• Kowtow
• Kuji-in
• Mano
• Mudra
• Namaste
• Pranāma
• Prostration
• Sampeah
• Sembah
• Schwurhand
• Wai
• Zolgokh
• Ojigi
Salutes
• Bellamy salute
• Nazi salute
• Raised fist
• Roman salute
• Scout sign and salute
• Three-finger salute (Serbian)
• Three-finger salute (pro-democracy)
• Two-finger salute
• Vulcan salute
• Zogist salute
Celebratory gestures
• Applause
• Crossed hands
• Fist pump
• High five
• Low five
• Victory clasp
• V sign
Finger-counting
• Finger binary
• Chinese number gestures
• Chisanbop
Obscene gestures
• Anasyrma
• Bras d'honneur
• Cornuto
• Fig sign
• Middle finger
• Mooning
• Mountza
• Nazi salute
• Reversed V sign
• Shocker
• Thumb/index-finger ring
• Wanker
Taunts
• Akanbe
• Finger
• Loser
• Talk to the hand
Head motions
• Head shake
• Head bobble
• Nod
Other gestures
• Air quotes
• Allergic salute
• Aussie salute
• Awkward turtle
• Che vuoi?
• Crossed fingers
• Distress
• Duterte fist
• Facepalm
• Eyelid pull
• Finger gun
• Gang signal
• Hand-in-waistcoat
• Hand rubbing
• Jazz hands
• Laban sign
• Merkel-Raute
• Pointing
• Pollice verso
• Shrug
• Sign of the cross
• Sign of the horns
Related
• List of gestures
• Articulatory gestures
• Hand signals
• Manual communication
• Mudras
• Nonverbal communication
• Sign language
| Wikipedia |
Identification of Novel SNPs in Bovine Insulin-like Growth Factor Binding Protein-3 (IGFBP3) Gene
Kim, J.Y.;Yoon, D.H.;Park, B.L.;Kim, L.H.;Na, K.J.;Choi, J.G.;Cho, C.Y.;Lee, H.K.;Chung, E.R.;Sang, B.C.;Cheong, I.J.;Oh, S.J.;Shin, Hyoung Doo
https://doi.org/10.5713/ajas.2005.3
The insulin-like growth factors (IGFs), their receptors, and their binding proteins play key roles in regulating cell proliferation and apoptosis. Insulin-like growth factor binding protein-3 (IGFBP3, OMIM #146732) is one of the proteins that bind to the IGFs. IGFBP3 is a modulator of IGF bioactivity, and direct growth inhibitor in the extravascular tissue compartment. We identified twenty-two novel single nucleotide polymorphisms (SNPs) in IGFBP3 gene in Korean cattle (Hanwoo, Bos taurus coreanae) by direct sequencing of full gene including -1,500 bp promoter region. Among the identified SNPs, five common SNPs were screened in 650 Korean cattle; one SNP in promoter (IGFBP3 G-854C), one in 5'UTR region (IGFBP3 G-100A), two in intron 1 (IGFBP3 G+421T, IGFBP3 T+1636A), and one in intron 2 (IGFBP3 C+3863A). The frequencies of each SNP were 0.357 (IGFBP3 G-854C), 0.472 (IGFBP3 G-100A), 0.418 (IGFBP3 G+421T), 0.363 (IGFBP3 T+1636A) and 0.226 (IGFBP3 C+3863A), respectively. Haplotypes and their frequencies were estimated by EM algorithm. Six haplotypes were constructed with five SNPs and linkage disequilibrium coefficients (|D'|) between SNP pairs were also calculated. The information on SNPs and haplotypes in IGFBP3 gene could be useful for genetic studies of this gene.
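For context, the linkage disequilibrium coefficient |D'| reported between SNP pairs is the normalized difference between a haplotype frequency and the product of the corresponding allele frequencies. The sketch below implements only this standard definition; the haplotype frequency shown is invented for illustration and is not one of the Hanwoo estimates:

```python
def d_prime(p_ab, p_a, p_b):
    """Normalized LD coefficient |D'| for two biallelic loci.

    p_ab : frequency of the A-B haplotype
    p_a, p_b : marginal frequencies of alleles A and B
    """
    d = p_ab - p_a * p_b
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return abs(d) / d_max if d_max > 0 else 0.0

# Hypothetical EM-estimated haplotype frequency, with the allele
# frequencies quoted above for IGFBP3 G-854C and G-100A:
print(round(d_prime(p_ab=0.30, p_a=0.357, p_b=0.472), 3))  # 0.698
```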
Effect of Family Size and Genetic Correlation between Purebred and Crossbred Halfsisters on Response in Crossbred and Purebred Chickens under Modified Reciprocal Recurrent Selection
Singh, Neelam;Singh, Raj Pal;Sangwan, Sandeep;Malik, Baljeet Singh
Response in a modified reciprocal recurrent selection scheme for egg production was evaluated considering variable family sizes and genetic correlation between purebred and crossbred half sisters. The criteria of selection of purebred breeders included pullet's own performance, purebred full and half sisters and crossbred half sister's performance. Heritability of egg production of crossbreds (aggregate genotype) and purebred's was assumed to be 0.2 and genetic correlation between purebred and crossbred half sisters ($r_{pc}$) as 0.1, 0.2, 0.3, 0.4, 0.5, 1.0, -0.1, -0.2, -0.3, -0.4, -0.5 and -1.0. Number of dams per sire to produce purebred and crossbred progenies assumed to be 5, 6, 7, 8, while number of purebred female progeny ($N_p$) and crossbred progeny ($N_c$) per dam were considered to be 3, 4, 5 and 6 in each case. Considering phenotypic variance as unity, selection indices were constructed for different combinations of dams and progeny for each value of $r_{pc}$. Following selection index theory, response in crossbred and purebred for egg production was computed. Results indicated that response in crossbreds depended mainly on crossbred family size and also on magnitude of$r_{pc}$ irrespective of its direction, and response was greater with large crossbred family size than the purebred families. Correlated response in purebreds depends both on magnitude and direction of $r_{pc}$ and was expected to be greater with large purebred family size only. Inclusion of purebred information increased the accuracy of selection for crossbred response for higher magnitude of$r_{pc}$ irrespective of its direction. Present results indicate that desirable response in both crossbred and purebred performance is a function of $r_{pc}$ and family sizes. The ratio of crossbred and purebred family sizes can be optimized depending on the objective of improving the performance of crossbreds and/or of purebreds.
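The response calculations referred to here follow standard selection-index theory, in which the expected genetic response is the product of selection intensity, index accuracy and the genetic standard deviation, R = i·r·σ_A. The sketch below shows only this textbook identity; the numerical inputs are invented and are not the paper's parameters:

```python
import math

def expected_response(intensity, accuracy, genetic_sd):
    """Expected response to index selection: R = i * r * sigma_A."""
    return intensity * accuracy * genetic_sd

# Hypothetical inputs: intensity i = 1.4, index accuracy r = 0.5, and
# sigma_A = sqrt(h^2 * Vp) = sqrt(0.2) with phenotypic variance 1.
print(round(expected_response(1.4, 0.5, math.sqrt(0.2)), 3))  # 0.313
```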
Genetic Variation of H-FABP Gene and Association with Intramuscular Fat Content in Laiwu Black and Four Western Pig Breeds
Zeng, Y.Q.;Wang, G.L.;Wang, C.F.;Wei, S.D.;Wu, Y.;Wang, L.Y.;Wang, H.;Yang, H.L.
https://doi.org/10.5713/ajas.2005.13
This study was performed to detect genetic variation of the heart fatty acid-binding protein (H-FABP) gene by PCRRFLPs approach and its association with intramuscular fat (IMF) content. Data from 223 individuals, including one Chinese native pig breed and four western pig breeds, were analyzed. The results showed that for the H-FABP gene, there was one polymorphic HinfI site in the 5'-upstream region, whereas there were one HaeIII and one HinfI (marked as $HinfI^*$) polymorphic site in the second intron, respectively. The three PCR-RFLPs were present in all breeds tested. The allele frequencies, however, revealed significant differences between them (p<0.05). Furthermore, the allele frequency distribution of HinfI in the Laiwu Black and that of $HinfI^*$ in the Hampshire breed were at disequilibrium, which might be the result of selective breeding. Results also indicated that for HinfI, HaeIII and $HinfI^*$ HFABP RFLP, significant (p<0.05) contrasts of 0.78%, -0.69% and 0.72% were detected in the least square means of IMF content between the homozygous genotype HH and hh, DD and dd, BB and bb classes, respectively. It implied that the HHddBB genotype had the highest IMF content in this experimental population and these H-FABP RFLPs could serve, to some extent, as genetic markers for use in improvement of IMF content.
Effects of Melatonin on Gene Expression of IVM/IVF Porcine Embryos
Jang, H.Y.;Kong, H.S.;Choi, K.D.;Jeon, G.J.;Yang, B.K.;Lee, C.K.;Lee, H.K.
The effect of melatonin on in vitro embryo development and the expression of antioxidant enzyme genes in preimplantation porcine embryos was determined by modified semi-quantitative single-cell RT-PCR. Porcine embryos derived from in vitro maturation/in vitro fertilization were cultured in 5% $CO_2$ and 20% $O_2$ at $37^{\circ}C$ in NCSU23 medium. Melatonin was added to the medium at concentrations of 1 nM, 5 nM, and 10 nM. When treated with 1 nM of melatonin, the developmental rate of embryos beyond the morula stage (39.0%) was higher than that of the control group (31.0%) (p<0.05). The numbers of inner cell mass and trophectoderm cells in the control (23.0${\pm}$0.5 and 17.3${\pm}$0.8), 1 nM (23.6${\pm}$0.6 and 19.0${\pm}$0.5), and 5 nM (23.3${\pm}$1.1 and 16.3${\pm}$0.8) melatonin groups were higher than in the 10 nM group (20.0${\pm}$0.5 and 13.3${\pm}$0.8) (p<0.05). To develop an mRNA phenotypic map for the expression of catalase, bax and caspase-3, single-cell RT-PCR analysis was carried out in porcine IVM/IVF embryos. Catalase was detected in the 0, 1 and 5 nM melatonin groups, whereas bax and caspase-3 were detected in the 10 nM group.
Effects of Pelleted Sugarcane Tops on Voluntary Feed Intake, Digestibility and Rumen Fermentation in Beef Cattle
Yuangklang, Chalermpon;Wanapat, M.;Wachirapakorn, C.
Four male crossbred beef steers about 2 years old were used in a 4${\times}$4 Latin square design to investigate the effect of pelleted sugarcane tops on voluntary feed intake, rumen fermentation and digestibility of nutrients. Experimental treatments were: Control (dried-chopped sugarcane tops (DCST)); PS1 (pelleted sugarcane tops of 1 cm diameter); PS2 (pelleted sugarcane tops of 2 cm diameter); and PS3 (pelleted sugarcane tops of 3 cm diameter). Roughage intake and total dry matter intake were 1.59, 1.62, 1.61, 1.63% BW and 2.09, 2.12, 2.11 and 2.13% BW in the control, PS1, PS2 and PS3 treatments, respectively (p<0.05). Digestibility of DM, OM and CP was similar in the control and PS3 treatments, but differed significantly (p<0.05) between the control and the PS1 and PS2 treatments. Digestibility of neutral detergent fiber (NDF) and acid detergent fiber (ADF) was 52.89, 50.01, 50.05 and 50.56% and 41.91, 39.96, 39.91 and 39.69% in control, PS1, PS2 and PS3, respectively (p<0.05). Total volatile fatty acid concentration in rumen contents was 67.68, 65.93, 66.15 and 66.67 mM in control, PS1, PS2 and PS3, respectively (p<0.05). Although the concentrations of acetate and butyrate (%) differed significantly (p<0.05), the concentration of propionate (%) was not affected by treatment (p>0.05). Rumen pH, ammonia nitrogen and plasma urea nitrogen were significantly different (p<0.05) among treatments. From this experiment, it was found that dried-chopped sugarcane tops increased digestibility of nutrients whereas pelleted sugarcane tops increased feed intake in beef cattle. However, pelleted sugarcane tops of 3 cm diameter gave results similar to DCST in digestibility and rumen parameters. Therefore, it could be concluded that pelleting sugarcane tops is an alternative way to improve the quality of sugarcane tops for use as a ruminant roughage source.
Effect of Microwave Treatment on Chemical Composition and In sacco Digestibility of Wheat Straw in Yak Cow
Dong, Shikui;Long, Ruijun;Zhang, Degang;Hu, Zizhi;Pu, Xiaopeng
Wheat straw was treated with microwave for 4 min and 8 min at a power of 750 W and frequency of 2,450 MHz. Chemical compositions of untreated, 4 min treated and 8 min treated straws were analyzed and in sacco degradabilities of all these straws in yak rumens were measured. Microwave treatment didn't significantly (p>0.05) affect the chemical composition of the straw. In sacco dry matter (DM) degradability of the straw after 18 h incubation in rumen was significantly (p<0.01) improved by microwave treatment. In sacco crude protein (CP) degradability of the straw was not (p>0.05) affected by microwave treatment. In sacco organic matter (OM) degradability of the straw was increased (p<0.01) by around 20% for both the 4 min and 8 min microwave treatment, that of acid detergent fibre (ADF) was increased (p<0.01) by 61.6% and 62.8%, and that of ash free ADF was enhanced by 72.1% and 69.6% for the 4 min and 8 min microwave treatment respectively. No significant difference was observed between the 4 min and 8 min microwave treatment on the degradability of DM, OM, CP, ADF and ash-free ADF of the straw.
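In sacco degradability figures such as these are disappearance fractions: the share of the incubated material that has left the nylon bag by the sampling time. A minimal sketch of the calculation, with invented weights rather than data from this trial:

```python
def in_sacco_degradability(initial_g, residue_g):
    """Percent disappearance of a nutrient after rumen incubation."""
    return 100.0 * (initial_g - residue_g) / initial_g

# Hypothetical: 5.00 g DM incubated in the bag, 2.40 g recovered at 18 h
print(round(in_sacco_degradability(5.00, 2.40), 1))  # 52.0
```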
Effect of Salt Level in Water on Feed Intake and Growth Rate of Red and Fallow Weaner Deer
Ru, Y.J.;Glatz, P.C.;Bao, Y.M.
Under a typical Mediterranean environment in southern Australia, the evaporation rate increases significantly in hot summers, resulting in highly saline drinking water for grazing animals. Also in the cropping areas, dryland salinity is a problem. Grazing animals under these environments can ingest excessive amount of salt from feed, drinking water and soil, which can lead to a reduction in growth rate. To understand the impact of high salt intake on grazing deer, two experiments were conducted to assess the effect of salt levels in drinking water on feed intake and growth rate of red and fallow weaner deer. The results revealed that fallow deer did not show any abnormal behaviour or sickness when salt level in drinking water was increased from 0% to 2.5%. Feed intake was not affected until the salt content in water exceeded 1.5%. Body weight gain was not affected by 1.2% salt in drinking water, but was reduced as salt content in water increased. Compared with deer on fresh water, the feed intake of red deer on saline water was 11-13% lower when salt level in drinking water was 0.4-0.8%. An increase in salt level in water up to 1% resulted in about a 30% reduction in feed intake (p<0.01). Body weight gain was significantly (p=0.004) reduced when salt level reached 1.2%. The deer on 1% salt tended to have a higher (p=0.052) osmotic pressure in serum. The concentration of P, K, Mg and S in serum was affected when salt level in water was over 1.0%. The results suggested that the salt level in drinking water should be lower than 1.2% for fallow weaner deer and 0.8% for red weaner deer to avoid any reduction in feed intake. Deer farmers need to regularly test the salt levels in drinking water on their farms to ensure that the salt intake of grazing deer is not over the levels that deer can tolerate.
Effect of Niacin Supplementation on Rumen Metabolites in Murrah Buffaloes (Bubalus bubalis)
Kumar, Ravindra;Dass, R.S.
An experiment was conducted on 3 male rumen fistulated adult buffaloes fed on wheaten straw and concentrate mixture in a Latin square design to study the impact of niacin supplementation on rumen metabolites. Three animals were fed wheaten straw+concentrate mixture (group I, control), wheaten straw+concentrate mixture+100 ppm niacin (group II), and wheaten straw +concentrate mixture+200 ppm niacin (group III). After 21 days feeding, rumen liquor was drawn for 3 consecutive days at different time intervals (0, 2, 4, 6 and 8 h) to study the various rumen metabolites i.e., rumen pH, ammonia-N, total-N, trichloroacetic acid precipitable-N, non-protein nitrogen, total volatile fatty acids, their fractions and number of protozoa. Mean pH values in strained rumen liquor (SRL) of animals in 3 groups were 6.64, 6.71 and 6.67, indicating no statistically significant difference. Results revealed a significant (p<0.01) increase in TVFA concentration among the supplemented groups (group II and III) in comparison to control group. Mean TVFA concentration (meq/dl) was 9.75, 10.97 and 11.44 in 3 groups respectively. The highest concentration of TVFA was observed at 4 h and minimum at 0 h in all the 3 groups. The percentage of acetic, propionic, butyric and isobutyric acid was statistically similar among the three groups. The mean ammonia-N concentration (mg/dl SRL) was significantly (p<0.01) lower in group II (16.38) and group III (15.42) than group I (18.14). Ammonia-N concentration was higher (p<0.01) at 4 h as compared to all the time intervals. The mean total-N concentration (mg/dl SRL) was higher (p<0.01) in group II (74.16) and group III (75.47) as compared to group I (62.04). Total-N concentration was higher (p<0.01) at 4 h as compared to other time intervals and lowest value was recorded at 0 h.Concentration of TCA-ppt-N (mg/dl SRL) was significantly (p<0.01) lower in control group as compared to niacin supplemented groups. Mean value of NPN (mg/dl SRL) was significantly (p<0.01) lower in group III (23.21) as compared to group I (25.71), whereas groups I and II, and groups II and III were similar to each other. Total protozoa number (${\times}10^4$/ml SRL) ranged from 18.06 to 27.41 in group I, 20.89 to 38.44 in group II and 27.61 to 39.45 in group III. The mean protozoa number was significantly (p<0.01) higher in SRL of group II (27.60) and III (30.59) as compared to group I (22.48). It can be concluded from the study that supplementation of niacin in the diet of buffaloes had improved the rumen fermentation by decreasing the concentration of ammonia-N and increasing protein synthesis.
Can Moringa oleifera Be Used as a Protein Supplement for Ruminants?
Kakengi, A.M.V.;Shem, M.N.;Sarwatt, S.V.;Fujihara, T.
The possibility of using Moringa oleifera as a ruminant protein supplement was investigated by comparing the nutritive and anti-nutritive value of its different morphological parts with that of conventionally used Leucaena leucocephala leaf meal (LL). Parameters determined were chemical composition, rumen degradable protein (RDP), acid detergent insoluble protein (ADIP), pepsin soluble protein (PESP), non-protein nitrogen (NPN), total soluble protein (TSP) and protein potentially digested in the intestine (PDI). Total phenols (TP) and total extractable tannins (TET) were also evaluated as anti-nutritive factors. In vitro gas production characteristics were measured and organic matter digestibility (OMD) was estimated based on 24 h gas production. Crude protein content ranged from 265-308 g/kg DM in M. oleifera leaves (MOL) and seed cake (MOC) respectively. Leucaena leucocephala and Moringa oleifera soft twigs and leaves (MOLSTL) had CP contents of 236 and 195 g/kg DM, while Moringa oleifera soft twigs alone (MOST) and Moringa oleifera bark (MOB) had 160, 114 and 69.3 g/kg DM respectively. RDP was highest in MOC (181 g/kg DM) followed by MOL (177 g/kg DM) and was lowest in MOB (40 g/kg DM). The proportion of the protein that was not available to the animal (ADIP) was higher (p<0.05) in MOL and MOC (72 and 73 g/kg DM respectively) and lowest in LL (29 g/kg DM). The PDI was high in LL (74 g/kg DM) followed by MOC (55 g/kg DM) then MOL (16 g/kg DM). PESP was highest (p<0.05) in MOC followed by MOL then LL (273, 200 and 163 g/kg DM respectively). MOC exhibited the highest NPN content (116 g/kg DM) and MOB the lowest (18 g/kg DM) (p<0.05). Significantly (p<0.05) higher TSP was observed in MOC and MOL (308 and 265 g/kg DM respectively) followed by LL (236 g/kg DM). MOL had negligible TET (20 g/kg DM) when compared with about 70 g/kg DM in LL. Significantly (p<0.05) higher b and a+b values were observed for MOLSTL (602 and 691 g/kg DM respectively) followed by MOL (490 and 538 g/kg DM). The highest c value was observed in MOLSTL followed by MOC and MOL (0.064, 0.056 and 0.053 rate/hour respectively). OMD was highest (p<0.05) for MOLSTL followed by MOC and then MOL (579, 579 and 562 g/kg DM respectively). LL exhibited lower (p<0.05) OMD (467 g/kg DM). It was concluded from this study that the high crude protein content in MOL and MOLSTL could be well utilized by ruminant animals and increase animal performance; however, the high proportion of protein unavailable to the lower gut of animals and the high rumen degradable protein due to negligible tannin content render it a relatively poor protein supplement for ruminants. MOC can be the best alternative protein supplement to leaves and soft twigs for ruminants.
Macro- and Micro-nutrient Utilization and Milk Production in Crossbred Dairy Cows Fed Finger Millet (Eleucine coracana) and Rice (Oryza sativa) Straw as Dry Roughage Source
Gowda, N.K.S.;Prasad, C.S.
Finger millet straw and rice straw are the major source of dry roughage in southern India. They distinctly vary in their morphological and nutritional characters. Hence an effort was made to study the nutrient utilization, milk yield and composition in crossbred dairy cows fed either finger millet (group 1) or rice straw (group 2) as a source of dry roughage. The cows in both the groups were fed as per requirement with concentrate, green fodder and straw in the ratio of 30:45:25 parts (DM). At the end of 50 days of preliminary feeding a digestibility trial was conducted for 7 days and pooled samples of feed, fodder, feces, urine and milk were analysed for macro and micro nutrient content. Finger millet straw contained more CP, Ca, P, Mg, Cu, Zn and Co than rice straw and rice straw contained higher ADF, ash and silica. The intake of DM, CP, EE, NDF, ADF and most micronutrients (Ca, P, Mg, Cu, Zn, Fe, Mn and Co) was significantly higher in cows fed finger millet straw. The digestibility of DM, CP, NDF and ADF was significantly higher in cows fed finger millet straw and the gut absorption of Ca, Cu, Mn and Co was significantly higher in cows fed finger millet straw. The dietary requirement of all micronutrients in both the group of cows could be met irrespective of the type of roughage fed except that of Ca, which was low (0.61 and 0.40%) in rice straw fed cows. The average daily milk yield (L/cow) was also higher (7.0 L) in cows fed finger millet straw as compared to cows fed rice straw (6.3 L). The average milk composition also did not differ except that of milk fat which was significantly (4.7 and 4.5%) low in cows fed rice straw. The overall results of this study have indicated that finger millet straw is a better source of dry fodder than rice straw and while feeding rice straw as the sole roughage to dairy cows there is need to supplement additional calcium as this could be one of the limiting nutrients for milk production.
Chemical Composition, Degradation Characteristics and Effect of Tannin on Digestibility of Some Browse Species from Kenya Harvested during the Wet Season
Osuga, I.M.;Abdulrazak, S.A.;Ichinohe, T.;Fujihara, T.
A study was conducted with the objective of evaluating the nutritive value of some browse species from Kenya. The species evaluated included: Bauhinia alba, Bauhinia variegata, Bridelia micrantha, Calliandra calothyrsus, Carisa edulis, Cratylia argentea, Gliricidia sepium, Lantana camara, Maerua angolensis, Sesbania micrantha and S. sesban. The browses were evaluated by their chemical composition including phenolics, in vitro gas production and tannin activity (tannin bioassay). All the species had high crude protein content (149-268 g/kg DM) and low NDF content (239-549 g/kg DM). The feeds had varying contents of total extractable tannins (TET) ranging from low (3-22 mg/g DM), moderate (42-58 mg/g DM) and high (77-152 mg/g DM). Calliandra calothyrsus had the highest tannin content. Significant (p<0.05) variation in gas production was recorded among the species. Sesbania micrantha had the highest (p<0.05) potential gas production while Gliricidia sepium had the highest (p<0.05) rate of gas production. Use of polyethylene glycol (PEG 6000), to assess the adverse affect of tannins, indicated that tannins in browse species with high tannin content had inhibitory effects on rumen microbial fermentation as indicated by the gas production. Estimated organic matter digestibility and metabolizable energy also increased with PEG addition. The results of this study indicate that such Kenyan browse species have the potential to be used as feed supplements for ruminant animals.
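The b, a+b and c values quoted here are presumably the parameters of the exponential gas-production model p(t) = a + b(1 - exp(-c·t)) that is standard in this literature; that reading is an assumption, and the sketch below evaluates the model with invented placeholder parameters rather than the study's fitted values:

```python
import math

def gas_production(t, a, b, c):
    """Exponential gas-production curve: p(t) = a + b * (1 - exp(-c * t)).

    a : gas from the immediately soluble fraction (ml)
    b : gas from the insoluble but fermentable fraction (ml)
    c : fractional rate of gas production (per hour)
    """
    return a + b * (1.0 - math.exp(-c * t))

# Placeholder parameters (not fitted values from this study):
for t in (6, 12, 24, 48):
    print(t, round(gas_production(t, a=3.0, b=45.0, c=0.06), 1))
```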
Nutritional Requirements of Actinomyces Isolated from Rumen of Goat
Park, Ki Moon;Shin, Hyung Tai;Kang, Kook Hee;Lee, Jae Heung
The objective of this work was to investigate the nutritional requirements for the growth of Actinomyces sp. 9RCC5 isolated from the rumen of a native goat in Korea. The growth of strain 9RCC5 on the basal medium or the medium minus certain ingredients from the basal medium demonstrated that strain 9RCC5 showed absolute requirement of vitamin B complex mixture, while hemin and volatile fatty acids (VFA) were stimulatory to growth to some extent. The 9RCC5 strain grew well with casein hydrolysate as the sole added nitrogen source. However, neither a complex of 18 amino acids nor ammonium sulfate effectively replaced casein hydrolysate. Vitamins such as riboflavin and pantothenate were essential for growth, while thiamin and biotin were stimulatory. With regard to VFA, the growth was stimulated by acetic acid but inhibited by valeric acid. Relatively large quantities of $Na^+$, $K^+$ and $Ca^{2+}$ were absolutely required for growth. Supplementation of clarified rumen fluid to the basal medium in a range of 0-10% (vol/vol) resulted in an increased rate of growth as well as an increased extent of growth.
Properties of Aspergillar Xylanase and the Effects of Xylanase Supplementation in Wheat-based Diets on Growth Performance and the Blood Biochemical Values in Broilers
Wu, Yubo;Lai, Changhua;Qiao, Shiyan;Gong, Limin;Lu, Wenqing;Li, Defa
Three experiments were conducted to study the property of xylanase and the effects of xylanase in wheat-based diets on growth performance of broilers, respectively. Experiment 1 was performed in vitro to evaluate the effect of different pH and temperature on xylanase activity, and to evaluate the enzymic stability under different conditions. The results indicated that the optimum temperature and pH for xylanase activity were $50^{\circ}C$ and 4.5, respectively. The activity of enzyme solution was reduced rapidly after the treatment of water bath above $60^{\circ}C$ for 10 min. The enzyme was relatively stable at pH 3.5 to 8.0 and deteriorated when incubated at pH below 3.5. In Experiment 2, a total of 378 d-old male Arbor Acres broilers were randomly distributed to 7 different treatments with 6 replicates (9 birds) in each treatment. The treatments were as follows: (1) corn based diet (CS), (2) wheat based diet (WS), (3) WS+ 0.05% xylanase, (4) WS+0.15% xylanase, (5) WS+0.25% xylanase, (6) WS+0.35% xylanase, (7) WS+0.45% xylanase. The results showed that the body weight and feed/gain ratio of the broilers fed wheat-based diets have been significantly improved (p<0.05) compared to that fed corn-based diet in the first 3 wk. With regard to the wheat-based diets, the xylanase supplementation had a tendency to improve the growth performance in first 3 wk. After 3 wk, no significant difference (p>0.05) was found among all these different treatments. The supplementation of xylanase and the type of diets did not affect the feed intake but increased the concentration of triglyceride in serum. In Experiment 3, a total of 360 d-old male Arbor Acres broilers were assigned to 30 groups with 12 birds in each group randomly. These groups were then randomly distributed to 5 different treatments with 6 replicates within each treatment. The broilers of each treatment were fed one of the diets as follows: (1) Corn based diet, (2) White wheat based diet (WW) (3) White wheat based diet+0.25% xylanase, (4) Red wheat based diet, (5) Red wheat based diet+0.25% xylanase. The results showed that the body weight and feed/gain ratio had been significantly improved (p<0.05) by xylanase supplementation in the first 2 or 3 wk. The effect of xylanase in red wheat diet is a little higher than that used in white wheat diet. From the results of the present experiments, it can be concluded that the supplementation of Aspergillar xylanase can improve the performance of the broilers fed the wheat-based diet.
The Effect of Spray-dried Porcine Plasma and Tryptophan on Feed Intake and Performance of Weaning Piglets
Hsia, Liang Chou
There were three trials involved in this experiment. All piglets in Trial 1 were randomly distributed into the following 4 treatments. Treatment 1. Corn-soybean diet with 5% SDPP. The tryptophan level was 0.237%. Treatment 2. Corn-soybean diet with 10% meat and bone meal. The tryptophan level was 0.177%. Treatment 3. Treatment 1+0.0662% synthetic tryptophan. The total tryptophan level was 0.303. Treatment 4. Treatment 2+0.0662% synthetic tryptophan. The total tryptophan level was 0.236. Piglets in Trial 2 were distributed randomly into the following 4 treatments. Treatment 1: corn-soybean diet+10% meat and bone meal. The total tryptophan level was 0.176%. Treatment 2: corn-soybean diet+10% meat and bone meal+5% SDPP. The total tryptophan level was 0.180%. Treatment 3: Treatment 1 diet+0.004% synthetic tryptophan. The total tryptophan level was 0.180%. Treatment 4: Treatment 1 diet+0.631% synthetic tryptophan. The total tryptophan level was 0.237%. There were 4 treatments in Trial 3. Treatment 1: cornsoybean diet+10% meat and bone meal. The total tryptophan level was 0.176%. Treatment 2: Treatment 1 diet+0.061% synthetic tryptophan. The total tryptophan level was 0.237%. Treatment 3: Treatment 2 diet+0.061% synthetic tryptophan. The total tryptophan level was 0.298%. Treatment 4: corn-soybean diet+10% meat and bone meal+5% SDPP. The total tryptophan level was 0.180%. The results of Trial 1 showed that the piglets ate significantly more (p<0.05) when feed included SDPP in the diet during the first 2 weeks. The feed intake also increased when synthetic tryptophan was added in the 5% meat and bone meal diet; however, the difference did not reach a significant level (p>0.05) during the first 2 weeks. Three weeks onwards the feed intake of 5% meat and bone meal treatment was significantly lower (p<0.05) than for the other three treatments. The results of Trial 2 showed that the feed intake could be significantly improved only when the total tryptophan level reached 0.237%. Piglets in the 5% SDPP treatment had higher feed intake than piglets in 10% meat and bone meal treatment with 0.180% of tryptophan, but did not reach a significant level (p<0.05). Body weight gain also had the same trend as feed intake. The pigs in Treatment 1, the lowest total level of tryptophan treatment (0.176%), had lowest feed intake and weight gain, but the difference did not reach a significant level (p>0.05). The pigs in Treatment 1 of Trial 3 had the lowest feed intake and weight gain (p>0.05). Treatment 2 (0.237%) had the highest average feed intake from Week 1 to Week 5; the second best result was recorded in Treatment 4. As for the weight gain of the piglets in Treatment 4 (5% SDPP), they had a higher average weight during the first 3 weeks. The feed efficiency was better for Treatment 4 (5% SDPP) during the first 2 weeks. The results of these trials showed that both SDPP and tryptophan had a trend to improve the feed intake and weight gain.
Utilization of Graded Levels of Finger Millet (Eleusine coracana) in Place of Yellow Maize in Commercial Broiler Chicken Diets
Rama Rao, S.V.;Raju, M.V.L.N.;Reddy, M.R.;Panda, A.K.
An experiment was conducted to study the performance, carcass traits, serum lipid profile and immune competence in commercial broilers (2 to 42 d of age) fed graded levels (25, 50, 75 and 100%) of finger millet (FM) (Elusine coracana) in place (w/w) of yellow maize (YM). Each diet was fed to eight replicates (five female Vencobb broilers/replicate) housed in stainless steel battery brooders. The estimated metabolizable energy content of FM was about 540 kcal less than the YM. FM contained more protein (10.42 vs. 9.05%) and fibre (9.52 vs. 2.24%) compared to YM. Body weight gain, ready to cook yield, relative weights of giblet, liver, intestine and length of intestine at 42 d of age was not affected due to replacing YM with FM. But, the feed efficiency decreased in broilers fed diets containing 75 and 100% FM in place of YM at both 21 and 42 d of age. The amount of fat deposited in abdominal area decreased and the relative weight of gizzard increased with increase in level of FM in the diet. The serum HDL cholesterol at 21 and 42 d of age and serum triglycerides at 42 d of age decreased with increase in level of FM in diet. The relative weight of spleen and antibody titers against sheep red blood cells (SRBC) at 5 d post inoculation (PI) decreased in broilers fed FM at 100% of YM. However, the relative weight of bursa, SRBC titers at 10 d PI, antibody titers against ND virus and mortality were not affected due to incorporation of FM in place of YM in diet. The fat content in thigh muscle and liver decreased, while the protein content in these tissues increased with increase in the level of FM in broiler diet. Based on the results, it may be concluded that YM can be replaced with FM up to 25% on weight basis without affecting weight gain, carcass yields and immunity in commercial broiler diet (up to 42 d of age). Further, inclusion of finger millet reduced the fat deposition in thigh muscle, liver and in abdominal area compared to those fed maize as the principal source of energy.
Effects of Green Tea Polyphenols and Fructo-oligosaccharides in Semi-purified Diets on Broilers' Performance and Caecal Microflora and Their Metabolites
Cao, B.H.;Karasawa, Y.;Guo, Y.M.
This study was conducted to examine the effects of green tea polyphenol (GTP) and fructo-oligosaccharide (FOS) supplementation on performance, caecal microflora counts and their metabolite production. In female broiler chickens fed on semi-purified diets from 28 to 42 d of age, dietary green tea polyphenols (GTP) and fructo-oligosaccharides (FOS) significantly reduced mortality (p<0.05). Dietary GTP significantly decreased the total count of caecal microflora, each colonic population count and the content of caecal flora metabolites when compared to the other groups (p<0.05). Dietary FOS did not influence the total count of caecal flora, but it selectively increased Bifidobacteria and Eubacteria counts (p<0.05) and decreased the counts of other microflora and the concentrations of caecal phenols and indole (p<0.05). These results suggest that GTP and FOS in semi-purified diets can decrease mortality and change the caecal colonic flora population; however, GTP shows antibiotic-like effects, non-selectively decreasing all colonic flora and hence their metabolites, whereas FOS acts selectively by increasing beneficial microflora and decreasing the production of caecal microflora metabolites other than volatile fatty acids.
Identification and Characterization of Hydrogen Peroxide-generating Lactobacillus fermentum CS12-1
Kang, Dae-Kyung;Oh, H.K.;Ham, J.-S.;Kim, J.G.;Yoon, C.H.;Ahn, Y.T.;Kim, H.U.
Lactic acid bacteria were isolated from silage, which produce high level of hydrogen peroxide in cell culture supernatant. The 16S rDNA sequences of the isolate matched perfectly with that of Lactobacillus fermentum (99.9%), examined by a 16S rDNA gene sequence analysis and similarity search using the GenBank database, thus named L. fermentum CS12-1. L. fermentum CS12-1 showed resistance to low pH and bile acid. The production of hydrogen peroxide by L. fermentum CS12-1 was confirmed by catalase treatment and high-performance liquid chromatography. L. fermentum CS12-1 accumulated hydrogen peroxide in culture broth as cells grew, and the highest concentration of hydrogen peroxide reached 3.5 mM at the late stationary growth phase. The cell-free supernatant of L. fermentum CS12-1 both before and after neutralization inhibited the growth of enterotoxigenic Escherichia coli K88 that causes diarrhea in piglets.
The Effect of Wet Pad and Forced Ventilation House on the Reproductive Performance of Boar
Chiang, S.H.;Hsia, L.C.
There were two trials involved in the experiment. Trial 1: the trial was conducted on two Taiwan Sugar Corporation (TSC) pig farms. One was located in the north of Taiwan and the other was located in the south. Both farms had wet pad and forced ventilation (WPFV) and conventional open design (COD) boar and sow houses. There were 12 Duroc boars, with ages ranging from 12-24 months. Half of them (6 boars) were raised in a WPFV pig house, and the other half were kept in a COD house. Semen was collected at 5-day intervals from May $1^{st}$ to the end of October. Sixteen sows (2-8 parity) were served by artificial insemination each week from the beginning of May to the end of Oct. These sows were checked for heat from 18 days to 25 days after insemination. Trial 2: there were four WPFV boar houses involved in the test. Two houses were located in the north of Taiwan, and the other two houses were located in the south. The test was conducted from January 2000 to December 2001. The total number of sows serviced by WPFV-housed boars was 35,105 head, and by COD-housed boars 103,065 head. The results showed that the total semen volume, density of sperm, total sperm per ejaculate, sperm motility and morphological abnormality were significantly better (p<0.01) for boars raised in the WPFV house than for the COD houses. Average sperm motility in June and July was lower than for the other months. Morphological abnormality was higher during May, June and July. Although the results did not reach a significant level, the average value showed that the total volume of boar semen was higher in the north than in the south. The total semen volume production of boars raised in the WPFV house was higher than for boars raised in the COD house, reaching a significant level only in summer. Boars kept in the WPFV house had a higher total sperm number than boars kept in the COD house, reaching a significant level in spring (p<0.05), summer (p<0.01), and fall (p<0.05) but not in winter (p>0.05). Boars raised in the WPFV house had significantly higher sperm motility than boars in the COD house during spring (p<0.001), summer (p<0.001), fall (p<0.01) and winter (p<0.05). The average farrowing rate and piglets born alive were higher for boars in the WPFV house than for boars in the COD house, but neither reached a significant level (p>0.05). The present experiment shows that the WPFV house can improve the reproductive performance of boars.
Fermentation for Liquid-type Yogurt with Lactobacillus casei 911LC
Ko, I.H.;Wang, M.K.;Jeon, B.J.;Kwak, H.S.
https://doi.org/10.5713/ajas.2005.102
This study was carried out to determine the attributes of liquid-type yogurt made with Lactobacillus casei 911LC during 72 h of fermentation at $37^{\circ}C$. The pH decreased up to 32 h and plateaued thereafter, and the titratable acidity increased up to 40 h. The lactic acid bacteria increased sharply to $3.5{\times}10^7$ cfu/ml at 40 h of fermentation and slowly decreased thereafter. The free amino acids produced during fermentation reached their maximum value at 44 h and gradually decreased thereafter. Bitterness sensory scores were highest at 44 h of fermentation. In the electrophoresis results, the bands had mostly disappeared by 72 h of fermentation. The present data showed that the optimum fermentation time for liquid-type yogurt using Lactobacillus casei 911LC ranged from 40 to 44 h.
Augmentation of Thermotolerance in Primary Skin Fibroblasts from a Transgenic Pig Overexpressing the Porcine HSP70.2
Chen, Ming-Yu;Tu, Ching-Fu;Huang, San-Yuan;Lin, Jyh-Hung;Tzang, Bor-Show;Hseu, Tzong-Hsiung;Lee, Wen-Chuan
A high environmental temperature affects the economic performance of pigs. Heat shock protein 70 (HSP70) has been reported to participate importantly in thermotolerance. This study aims to produce transgenic pigs overexpressing porcine HSP70.2, the highly inducible one of HSP70 members, and to prove the cellular thermotolerance in the primary fibroblasts from the transgenics. A recombinant plasmid in which the sequence that encodes the porcine HSP70.2 gene is fused to green fluorescence protein (GFP) was constructed under the control of cytomegalovirus (CMV) enhancer and promoter. Two transgenic pigs were produced by microinjecting pCMV-HSP70-GFP DNA into the pronucleus of fertilized eggs. Immunoblot assay revealed the varied overexpression level (6.4% and 1.4%) of HSP70-GFP in transgenic pigs. After heating at $45^{\circ}C$ for 3 h, the survival rate (78.1%) of the primary fibroblast cells from the highly expressing transgenic pig exceeded that from the non-transgenic pig (62.9%). This result showed that primary fibroblasts overexpressing HSP70-GFP confer cell thermotolerance. We suggest that transgenic pigs overexpressing HSP70 might improve their thermotolerance in summer and therefore reduce the economic loss in animal production.
Free-range Poultry Production - A Review
Miao, Z.H.;Glatz, P.C.;Ru, Y.J.
With the demand for free-range products increasing and the pressure on the intensive poultry industry to improve poultry welfare especially in western countries, the number of free-range poultry farms has increased significantly. The USA, Australia and European countries have developed Codes of Practice for free-range poultry farming which detail the minimum standards of husbandry and welfare for birds. However, the performance and liveability of free-range birds needs to be improved and more knowledge is required on bird husbandry, feed supply, disease control and heat wave management. This review examines the husbandry, welfare, nutrition and disease issues associated with free-range poultry systems and discusses the potential of incorporating free-range poultry into a crop-pasture rotation system.
Effect of Orally Administered Branched-chain Amino Acids on Protein Synthesis and Degradation in Rat Skeletal Muscle
Yoshizawa, Fumiaki;Nagasawa, Takashi;Sugahara, Kunio
Although amino acids are substrates for the synthesis of proteins and nitrogen-containing compounds, it has become more and more clear over the years that these nutrients are also extremely important as regulators of body protein turnover. The branched-chain amino acids (BCAAs) together, or simply leucine alone, stimulate protein synthesis and inhibit protein breakdown in skeletal muscle. However, it was only recently that the mechanism(s) involved in the regulation of protein turnover by BCAAs began to be defined. The acceleration of protein synthesis by these amino acids seems to occur at the level of peptide chain initiation. Oral administration of leucine to food-deprived rats enhances muscle protein synthesis, in part, through activation of the mRNA binding step of translation initiation. Despite our knowledge of the induction of protein synthesis by BCAAs, there are few studies on the suppression of protein degradation. The recent finding that oral administration of leucine rapidly reduced $N^{\tau}$-methylhistidine (3-methylhistidine; MeHis) release from isolated muscle, an index of myofibrillar protein degradation, indicates that leucine suppresses myofibrillar protein degradation. The details of the molecular mechanism by which leucine inhibits proteolysis are only beginning to be elucidated. The purpose of this report was to review the current understanding of how BCAAs act as regulators of protein turnover.
\begin{document}
\title[]{Geodesic bi-angles and Fourier coefficients of restrictions of eigenfunctions} \author{Emmett L. Wyman} \address{Department of Mathematics, Northwestern University, Chicago IL} \email{[email protected]}
\author{Yakun Xi} \address{School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, PR China} \email{[email protected]}
\author{Steve Zelditch} \address{Department of Mathematics, Northwestern University, Chicago IL} \email{[email protected]}
\maketitle
\setcounter{tocdepth}{1}
\begin{abstract} This article concerns joint asymptotics of Fourier coefficients of restrictions of Laplace eigenfunctions $\varphi_j$ of a compact Riemannian manifold to a submanifold $H \subset M$. We fix a number $c \in (0,1)$ and study the asymptotics of the thin sums, $$ N^{c} _{\epsilon, H }(\lambda): =
\sum_{j, \lambda_j \leq \lambda} \sum_{k: |\mu_k - c \lambda_j | < \epsilon} \left| \int_{H} \varphi_j \overline{\psi_k}dV_H \right|^2 $$ where $\{\lambda_j\}$ are the eigenvalues of $\sqrt{-\Delta}_M,$ and $\{(\mu_k, \psi_k)\}$ are the eigenvalues, resp. eigenfunctions, of $\sqrt{-\Delta}_H$. The inner sums represent the `jumps' of $ N^{c} _{\epsilon, H }(\lambda)$ and reflect the geometry of geodesic c-bi-angles with one leg on $H$ and a second leg on $M$ with the same endpoints and compatible initial tangent vectors $\xi \in S^c_H M, \pi_H \xi \in B^* H$, where $\pi_H \xi$ is the orthogonal projection of $\xi$
to $H$. A c-bi-angle occurs when $\frac{|\pi_H \xi|}{|\xi|} = c$. Smoothed sums in $\mu_k$ are also studied, and give sharp estimates on the jumps. The jumps themselves may jump as $\epsilon$ varies, at certain values of $\epsilon$ related to periodicities in the c-bi-angle geometry. Subspheres of spheres and certain subtori of tori illustrate these jumps. The results refine those of the previous article \cite{WXZ20} where the inner sums run over $k: | \frac{\mu_k}{\lambda_j} - c| \leq \epsilon$ and where geodesic bi-angles do not play a role.
\end{abstract}
\tableofcontents
\section{Introduction}
Let $(M, g)$ be a compact, connected Riemannian manifold of dimension $n$ without boundary, let $\Delta_M = \Delta_g$ denote its Laplacian, and let $\{\varphi_j\}_{j=1}^{\infty} $ be an orthonormal basis of its eigenfunctions, $$(\Delta_M + \lambda_j^2) \varphi_j = 0, \qquad \int_M \varphi_j \overline{\varphi_k} dV_M = \delta_{jk}, $$ where $dV_M$ is the volume form of $g$, and where the eigenvalues are enumerated in increasing order (with multiplicity), $\lambda_0 =0 < \lambda_1 \leq \lambda_2 \cdots \uparrow \infty.$
Let $H \subset M$ be an embedded submanifold of dimension $d \leq n -1$ with induced metric $g |_{H}$, let $\Delta_H$ denote the Laplacian of $(H, g |_H)$, and let
$\{\psi_k\}_{k=1}^{\infty} $ be an orthonormal basis of its eigenfunctions on $H$, $$(\Delta_H + \mu_k^2) \psi_k = 0, \qquad \int_H \psi_j \overline{\psi_k} dV_H = \delta_{jk}, $$
where $dV_H$ is the volume form of $g|_H$. We denote the restriction operator to $H$ by $$\gamma_H: C(M) \to C(H), \;\; \gamma_H f = f |_H. $$ This article is concerned with the Fourier coefficients \begin{equation} \label{FCDEF} a_{\mu_k} (\lambda_j) := \langle \gamma_H \varphi_j, \psi_k \rangle_H = \int_H (\gamma_H \varphi_j) \overline{\psi_k}\, dV_H \end{equation} in the $L^2(H)$-orthonormal expansion, \begin{equation} \label{ONR} \gamma_H \varphi_j (y) = \sum_{k=1}^{\infty} \langle \gamma_H \varphi_j, \psi_k \rangle_H\; \psi_k(y) \end{equation} of the restriction of $\varphi_j$ to $H$.
Our goal is to understand the joint asymptotic behavior of the Fourier coefficients \eqref{FCDEF} as the eigenvalue pair $(\lambda_j, \mu_k)$ tends to infinity along a `ray' or `ladder' in the joint spectrum of $(\sqrt{-\Delta_M} \otimes I, I \otimes \sqrt{- \Delta_H})$ on $M \times H$.
The motivating problem is to determine estimates or asymptotics for all of the Fourier coefficients \eqref{FCDEF} of an individual eigenfunction $\varphi_j$ as $\lambda_j \to \infty$.
This problem originates in the spectral theory of automorphic forms, where $(M,g)$ is a
compact (or finite area, cusped) hyperbolic surface, and where $H$ is a closed geodesic, or a distance circle, or a closed horocycle (in the cusped case). In this case, \eqref{FCDEF} takes the form, \begin{equation} \label{FOURIER} \gamma_H\varphi_j (s)= \sum_{n \in {\mathbb Z}} a_n(\lambda_j) e^{\frac{2 \pi i n s}{L}}, \end{equation} where $L $ is the length of $H$ and $s$ is the arc-length parameter. See Section \ref{HYPERBOLIC} for a
more precise statement when $H$ is a closed horocycle. Classical estimates and asymptotics of Fourier coefficients of modular forms are studied by Rankin \cite{Ra77}, Selberg \cite{Sel65}, Bruggeman \cite{Br81}, Kuznecov \cite{K80}, and many others. Systematic expositions in this context may be found in \cite{Br81, G83, I02}. It is `folklore', and proved also in this article, that $ |a_n(\lambda_k)| = O_{k, \epsilon} (n^{-N})$ for all $N \geq 0$ for $n \geq \lambda_k + \epsilon$, and for any $\epsilon > 0$; see also \cite{Wo04, Xi} for some estimates of this type. If one thinks of $\lambda_k$ as the `energy' and $n$ as the angular momentum, then rapid decay occurs in the ``forbidden region'' where the angular momentum exceeds the energy (see Section \ref{AFSECT}). Hence, we restrict to the case where $|n| \leq \lambda_j$. The motivating question is, how does the dynamics of the geodesic flow of $(M,g)$ and of $(H, g |_H)$ determine the equidistribution properties of the restricted Fourier coefficients?
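To illustrate the allowed range $|n| \leq \lambda_j$ in the simplest possible model (a computation we include only for orientation), consider the flat torus $M = {\mathbb R}^2/2\pi {\mathbb Z}^2$ with $H = \{x_2 = 0\}$. The eigenfunctions are $\varphi_m = (2\pi)^{-1} e^{i \langle m, x \rangle}$, $m \in {\mathbb Z}^2$, with $\lambda = |m|$, and in the notation of \eqref{FOURIER} (with $L = 2\pi$),
$$ \gamma_H \varphi_m(x_1) = (2\pi)^{-1} e^{i m_1 x_1}, \qquad a_n(|m|) = (2\pi)^{-\frac{1}{2}}\, \delta_{n, m_1}. $$
Thus exactly one Fourier coefficient is non-zero, at the frequency $n = m_1$, and automatically $|n| = |m_1| \leq |m| = \lambda$; in the forbidden region $|n| > \lambda$ the coefficients vanish identically, an extreme instance of the rapid decay just described.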
Studying the joint asymptotics of the Fourier coefficients \eqref{FCDEF} for individual eigenfunctions of general compact Riemannian manifolds $(M,g)$ and submanifolds $H \subset M$ is a very difficult problem and all but intractable except in special cases such as subspheres of standard spheres. Asymptotic knowledge of the coefficients \eqref{FCDEF} would a fortiori imply asymptotic knowledge of the $L^2$ norms of restrictions of eigenfunctions,
\begin{equation} \label{PLANCHEREL} \int_H |\gamma_H \varphi_j|^2 d S_H = \sum_{k} |a_{\mu_k}(\lambda_j)|^2, \end{equation} which are themselves only known in special cases. General estimates of \eqref{PLANCHEREL} are proved in \cite{BGT}, but are only sharp in special cases. Knowledge of the distribution of Fourier coefficients and their relations to the geodesic flow is yet (much) more complicated. In the case where $(M,g)$ has ergodic geodesic flow and $H \subset M$ is a hypersurface, the study of the $L^2$ norms of restrictions is known as the QER (quantum ergodic restriction)
problem, i.e. the problem of determining when $\{\gamma_H \varphi_j\}_{j=1}^{\infty}$ (or at least a subsequence of density one) is quantum ergodic along $H$. For instance, if $H$ is a hypersurface of a compact negatively curved manifold, it is shown in \cite{TZ13} that there exists a full density subsequence of the restrictions $\{\gamma_H \varphi_{j_k} \}$ such that \begin{equation} \label{QER} \int_H |\gamma_H \varphi_{j_k}|^2 dS_H = \sum_{m: \mu_m \leq \lambda_{j_k} } |a_{\mu_{m} } (\lambda_{j_k})|^2 \simeq 1, \;\; (k \to \infty). \end{equation} The Fourier coefficient distribution problem is to determine how the `mass' of the Fourier coefficients are distributed.
It has been conjectured that when the geodesic flow is `chaotic' (e.g. for a negatively curved surface), the Fourier coefficients should
exhibit `equipartition of energy', i.e. all be roughly of the same size. By comparison, for standard spherical harmonics $Y_N^m $ on ${\mathbb S}^2$, the restrictions have only one non-zero Fourier coefficient. The general Fourier coefficient problem is to find conditions on $(M, g, H)$ under which the Fourier coefficients $a_{\mu}(\lambda_j)$ concentrate at one particular frequency (tangential eigenvalue), or under which they exhibit equipartition of energy.
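To make the spherical harmonic example concrete (a standard separation of variables computation), write $Y_N^m(\theta, \phi) = c_{N,m} P_N^m(\cos \theta) e^{i m \phi}$ in spherical coordinates, where $P_N^m$ is the associated Legendre function. If $H = \{\theta = \theta_0\} \subset {\mathbb S}^2$ is a latitude circle, then
$$ \gamma_H Y_N^m = c_{N,m} P_N^m(\cos \theta_0)\, e^{i m \phi}, $$
so the restriction is proportional to a single eigenfunction of $H$ and all of its $L^2(H)$-mass sits at the single frequency $m$: the extreme opposite of equipartition of energy.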
In this article, we take the first step in this program by studying the averages in $\lambda_j$ of sums of squares of a localized set of Fourier coefficients. Equivalently, we study the joint asymptotics of \eqref{FCDEF} of the thinnest possible sums over the joint spectrum $\{(\lambda_j, \mu_k)\}_{j, k =1}^{\infty}$. Here, `thin' refers to both the width of the window in the spectral averages in $\lambda_j$, and also the width of the window of Fourier modes $\mu_k$. In comparison, conic or wedge windows are studied in \cite{WXZ20}.
\subsection*{Kuznecov-Weyl sums}
The Kuznecov sum formula
in the sense of \cite{Zel92} refers to the asymptotics of the Weyl-type sums,
\begin{equation} \label{f} N_H(\lambda; f): = \sum_{j: \lambda_j \leq \lambda} \left| \int_H f \varphi_j dV_H \right|^2, \end{equation} where $f \in C^{\infty}(H)$. Here, and henceforth, we drop the restriction operator $\gamma_H$ from $\gamma_H \varphi_j$ if it is obvious from the context that $\varphi_j$ is being restricted. The asymptotics are controlled by the structure of the `common orthogonals', i.e. the set of geodesic arcs which hit $H$ orthogonally at both endpoints. The original Kuznecov formulae pertained to special curves on arithmetic hyperbolic surfaces \cite{K80}, but the Weyl asymptotics have been generalized to submanifolds of general Riemannian manifolds in \cite{Zel92}. Recent improvements of spectral projection estimates under various conditions appear in \cite{ ChS15, CGT17, SXZh17, Wy17a, CG19, WX}. Fine asymptotics in the arithmetic setting are given in \cite{M16}. We refer to these articles for many further references.
In this article, we are interested in the joint asymptotics of $ |\langle \gamma_H \varphi_j, \psi_k \rangle_{L^2(H)}|^2$ as
the pair $(\lambda_j, \mu_k)$ tends to infinity along a ray in ${\mathbb R}^2$. Except for exceptional $(M, g, H)$ it is not possible to obtain asymptotics along a single ray. Instead, we study the joint asymptotics in a thin strip around the ray. Our
main result deploys a smooth version of such a strip, sometimes called `fuzzy ladder asymptotics' in the sense of \cite{GU89}. Let $\psi \in \mathcal{S}({\mathbb R})$ (Schwartz space)
be a positive test function with $\hat{\psi} \in C_0^{\infty}({\mathbb R})$. We then define the fuzzy-ladder sums,
\begin{equation} \label{cpsi}
N^{c} _{\psi, H }(\lambda): =
\sum_{j: \lambda_j \leq \lambda} \sum_{k=0}^{\infty} \psi( \mu_k - c \lambda_j) \left| \int_{H} \varphi_j \overline{\psi_k}dV_H \right|^2.
\end{equation}
The asymptotics of \eqref{cpsi} depend on the geometry of what we term $(c, s, t)$ bi-angles in Section \ref{GEOMSECT}. Roughly speaking, such
a bi-angle consists of two geodesic arcs, one a geodesic of $H$ of length $s$, the other a geodesic of $M$ of length $c s + t$, with common endpoints
and making a common angle ${\rm arccos} \; c$ with $H$. When $t= 0$, the set of such bi-angles is defined by,
\begin{equation} \label{EQs=0} \mathcal{G}^0_c = \{(q, \xi) \in S^*_HM: \; |\pi_H \xi| \; = c |\xi|, \;G_H^{ - s} \circ \pi_H \circ G_M^{c s} (q, \xi) = (q, \pi_H (\xi)) \text{ for some } s \in {\mathbb R}\},
\end{equation} where $\pi_H: T^*_q M \to T_q^*H$ is the orthogonal projection at $q \in H$.
It turns out that the ``spectral edge" case $c=1$ requires different techniques from the case $c < 1$. It is the edge or `interface' between the allowed interval $[0, 1] $ and the complementary `forbidden' intervals of $c$-values (see Section \ref{AFSECT} for discussion). As often happens at interfaces, there is a variety of possible behaviors when $c=1$. As discussed below, the case $c=1$ corresponds
to tangential intersections of geodesics with $H$, which depend on whether
$H$ is totally geodesic or whether it has non-degenerate second fundamental form. For this reason, we separate out the cases $0< c <1 $ and $c=1$ and only study $0< c < 1$ in this article. The case of $c=1$ and $H$
totally geodesic is studied in \cite{WXZ+}. The case of $c=1$ and $H$ with a non-degenerate second fundamental form requires quite different
techniques from the totally geodesic case, and is currently under investigation.
The first result is a ladder refinement of the main result of \cite{WXZ20}. For technical convenience, we assume the test function $\psi$ is even and non-negative. \begin{theorem} \label{main 2} Let $\dim M = n$ and let $\dim H =d$. Let $0 < c <1$ and assume that $\mathcal{G}_c^0$ \eqref{EQs=0} is clean in the sense of Definition \ref{CLEAN}. Then, if $\psi\ge0$ is even, $\hat{\psi} \in C^{\infty}_0({\mathbb R})$, and $\mathrm{supp}\, \hat{\psi}$ is contained in a sufficiently small interval around $s = 0$, there exist universal constants $C_{n,d}$ such that $$N_{\psi, H}^{c} (\lambda) =
C_{n,d} \;\; a_c^0(H, \psi) \lambda^{n-1 } + R_{\psi, H}^{c} (\lambda), \;\; \text{with}\;\; R_{\psi, H}^{c} (\lambda) = O(\lambda^{n-2}),
$$ where the leading coefficient is given by, \begin{equation} \label{acHpsi0} a_c^0(H, \psi): =
\hat{\psi}(0) \; c^{d-1} (1 - c^2)^{\frac{n-d-2}{2}} {\mathcal H}^{d}(H).
\end{equation} \end{theorem}
The definition of ``$\mathrm{supp}\,\; \hat{\psi}$ is sufficiently small" is given in Definition \ref{SUFFSMALLDEF} below. For the moment, we say that when $0 < c < 1$, it means that the only component of \eqref{EQs=0} with $s \in \mathrm{supp}\,\hat{\psi}$ is the component with $s=0$.
It is straightforward to remove the condition that $\rm{supp} \;\hat{\psi}$ is small, but then there are further contributions from other components of $\mathcal{G}_c^0$, which require further definitions and notations. We state the generalization in Theorem \ref{main 5} below.
The coefficient $a_c^0(H, \psi)$ is discussed in Section \ref{SHARPWKINTRO}.
Under additional dynamical assumptions on the geodesic flow $G^t_M$ of $(M,g)$ and the geodesic flow $G_H^t$ of $(H, g|_{H})$, one can improve the remainder estimate of Theorem \ref{main 2} (see Theorem \ref{main 2b} and Theorem \ref{main 3}.) The improvement requires the inclusion of all components of $\mathcal{G}_c^0$ and therefore requires the generalization Theorem \ref{main 5} of Theorem \ref{main 2}. Since it requires further notation and definitions, we postpone it until later in the introduction.
\subsection{Jumps in the Kuznecov-Weyl sums} To obtain results for individual eigenfunctions (more precisely, for individual eigenvalues) by the techniques of this article, we study the jumps, \begin{equation} \label{JDEFpsi} J^{c} _{ \psi, H }(\lambda_j): = \sum_{\ell: \lambda_{\ell} = \lambda_j}
\sum_{ k =0}^{\infty} \psi( \mu_k - c \lambda_j) \left| \int_{H} \varphi_{\ell} \overline{\psi_k}dV_H \right|^2,\end{equation} in the Kuznecov-Weyl sums \eqref{cpsi} at the eigenvalues $\lambda_j$. The sum over $\ell$ is a sum over an orthonormal basis for the eigenspace ${\mathcal H}(\lambda_j)$ of $-\Delta_M$ of eigenvalue $\lambda_j^2$. Since the leading term is continuous, the jumps are jumps of the remainder, \begin{equation} \label{JUMPREM} J^{c}_{\psi, H}(\lambda_j) = R_{\psi, H}^{c} (\lambda_j + 0 ) - R_{\psi, H}^{c} (\lambda_j - 0). \end{equation} By \eqref{JUMPREM} and Theorem \ref{main 2}, we get \begin{corollary} \label{main 2cor} With the same assumptions and notations as in Theorem \ref{main 2}, for any positive even test function $\psi$ with $\hat{\psi} \in C_0^{\infty}({\mathbb R})$, there exists a constant $C_{c, \psi} > 0$ such that $$
J_{\psi, H}^{c} (\lambda) \leq C_{c, \psi}\; \lambda^{n-2}, \qquad 0 < c < 1. $$ \end{corollary} Corollary \ref{main 2cor} has a further implication on the `sharp jumps' where we replace $\psi$ by an indicator function, ${\bf 1}_{[-\epsilon, \epsilon]}$. \begin{equation} \label{JDEF} J^{c} _{ \epsilon, H }(\lambda_j): = \sum_{\ell: \lambda_{\ell} = \lambda_j}
\sum_{ \substack{k: | \mu_k - c \lambda_j| \leq \epsilon}} \left| \int_{H} \varphi_{\ell} \overline{\psi_k}dV_H \right|^2. \end{equation}
By choosing $\psi$ carefully in Corollary \ref{main 2cor} we prove,
\begin{corollary} \label{JUMPCOR} With the above notation and assumptions, $$
J^{c}_{\epsilon, H}(\lambda_j) = O_{c,\epsilon} (\lambda_j^{n-2}), \qquad 0 < c < 1. $$ \end{corollary} This estimate is shown to be sharp for closed geodesics in general spheres ${\mathbb S}^n$ in Section \ref{SnSHARPSECT}. The sharpness of the jump estimates is intimately related to the periodicity properties of both $G^t_M$ and $G^t_H$, and is discussed in detail in Section \ref{CLUSTERSECT}. The dependence on $\epsilon$ in Corollary \ref{JUMPCOR} is complicated in general, as illustrated in Section \ref{S2SPARSE} and Section \ref{SnSHARPSECT}. This is because of potential concentration of the Fourier coefficients at joint eigenvalues $(\lambda_j, \mu_k)$ at the `edges' or endpoints of the strip of width $\epsilon$ around a ray. See Lemma \ref{JUMPSPHERE} for the explicit formula in the case of subspheres of spheres.
When $\dim M =2,\, \dim H=1$, the eigenvalues of $|\frac{d }{ds}|$ on $H$ are of course of the form $\frac{2\pi n}{L}$, $n \in {\mathbb N}$, where $L$ is the length of $H$, and are therefore separated by gaps of size depending on the length. The jumps \eqref{JDEF} can themselves jump as $\epsilon $ varies, as illustrated by restrictions from spheres to subspheres in the same section. See also Section \ref{REMAINDERSECT}
for further discussion. These edge effects arise again in Theorem \ref{main 3}. For $\epsilon$ sufficiently small, only one eigenvalue will occur in the sum over $k: |\mu_k - c \lambda_j| \leq \epsilon$ and therefore Remark \ref{GENCOR} gives a bound on the size of an individual Fourier coefficient of an individual eigenfunction under the constraints on $(c, H)$ in the remark. We refer to Section \ref{2Sphere} for examples of curves on ${\mathbb S}^2$. We also refer to the second author's work \cite{Xi} and the first two authors' work \cite{WX} for prior results on bounds on Fourier coefficients.
\begin{remark} \label{GENCOR}
For a generic metric $g$ on any manifold $M$, the spectrum of the Laplacian is simple, i.e. all eigenvalues have multiplicity one \cite{U}. In this case, the sum over $\ell$ in \eqref{JDEF} reduces to a single term, and hence one gets,
\begin{equation} \label{INDIVIDEST} \left| \int_{H} \varphi_j \overline{\psi_k}dV_H \right|^2 \leq J^{c}_{\epsilon, H}(\lambda_j):=
\sum_{ \substack{k: | \mu_k - c \lambda_j| \leq \epsilon}} \left| \int_{H} \varphi_{j} \overline{\psi_k}dV_H \right|^2, \;\; \forall k: |\mu_k - c \lambda_j| \leq \epsilon. \end{equation}
\end{remark}
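For instance, if $\dim H = 1$ and $H$ has length $L$, the tangential eigenvalues $\mu_k = \frac{2\pi k}{L}$ are separated by gaps of size $\frac{2\pi}{L}$, so for any $\epsilon < \frac{\pi}{L}$ at most one tangential eigenvalue lies in the window $[c\lambda_j - \epsilon, c\lambda_j + \epsilon]$, and \eqref{INDIVIDEST} then bounds a single Fourier coefficient of $\varphi_j$.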
\subsection{Allowed and forbidden joint eigenvalue regions}\label{AFSECT} We briefly indicate why we restrict to $c \in [0,1]$. The reason is that the asymptotics are trivial outside this range.
\begin{lemma}\label{FORBIDDEN} If $\mu_k/\lambda_j \geq 1+ \epsilon$, then for any $N \geq 1$,
$ \left| \int_{H} \varphi_j \overline{\psi_k}dV_H \right|^2 \leq C_N(\epsilon) \;\lambda_j^{- N}$. Hence, \eqref{JDEF} for $c > 1$ is rapidly decaying in $\lambda$.
\end{lemma}
The proof is contained in the proof of Theorem \ref{main 3}. The main point of the proof is that, if one uses a cutoff $\psi( \mu_k - c \lambda_j)$ with $c > 1$, then
$N^c_{\psi, H}(\lambda)$ is rapidly decaying. This is also proved in detail in \cite{Xi} for the case $\dim H = 1$.
We will not comment on this issue further in this article. As Lemma \ref{FORBIDDEN} indicates, $c =1$ is an `interface' in the spectral asymptotics, separating an allowed and
a forbidden region. It would be interesting to study the interface asymptotics around $c=1$, i.e. the transition
from the asymptotics of Theorem \ref{main 2} to trivial asymptotics for $c > 1$;
we plan to study the scaling asymptotics around $c=1$ in future work.
\subsection{Geometric objects: $(c, s,t)$ bi-angles}\label{GEOMSECT}
We now give the geometric background to Theorem \ref{main 2} and Theorem \ref{main 3}, in particular defining the relevant notion of `cleanliness' and explaining the coefficients. We will need some further notation (see \cite{TZ13} for further details). Let $G_M^t$, resp. $G^t_H$ denote the geodesic flow of $(M,g)$ on $\dot{T}^*M$, resp. $\dot{T}^* H$ (since the flow is homogeneous, and $S^*M$ is invariant,
it is sufficient to consider the flow on $S^*M$).
\begin{remark} \label{NOTATION} Notational conventions: For any manifold $X$ we denote by
$\dot{T}^* X = T^* X \backslash 0$ the punctured cotangent bundle. All of the canonical relations in this paper
are homogeneous and are subsets of $\dot{T}^* X$. \end{remark}
We denote by $S^*_H M$ the covectors $(y, \xi) \in S^*M$ with footpoint $y \in H$, and by
$\pi_H(y, \xi) = (y, \eta) \in B^*H$ the orthogonal projection of $\xi$ to the unit coball $B_y^*H$ of $H$ at $y$.
We also denote by
$T^c_H M$ the cone of covectors in $\dot T^*_HM$ making an angle $\theta$ to $T^* H$ with
$\cos \theta = c$. It is the cone through
\begin{equation} \label{SHc}
S^c_H M = \{(y, \xi) \in S^*_H M: |\pi_H \xi| = c \}. \end{equation}
\begin{definition} \label{BIANGLEDEF}
By a $(c, s, t)$-bi-angle through $(q, \xi) \in S^c_H M$, we mean a periodic, once-broken orbit of the composite geodesic flow solving the equations,
\begin{equation} \label{EQ} \begin{array}{l} G_H^{ - s} \circ \pi_H \circ G_M^{cs + t} (q, \xi) = \pi_H(q, \xi),\;\; (q, \xi) \in S^c_H M
\end{array} \end{equation}
Thus, a bi-angle is a pair of geodesic arcs with common endpoints, for which the $M$-length is $cs +t$ and whose $H$ length is $s$.
\end{definition}
Note that the geometry changes considerably when $c=1$ and $H$ is totally geodesic. In this case, when $t=0$, a $(1, s, 0)$ bi-angle is a geodesic arc of $H$, traced forward for time $s$ and backwards for time $s$. The consequences are explored in \cite{WXZ+}.
The projection of a $(c, s, t)$ bi-angle to $M$ consists of a geodesic arc of $M$ of length $cs + t$, with both endpoints on $H$, making the angle
$\theta$ with $\cos \theta = c$ at both endpoints $q$ resp. $ q'$, and an $H$-geodesic arc of oriented $H$-length $ -s$ with initial velocity
$(q, \pi_H \xi)$ and with terminal velocity $(q', \pi_H G^{cs+t}_M(q, \xi))$. Equivalently, let $\gamma^M_{x, \xi}$ denote the geodesic of $M$ with initial data $(x, \xi) \in S^*_x M$,
and let $\gamma^H_{y,\eta}$ denote the geodesic of $H$ with initial data $(y, \eta) \in \dot{T}^*H$. Then the
defining property of a $(c, S, T)$-bi-angle is that it consists of a geodesic arc $\gamma^M_{x, \xi}$ of $M$ of length $T$, and a geodesic arc $\gamma^H_{x, \eta}$ of $H$ of length $S$,
such that:
$$\left\{ \begin{array}{l}
\gamma^M_{x, \xi}(0)= x = \gamma^H_{x, \eta}(0) \in H, \pi_H \dot{\gamma}^M_{x, \xi}(0) = \dot{\gamma}^H_{x, \eta}(0); \\ \\
| \dot{\gamma}^M_{x, \xi}(0)| = 1, | \dot{\gamma}^H_{x, \eta}(0)| = c; \\ \\
\gamma^M_{x, \xi}(T)= \gamma^H_{x, \eta}(S) , \;\; \pi_H \dot{\gamma}^M_{x, \xi}(T) = \dot{\gamma}^H_{x, \eta}(S).
\end{array} \right. $$
Some examples on the sphere ${\mathbb S}^2$ are given in Section \ref{SHARPSECT}.
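For orientation, here is one explicit bi-angle, worked out under the conventions of Definition \ref{BIANGLEDEF}. Let $M = {\mathbb S}^2$, let $H$ be the equator and fix $q \in H$ and $\xi \in S^c_q {\mathbb S}^2$. The great circle with initial data $(q, \xi)$ makes the angle $\arccos c$ with $H$ at $q$, passes through the antipode $-q$ at arc-length $\pi$, and there its unit tangent again projects to the direction of the equatorial arc from $q$ to $-q$, scaled by $c$. Since the equatorial arc also has length $\pi$, we obtain, for every such $(q, \xi)$ and every $0 < c < 1$, a bi-angle with $H$-length $s = \pi$ and $M$-length $cs + t = \pi$, i.e. a $(c, \pi, (1-c)\pi)$-bi-angle.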
We denote the set of all solutions with a fixed $c$ by \begin{equation} \label{gcalc} \mathcal{G}_c = \{(s, t, q, \xi) \in {\mathbb R} \times {\mathbb R} \times S^c_H M: \eqref{EQ} \text{ is satisfied}\}. \end{equation} For $t$ fixed, we also define \begin{equation} \label{gcalct}
\mathcal{G}_c^t = \{(s, y, \xi) \in {\mathbb R} \times S_H^cM: (s, t, y, \xi) \in \mathcal{G}_c\}. \end{equation} We decompose $\mathcal{G}_c$ into the subsets, $$\mathcal{G}_c = \mathcal{G}_c^0 \bigcup \mathcal{G}_c^{\not=0}, \;\;\; \mathcal{G}_c^0 = \mathcal{G}_c^{0,0} \bigcup \mathcal{G}_c^{0, \not=0}, $$ where (cf. \eqref{EQs=0})
\begin{equation} \label{gcalc0} \left\{ \begin{array}{l} \mathcal{G}_c^0 = \{(s, 0, y, \xi): (s, y, \xi) \in {\mathbb R} \times S_H^cM, \; \eqref{EQ} \text{ is satisfied}\}, \\ \\
\mathcal{G}_c^{0,0} = \{(0,0, y, \xi): (y, \xi) \in S_H^cM\} \simeq S^c_H M, \\ \\
\mathcal{G}_c^{\not= 0} = \{(s,t, y, \xi) \in {\mathbb R} \times {\mathbb R} \times S_H^cM: t \not= 0, \; \eqref{EQ} \text{ is satisfied}\}. \end{array} \right. \end{equation}
The principal term of \eqref{cpsi} only involves the set $ \mathcal{G}_c^0$.
When $c < 1$, or when $c=1$ and $H$ has non-degenerate
second fundamental form, the equation cannot hold for small enough $s$ because the $M$-geodesic
between the endpoints must be shorter than the $H$ geodesic. More formally, we state,
\begin{lemma} \label{ISOLATED} When $0 < c < 1$, or if $c =1$ and $H$ has non-degenerate second fundamental form, $\mathcal{G}_c^{0,0}$ is a connected component of $\mathcal{G}_c^0$. If $H$ is totally geodesic and $0 < c < 1$, $\mathcal{G}_c^{0,0}$ is also a connected component, i.e. there do not exist any $(c, s, 0)$ bi-angles when $s \not= 0$ is sufficiently small (depending on $c$). When $c=1$ and $H$ is totally geodesic, $\mathcal{G}_1^{0,0}$ is not a connected component of $\mathcal{G}_1^0$. \end{lemma}
In general, the order of magnitude of $N^{c}_{\psi, H}(\lambda)$ depends on the dimension of \eqref{gcalc0} and on its symplectic volume measure. \begin{definition} For $0< c < 1$ the symplectic volume ${\rm Vol}(\mathcal{G}_c^0)$ of \eqref{gcalc0} is the Euclidean measure of $\mathcal{G}_c^0.$ The pushforward volume form acquires an extra factor $(1 - c^2)^{-{\frac{1}{2}}}$ (see \cite{WXZ20} for more details). \end{definition}
The additional components contribute to the main term of the asymptotics only when they have the same dimension as $\mathcal{G}_c^{0,0}$.
\begin{definition} \label{DOMDEF} For $0 < c < 1$ define the `principal component' of $\mathcal{G}_c^0$ to be $\mathcal{G}_c^{0, 0} $.
We say that the principal component is {\it dominant} if $\dim \mathcal{G}_c^{0,0} $ is strictly
greater than the dimension of any other component of $\mathcal{G}_c^0$. \end{definition}
Examples on spheres illustrating the various scenarios are given in Section \ref{SECTEX}. If $c \in {\mathbb Q},\, c < 1$, then $G_{{\mathbb S}^d} ^{-s}$ and $G_{{\mathbb S}^n}^{c s}$ have an arithmetic progression of common periods, hence there are infinitely many components of the fixed point set \eqref{EQs=0} of the same dimension as $\mathcal{G}_c^{0,0}$, all contributing to the main term of the asymptotics. For instance, if $H \subset {\mathbb S}^2$ is a latitude circle, then for $0 < c \leq 1$, $\mathcal{G}_c^{0,0}$ is not dominant, due to periodicity of the geodesic flow and of rotations (see Section \ref{SHARPSECT}). On a negatively curved surface, $\mathcal{G}_c^{0,0}$ is dominant. Note that when $c=1$ and $H$ is totally geodesic, $\mathcal{G}_1^0 = {\mathbb R}_s \times S^*H$ is a single component, so that the notion of principal and dominant component is vacuous in that case. When $c=1$, \eqref{gcalc} consists
of bi-angles formed by an $M$ geodesic hitting $H$ tangentially at both endpoints, closed up by an $H$-geodesic arc through the same endpoint
velocities.
When $c = 0$, $\xi = \nu_y$ is the unit (co-)normal and \eqref{gcalc} consists of $M$-geodesic arcs hitting $H$ orthogonally
at both endpoints. These are the arcs relevant to the original Kuznecov formula of \cite{Zel92}.
To eliminate the contribution of non-principal components when $0 < c < 1$, we assumed in Theorem \ref{main 2} that $\mathrm{supp}\, \hat{\psi}$ is `sufficiently small'. We now define
the term more precisely.
\begin{definition}\label{SUFFSMALLDEF} When $0 < c < 1$, we say that $\mathrm{supp}\, \hat{\psi}$ is sufficiently small if $\mathcal{G}_c^{0,0}$ is the only component of $\mathcal{G}_c^0$ with values $s \in \mathrm{supp}\, \hat{\psi}$. \end{definition}
\subsection{Cleanliness and Jacobi fields}\label{CLEANJF}
As in most asymptotics problems, we will be using the method of stationary phase, mainly implicitly when composing canonical relations. This requires
the standard notion of cleanliness from \cite{DG75}.
\begin{definition} \label{CLEAN} We say that \eqref{gcalc}, resp. \eqref{gcalc0}, is {\it clean} if $\mathcal{G}_c$, resp. $\mathcal{G}_c^0$, is a submanifold of ${\mathbb R} \times {\mathbb R}\times S^c_H M$, resp. ${\mathbb R} \times S^c_H M$, and if its tangent space at each point is the subspace fixed by $D_{\zeta} G_H^{ - s} \circ \pi_H \circ G_M^{cs + t} $ (resp. the same with $s = 0$), where $\zeta$ denotes the $S^c_H M$ variables. \end{definition}
Some examples are given in Section \ref{HSECT}. It is often not very difficult to determine whether $\mathcal{G}_c, \mathcal{G}_c^0$ are manifolds. The tangent cleanliness condition is often difficult to check. It may be stated in terms of {\it bi-angle Jacobi fields}, as follows.
A Jacobi field along a geodesic arc $\gamma$ is the vector field $Y(t)$ along $\gamma(t)$ arising from a 1-parameter variation $\alpha(t; \epsilon)$
of geodesics, with $\alpha(t; 0) = \gamma(t)$. An $(S, T)$ bi-angle consists of an $H$-geodesic arc $\gamma^H(t) $ of length $S$ and
an $M$ geodesic arc $\gamma^M(t)$ of length $T$ which have the same initial and terminal points, and such that the projections to $TH$ of the initial and terminal velocities of $\gamma^M$ equal the initial and terminal velocities of $\gamma^H$. A bi-angle Jacobi field $Y(t)$ arises as the variation vector field of a one-parameter variation $(\gamma_{\epsilon}^M(t), \gamma_{\epsilon}^H(t))$ of
$(S(\epsilon), T(\epsilon))$-bi-angles. It consists of a Jacobi field $J_M(t)$ along $\gamma^M(t) $ and
a Jacobi field $J_H(t) $ along $\gamma^H(t) $ which are compatible at the endpoints. Namely, if we differentiate in $\epsilon$ the equations $$
\begin{array}{l} \gamma^H_{\epsilon} (0) = \gamma^M_{\epsilon} (0),
\gamma^H_{\epsilon} (S(\epsilon)) = \gamma^M_{\epsilon} (T(\epsilon) ), \\ \\ \pi_* \dot{\gamma}^M_{\epsilon} (0) = \dot{\gamma}^H_{\epsilon}(0),\;\;
\pi_* \dot{\gamma}^M_{\epsilon}(T(\epsilon)) = \dot{\gamma}^H_{\epsilon}(S(\epsilon)), \ |\dot{\gamma}^H_{\epsilon} (0)| = c \end{array}$$
at $\epsilon =0$ we get
$$J_H(0) = J_M(0), \;\; J_H(S) + \dot{\gamma}^H(S)\, \dot{S} = J_M(T) + \dot{\gamma}^M(T)\, \dot{T},$$
together with two equations for $\frac{D}{Dt} J_H, \frac{D}{Dt} J_M.$
\begin{definition} A bi-angle Jacobi field is a pair $(J_H(t), J_M(t))$ of Jacobi fields, as above, along the arcs of the bi-angle which satisfy
the above compatibility conditions at the endpoints. \end{definition}
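For example (only as an illustration, continuing the equatorial picture on the sphere), let $M = {\mathbb S}^2$, let $H$ be the equator, and let $R_{\epsilon}$ denote rotation by angle $\epsilon$ about the polar axis. Since $R_{\epsilon}$ is an isometry preserving $H$, it carries any $(c, S, T)$-bi-angle to another $(c, S, T)$-bi-angle; differentiating in $\epsilon$ yields a bi-angle Jacobi field $(J_H, J_M)$ given by restricting the generating Killing field to the two arcs, with $\dot{S} = \dot{T} = 0$. In the terminology below, this is a horizontal bi-angle Jacobi field, since it arises by moving the footpoint along $H$.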
In defining variations of bi-angles, we may allow the angle parameter $c$ to vary with $\epsilon$ in a variation, or to hold it fixed. In Definition \ref{CLEAN}, the parameter $c$ is held
fixed, and therefore all variations must have the same value of $c$.
As with standard Jacobi fields, the bi-angle Jacobi field arises by varying the initial tangent vector $(x, \xi) \in S^*_x M, x \in H$
along $S^*_H M$. A `horizontal' bi-angle Jacobi field is one that arises by varying $x \in H$, and a vertical bi-angle Jacobi field arises
by fixing $x$ but varying the initial direction $\xi$.
\begin{remark} The vectors in the subspace fixed by $D_{\zeta}\big( G_H^{-s} \circ \pi_H \circ G_M^{cs +t}\big)$ always correspond
to bi-angle Jacobi fields along the associated bi-angle. Lack of cleanliness occurs if there exists a bi-angle Jacobi field which
does not arise as the variation vector field of a variation of $(c, S, T)$-bi-angles of $(M, g, H)$. \end{remark}
Cleanliness is often difficult to verify. We refer to Section \ref{HSECT} for further discussion and un-clean examples.
\subsection{Outline of the proof of Theorem \ref{main 2} and further results} To prove Theorem \ref{main 2} we first prove the related result where the indicator functions are replaced by smooth test functions. The exposition will set the stage for the further results (Theorem \ref{main 5} and Theorem \ref{main 3}).
The sharp sums over
$\lambda_j$ in \eqref{c} and half-sharp sums \eqref{cpsi} may be replaced by `twice-smoothed' sums with shorter windows if we introduce a second eigenvalue cutoff $\rho \in \mathcal{S}({\mathbb R})$ with
$\hat{\rho} \in C_0^{\infty}({\mathbb R})$ and define, \begin{equation} \label{cpsirho}
N^{c} _{\psi, \rho, H }(\lambda): =
\sum_{j, k} \rho(\lambda - \lambda_j) \psi( \mu_k - c \lambda_j) \left| \int_{H} \varphi_j \overline{\psi_k}dV_H \right|^2.
\end{equation}
Under standard `clean intersection hypotheses,' the sums \eqref{cpsirho} admit complete asymptotic expansions. They are the raw data
of the problem, to which the methods of Fourier integral operators apply. To obtain the sharper results, we apply some Tauberian
theorems, first (when $\psi \geq 0$) to replace $\rho$ by the corresponding indicator function to obtain two-term asymptotics for \eqref{cpsi}. Then, in Theorem \ref{main 2},
we apply a second Tauberian theorem to replace $\psi$ (when possible) by the corresponding indicator function. In the following, we assume that
$\mathrm{supp}\,\hat\psi$ is `sufficiently small' as defined in Definition \ref{SUFFSMALLDEF}.
To study the asymptotics of \eqref{cpsirho}, we express the left side as the oscillatory integral, \begin{equation}\label{SpsiDEF} \begin{array}{lll} N^{c} _{\psi, \rho, H }(\lambda) & = & \int_{{\mathbb R}} \hat{\rho}(t) e^{-it \lambda} S^c(t, \psi)dt, \;\; \text{ where} \\&&\\
S^c(t, \psi): & = & \sum_{j,k} e^{it \lambda_j } \psi (\mu_k-c \lambda_j) \left| \int_H \varphi_{j} \overline{\psi_k}\, dV_H \right|^2, \end{array} \end{equation}
show that $S^c(t, \psi)$ is a Lagrangian distribution on ${\mathbb R}$, and determine its singularities (Proposition \ref{MAINFIOPROP}).
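For the reader's convenience, we record the elementary identity behind the first line of \eqref{SpsiDEF}: with the normalization of the Fourier transform chosen so that $\rho(u) = \int_{{\mathbb R}} \hat{\rho}(t) e^{-iut}\, dt$, substituting into \eqref{cpsirho} and interchanging the sums and the integral gives
$$ N^{c}_{\psi, \rho, H}(\lambda) = \sum_{j,k} \psi(\mu_k - c\lambda_j) \left| \int_H \varphi_j \overline{\psi_k}\, dV_H \right|^2 \int_{{\mathbb R}} \hat{\rho}(t)\, e^{-it(\lambda - \lambda_j)}\, dt = \int_{{\mathbb R}} \hat{\rho}(t) e^{-it\lambda} S^c(t, \psi)\, dt. $$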
\begin{definition} \label{SOJOURNDEF} Let $\Sigma^c(\psi ) = \operatorname{sing\, supp} S^c(t, \psi)$ be the set of singular points of $S^c(t,\psi)$. $\Sigma^c(\psi)$ is called the
set of `sojourn times', and consists
of $t$ for which there exist solutions of \eqref{EQ} with $s \in \mathrm{supp}\, \hat{\psi}.$
\end{definition} We then have, \begin{theorem} \label{main 4} Let $\dim M = n,\, \dim H = d$. Let $\psi, \rho \in \mathcal{S}({\mathbb R})$ with $\hat{\psi}, \hat{\rho} \in C_0^{\infty}({\mathbb R})$. Assume that $\mathrm{supp}\, \hat{\rho} \cap \Sigma^c(\psi) = \{0\}$ and $\hat{\rho}(0) =1$. Also assume that $\mathrm{supp}\, \hat{\psi}$ is sufficiently small in the sense of Definition \ref{SUFFSMALLDEF} . Let $c \in (0,1)$ and assume that $\mathcal{G}_c^0$ is clean in the sense of Definition \ref{CLEAN}. Then, there exists a complete asymptotic expansion of $N^{c} _{\psi, \rho, H }(\lambda)$ with principal terms, $$ N^{c} _{\psi, \rho, H }(\lambda) = C_{n, d} \; a_c^0(H, \psi) \lambda^{n-2 } + O( \lambda^{n-3}), \qquad (0 < c < 1). $$ where the leading coefficient is given by \eqref{acHpsi0}. \end{theorem}
While this theorem follows as a corollary to \cite[Theorem 1.4]{WXZ20}, our proof here also admits a more general statement (see Theorem \ref{main 5}). Theorem \ref{main 4} is proved in Section \ref{ASYMPTOTICSECT}. Note that the order of asymptotics is $1$ less than in Theorem \ref{main 2} and Theorem \ref{main 3}. This is due to the fact that the $\lambda$ intervals are `thin' in Theorem \ref{main 4} and `wide' in the previous theorems.
To obtain the `sharp' Weyl asymptotics of Theorem \ref{main 2} in the $\lambda$-variable, we need to replace $\rho$ by an indicator function. This is done in Section \ref{SS Tauberian} by a standard Tauberian theorem.
\subsection{Asymptotics when $\mathrm{supp}\, \hat{\psi}$ is arbitrarily large}\label{supplarge}
Theorem \ref{main 2} and Theorem \ref{main 4} both assume that $\mathrm{supp}\, \hat{\psi}$ is sufficiently small in the sense of Definition \ref{SUFFSMALLDEF}. They readily admit generalizations to any $ \hat{\psi} \in C_0^{\infty}$ where we take into account the additional components of $\mathcal{G}_c^0$ coming from large $s$. Only the components of $\mathcal{G}_c^0$ of the same maximal dimension as the principal component contribute to the principal term of the asymptotics.
In the notation of \cite[Theorem 4.5]{DG75}, we are assuming that the set of $(c, s, 0)$-bi-angles with $t = 0$ is a union of clean components $Z_j(0)$ of dimension $d_j$. In our situation $Z_j(0)$ is a component of $ \mathcal{G}^0_c$. Then, for $t$ sufficiently close to $0$, each $Z_j(0)$ gives rise to a Lagrangian distribution $\beta_j(t)$ on ${\mathbb R}$ with singularities only at $t=0$, such that, \begin{equation}\label{betaeq} \begin{array}{l} S^c(t, \psi) = \sum_j \beta_j(t), \;\; \text{ where} \;\; \beta_j(t) = \int_{{\mathbb R}} \alpha_j(s) e^{- i s t} ds, \\ \\ \text{ with}\;\; \alpha_j(s) \sim (\frac{s}{2 \pi i})^{ -1 + {\frac{1}{2}} (n -d)+\frac{d_j}{2}}\;\; i^{- \sigma_j} \sum_{k=0}^{\infty} \alpha_{j,k} s^{-k}, \end{array} \end{equation} where $d_j $ is the dimension of the component $Z_j(0) \subset \mathcal{G}_c^0$.
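As a consistency check on the exponents (a dimension count using the fiber description of $S^c_H M$ in Section \ref{SHARPWKINTRO}), the principal component $\mathcal{G}_c^{0,0} \simeq S^c_H M$ fibers over $H$ with fiber $\simeq {\mathbb S}^{d-1} \times {\mathbb S}^{n-d-1}$, so that
$$ d_0 := \dim \mathcal{G}_c^{0,0} = d + (d-1) + (n-d-1) = n + d - 2, $$
and the exponent in \eqref{betaeq} is $-1 + \tfrac{1}{2}(n-d) + \tfrac{1}{2}(n+d-2) = n-2$, in agreement with the order $\lambda^{n-2}$ of the principal term in Theorem \ref{main 4}.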
\begin{definition}\label{MAXDEF} We say that a connected component $Z_j(0)$ of $\mathcal{G}_c^0$ is {\it maximal} if its dimension is the same as $\mathcal{G}_c^{0,0}$. We denote the set of maximal components by $\{Z_j^m(0) = \mathcal{G}_{c, j}^{0, m}\}_{j=1}^{\infty}$. When $0 < c < 1$, $s$ is constant on each maximal component and we denote these values of $s$ by $s_j^m$. \end{definition}
Existence of a maximal component other than the principal component $\mathcal{G}_c^{0,0}$ implies that there exists periodicity in the geodesic flows $G^t_M$ and $G_H^t$ (see Section \ref{CLUSTERSECT}),
and that $c$ is a special value for which the flows $G^{-s}_H$ and $G^{cs}_M$ have common periods. For instance, when $H = {\mathbb S}^d \subset {\mathbb S}^n$, and when $c = \frac{p}{q} \in {\mathbb Q}$ in lowest terms, there exist maximal components when $s = 2 k \pi q$ with $k = 1, 2, \dots$, since then $G^{-s}_{{\mathbb S}^d} = G^{-2 k \pi q }_{{\mathbb S}^d} = {\rm Id}$ and $G^{cs}_{{\mathbb S}^n} = G^{2 k \pi p }_{{\mathbb S}^n} = {\rm Id}$, so the left side of \eqref{EQs=0} is the identity. This is the geometric reason behind the discontinuous behavior of the jumps in Section \ref{S2SPARSE}. Recall $\Sigma^c(\psi)$ from Definition \ref{SOJOURNDEF} for the following theorem.
\begin{theorem} \label{main 5} Let $\dim M = n, \dim H = d$. Let $\psi, \rho \in \mathcal{S}({\mathbb R})$ with $\hat{\psi}, \hat{\rho} \in C_0^{\infty}({\mathbb R})$. Assume that
$\hat{\rho}(0) =1$ and that $\mathrm{supp}\, \hat{\rho} \cap \Sigma^c(\psi) = \{0\}$. Let $0 < c <1$ and assume that $\mathcal{G}_c^0$ is clean in the sense of Definition \ref{CLEAN}. Then, there exists a complete asymptotic expansion of $N^{c} _{\psi, \rho, H }(\lambda)$ with principal terms, $$ N^{c} _{\psi, \rho, H }(\lambda) =
C_{n, d} \; a_c^0(H, \psi) \lambda^{n-2 } + O_\psi( \lambda^{n-5/2}), \qquad 0 < c < 1,
$$ where the leading coefficient is given by the following sum over the maximal components $Z_j^m(0)$ of $\mathcal{G}_c^0$, $$a_c^0(H, \psi): =
\left(\sum_j \hat{\psi}(s_j^m) \right) \; c^{d-1} (1 - c^2)^{\frac{n-d-2}{2}} {\mathcal H}^{d}(H).
$$ Moreover, if we replace $\rho$ by ${\bf 1}_{[0, \lambda]}$, then $$ N^{c} _{\psi, H }(\lambda) = C_{n, d} \; a_c^0(H, \psi) \lambda^{n-1 } + O_{\psi}( \lambda^{n-3/2}). $$
\end{theorem}
The exponent $n-5/2$ in the second term of the asymptotics can arise from components $Z_j(0)$ of dimension $d_j$ possibly one less than maximal. This drops the exponent of the top term of $\beta_j$ of \eqref{betaeq} by $1/2$. If no such components occur, the remainder is of order $\lambda^{n-3}$. See Section \ref{3PROOFS} for the proofs of Theorem \ref{main 4} and Theorem \ref{main 5}.
\begin{remark} The formula for $a_c^0(H, \psi)$ reflects the fact that, under the cleanliness hypothesis of Definition \ref{CLEAN}, each maximal component $Z_j^m(0) \simeq \mathcal{G}_c^{0,0}$ must be essentially the same as the principal component modulo the change in the $s$ parameter, hence the corresponding leading coefficient $\alpha_{j, 0}$ in \eqref{betaeq} is the same as for the principal component.
The only essential change to the principal coefficient in Theorem \ref{main 4} is that it is now necessary to compute the sum of the canonical symplectic volumes of all maximal components with $s_j^m \in \mathrm{supp}\,\; \hat{\psi}$. When $c < 1$, the set of such $s_j^m$ is discrete and thus we sum $\hat{\psi}$ over this finite set of parameters. See Section \ref{CLUSTERSECT} for a discussion of the periodicity properties necessary to have maximal components.
\end{remark} \subsection{Two term asymptotics with small oh remainder} \label{REMAINDERSECT}
Theorem \ref{main 2}, Theorem \ref{main 4} and Theorem \ref{main 5} assume that $\mathrm{supp}\, \hat{\rho} \cap \Sigma^c(\psi) = \{0\}$ and $\hat{\rho}(0) =1$. By allowing general $\rho \in C^{\infty}({\mathbb R})$ with $\hat{\rho} \in C_0^{\infty}$, and by studying long-time asymptotics of $G^t_M$ and $G^t_H$, it is possible under favorable circumstances to prove two-term asymptotics of the type initiated by Y. Safarov (see \cite{SV} for background). From Theorem \ref{main 5}, we note that the order of the second term depends on the geometry of $\mathcal{G}_c$. When the second term of Theorem \ref{main 5} has order $\lambda^{n-3}$, the two term asymptotics have the following form: for any $\epsilon >0$, we have as $\lambda \to \infty$,
\begin{equation}\begin{array}{lll} C_{n, d} \; a_c^0(H, \psi) \lambda^{n - 1 } + \mathcal{Q}^c_{H, \psi} (\lambda - \epsilon ) \lambda^{n-2} - o(\lambda^{n-2}) & \leq N^c_{\psi, H}(\lambda) & \\ && \\
\leq C_{n, d} \; a_c^0(H, \psi) \lambda^{n - 1 } + \mathcal{Q}^c_{H, \psi} (\lambda + \epsilon) \lambda^{n-2} + o(\lambda^{n-2}),&& \end{array} \end{equation} where $\mathcal{Q}^c_{H, \psi}(\lambda)$ is an oscillatory function determined by the singularities for all $t$ (see Section \ref{SINGtnot0}).
The rather unusual notion of second term asymptotics is necessary and was first proved by Safarov for the pointwise Weyl
law or the fully integrated (over $M$) Weyl law; see \cite{SV} and also \cite[(29.2.16)- (29.2.17)]{HoIV}. Depending on the periodicity properties of $G_M^t$ and $ G_H^s$, $\mathcal{Q}^c_{H, \psi}(\lambda)$ can be continuous or have jumps.
Existence of jumps in $\mathcal{Q}^c_{H, \psi}(\lambda)$ is not a sufficient condition for jumps of size $\lambda^{n-2}$ in $N^c_{\psi, H}(\lambda) $ but it is a necessary one and can be used to analyze situations where maximal jumps occur.
In view of the many possible types of phenomena discussed in Section \ref{SHARPSECT}, it is a lengthy additional problem to prove such two term asymptotics and in particular to calculate $\mathcal{Q}^c_{H, \psi}(\lambda)$ explicitly, and we defer their study to a future article. In this section, we state some results on situations where $\mathcal{Q}^c_{H, \psi}= 0$.
The next result improves the remainder estimate in Theorem \ref{main 2} under the dynamical hypothesis that the $t=0$ singularity is dominant in the sense of Definition \ref{DOMDEF}.
\begin{theorem} \label{main 2b} With the notation and assumptions as in Theorem \ref{main 2}, we assume $\hat{\psi}$ has small support. We further assume that the singularity at $t=0$ is dominant, i.e. there do not exist maximal components $Z_j(T)$ for $T \not=0.$
Then,
$$ \begin{array}{l} N^{c} _{\psi, H }(\lambda) = C_{n, d} \; a_c^0(H, \psi) \lambda^{n - 1 } + R_{\psi, H}^c(\lambda), \;\; \text{ where}\\ \\
R_{\psi, H}^c(\lambda) \; = \; o_{\psi} (\lambda^{n-2}),\;\;\; J_{\psi, H}^c(\lambda) \; = \; o_{\psi}(\lambda^{n-2}). \end{array}$$
\end{theorem} To prove Theorem \ref{main 2b}, we first need to prove that the coefficient of the term of order $\lambda^{n-2}$ in the expansion of $N^c_{\psi, \rho, H}(\lambda)$ is zero. This is
shown in Section \ref{SUBPRINCIPALSECT}. From Theorem \ref{main 2b} we obtain an improved estimate on the jumps in the
case where both $G^t_M$ and $G^s_H$ are `aperiodic', i.e. where the Liouville measure in $S^*M$, resp. $S^*H$ of the periodic orbits
of $G^t_M$, resp. $G^s_H$ is zero. The following estimate follows directly from Theorem \ref{main 2b} and the same technique of bounding ${\bf 1}_{[-\epsilon, \epsilon]}$ by a test function $\psi $ with $\hat{\psi} \in C_0^{\infty}$ used in Corollary \ref{JUMPCOR}.
\begin{corollary}\label{JUMPCONJ} If both $G^t_M$ and $G^s_H$ are aperiodic (see Section \ref{CLUSTERSECT}), then for any $\epsilon > 0$, $$J_{\epsilon, H}^c (\lambda_j) = o_{\epsilon} (\lambda_j^{n-2}). $$ \end{corollary}
The proof of Corollary \ref{JUMPCONJ} from Theorem \ref{main 2b} follows a well-known path and is sketched in Section \ref{main 2bSECT}.
\subsection{Sharp-sharp asymptotics}\label{SHARPSHARPTHM}
Instead of the fuzzy ladder sums \eqref{cpsi}, it may seem preferable to study the sharp Weyl-Kuznecov sums,
\begin{equation} \label{c}
N^{c} _{\epsilon, H}(\lambda): =
\sum_{j, k: \lambda_j \leq \lambda, \; | \mu_k - c \lambda_j| \leq \epsilon} \left| \int_{H} \varphi_j \overline{\psi_k}dV_H \right|^2,
\end{equation}
in which we constrain the tangential modes $\mu_k$ to lie in an $\epsilon$- window around $c \lambda_j$ for $0 < c < 1$.
\begin{theorem} \label{main 3} Let $\dim M = n$, $\dim H = d$. Let $0 < c < 1$, and assume that $\mathcal{G}_c^0$ is clean in the sense of Definition \ref{CLEAN}. Assume that no component of $\mathcal{G}_c^0$ has maximal dimension except for the principal component (cf. Definition \ref{DOMDEF}). Then, in the notation of Theorem \ref{main 2} - Theorem \ref{main 5}, the Weyl-Kuznecov sums \eqref{c} satisfy:
$$
N^c_{\epsilon, H}(\lambda) = C_{n,d} a_c^0 (H, \epsilon) \lambda^{n - 1} + o (\lambda^{n - 1}), $$ where \begin{equation} \label{acHpsiep} a_c^0(H, \epsilon) := \epsilon c^{d-1} (1 - c^2)^{\frac{n-d-2}{2}} {\mathcal H}^{d}(H), \end{equation} and $C_{n,d}$ is a constant depending only on the dimensions $n$ and $d$. \end{theorem}
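As a model computation illustrating the order of growth and the $\epsilon$-linearity of \eqref{acHpsiep} (we do not verify the cleanliness and dominance hypotheses here, and we take $c$ generic to avoid extra components), consider the flat torus $M = {\mathbb R}^2/2\pi{\mathbb Z}^2$ and $H = \{x_2 = 0\}$, with $\varphi_m = (2\pi)^{-1} e^{i\langle m, x\rangle}$ and $\psi_k = (2\pi)^{-\frac{1}{2}} e^{ikx_1}$. Since $\int_H \varphi_m \overline{\psi_k}\, dV_H = (2\pi)^{-\frac{1}{2}} \delta_{k, m_1}$,
$$ N^c_{\epsilon, H}(\lambda) = \frac{1}{2\pi}\, \# \left\{ m \in {\mathbb Z}^2: |m| \leq \lambda, \;\; \big| |m_1| - c |m| \big| \leq \epsilon \right\}. $$
The counted set is, up to bounded error, the union of four strips of constant width $2\epsilon/\sqrt{1-c^2}$ about the rays $|x_1| = c|x|$, intersected with the disc of radius $\lambda$, so that
$$ N^c_{\epsilon, H}(\lambda) \sim \frac{1}{2\pi} \cdot \frac{8 \epsilon \lambda}{\sqrt{1-c^2}} = \frac{4 \epsilon \lambda}{\pi \sqrt{1-c^2}}, $$
exhibiting the factor $\epsilon\, c^{d-1} (1-c^2)^{\frac{n-d-2}{2}}\, {\mathcal H}^d(H)\, \lambda^{n-1}$ of \eqref{acHpsiep} with $n = 2$, $d = 1$, ${\mathcal H}^1(H) = 2\pi$ (and suggesting the value $C_{2,1} = 2/\pi^2$ in this normalization).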
Note that the remainder estimate is much weaker than for Theorem \ref{main 2b}, which has a hypothesis on the $t \not= 0$ singularities (i.e. on the ``$\rho$ aspect''), and has a smooth test function $\psi$ with $\hat{\psi} \in C_0^{\infty}({\mathbb R})$ instead of ${\bf 1}_{[-\epsilon, \epsilon]}$. Indeed,
by Corollary \ref{JUMPCOR} we get the jump estimate, \begin{equation} \label{epjump} N^c_{\epsilon, H} (\lambda_j) - N^c_{\epsilon, H} (\lambda_j - 0) = J_{\epsilon, H}^c (\lambda_j) = O_{\epsilon} (\lambda_j^{n-2}). \end{equation} The worse remainder in Theorem \ref{main 3} is due to the explicit dependence on $\epsilon$ of the main term $a_c^0 (H, \epsilon)$, which results in two parameters having possible jumps: $\lambda_j$ and $\epsilon$. The $\epsilon$-dependence of the coefficient is discussed in Section \ref{SHARPWKINTRO}.
The large jump size reflects the new `layer' of eigenvalues one gets when $\epsilon$ hits
its critical values, in cases where each `layer' has the same order of magnitude as the principal layer. The layers correspond to
connected components of $\mathcal{G}_c^0$; in the case $c < 1$, the components are indexed by certain values of $s$.
As illustrated in the case of spheres in Section \ref{S2SPARSE}, there can exist large contributions from the edges (endpoints) of the interval $\mu_k - c \lambda_j \in [-\epsilon, \epsilon]$ for special values of $\epsilon$, i.e. $J_{\epsilon, H}^c (\lambda_j)$ itself can jump as $\epsilon$ increases by the amount $\lambda_j^{n-2}$. But by Corollary \ref{JUMPCONJ} again, this cannot happen if $G^t_M$ and $G^s_H$ are aperiodic. Hence the principal component condition is necessary.
To obtain the `doubly sharp' Weyl asymptotics of Theorem \ref{main 3}, we need to replace $\psi$ by an indicator function. This is done in Section \ref{BUSECT} by a Tauberian argument of semi-classical type adapted from Petkov-Robert \cite{PR85}.
\subsection{The principal coefficient $ a_c^0(H, \psi)$ in Theorems \ref{main 2} - \ref{main 3}}\label{SHARPWKINTRO}
There are two aspects (roughly speaking) to the principal coefficient \eqref{acHpsi0}: the volume aspect and the $\psi$-aspect.
In the case where $\hat \psi$ and $\hat \rho$ have small support, the symbol calculations in Theorem \ref{main 3} are contained in \cite{WXZ20}. In this article, we use the symbol calculus of Fourier integral operators under pullback and pushforward as in \cite{GS77, GS13} to calculate symbols.
We note that for $0 < c < 1$, $ c^{d-1} (1 - c^2)^{\frac{n-d-1}{2}} {\mathcal H}^d(H)$ is, up to a dimensional constant absorbed into $C_{n,d}$, the volume of the set $S^*_{H, c} \subset S^*_H M$ projecting to $\mathcal{G}_c^{0,0}$. Indeed, for each $x \in H$, $(x, \xi) \in S^*_{H, c}$ if and only if the components
$\xi = \xi^{\perp} + \xi^{T}$ of $\xi \in S^*_x M$ in the orthogonal decomposition $T_x M = T_x H \oplus (N_x H)$ have norms $\sqrt{1-c^2}$, resp. $c$, so that the fiber of $S^*_{H, c}$ over $x$ is ${\mathbb S}_c^{d-1} \times {\mathbb S}^{n-d-1}_{\sqrt{1 - c^2}}$, where ${\mathbb S}_r^k$ is the Euclidean $k$-sphere of radius $r$; the $(n-2)$-dimensional surface measure of this fiber is $c^{d-1} (1 - c^2)^{\frac{n-d-1}{2}}$ times the
$(n-2)$-volume of ${\mathbb S}^{d-1} \times {\mathbb S}^{n-d-1}.$ The extra factor of $ (1 - c^2)^{-\frac{1}{2}} $ is due to the density $\frac{1}{|\det D \pi_H|}$ of the Leray measure relative to the Euclidean volume measure. The projection $\pi_H: S_q^* M \to B^*_q H$ has a fold singularity along $S_q^*H$ (i.e. where $c=1$) with a one-dimensional kernel, so $\det D \pi_H$ vanishes to order $1$ when $c=1$ and the Leray density blows up, hence the difficulties at the $c = 1$ interface.
Now let us check the $\epsilon$-dependence of $a_c^0(H, \epsilon)$ in Theorem \ref{main 3}. This is different from the analysis above, because the spectral weight ${\bf 1}_{[-\epsilon, \epsilon]}$ has a non-compactly supported Fourier transform and indeed it has very long tails. The hypothesis of Theorem \ref{main 3} rules out the case of ${\mathbb S}^d \subset {\mathbb S}^n$ when $c < 1$ because in the rational case the latter has many maximal dimensional components due to fixed point sets of the periodic Hamilton flow. In the proof, we replace ${\bf 1}_{[-\epsilon, \epsilon]}$ with $\psi_{T, \epsilon}: = \theta_T * {\bf 1}_{[-\epsilon, \epsilon]}$ and
then the principal coefficient is $a_c(H, \psi_{T, \epsilon})$, with remainder of order $\frac{1}{T}$. To obtain the remainder estimate, we take the limit as $T \to \infty$. By Theorem \ref{main 5},
the principal coefficient carries the factor $\hat{\psi}_{T, \epsilon}(0)$ when $c < 1$. As $T \to \infty$,
$\psi_{T, \epsilon} \to {\bf 1}_{[-\epsilon, \epsilon]}$, so $\hat{\psi}_{T, \epsilon}(0) \to 2\epsilon $.
\subsection{Remarks on Tauberian theorems} If we think of
$\lambda_j^{-1} =: \hbar_j$ as the Planck constant, then we are considering eigenvalues of the zeroth order operator $\hbar \sqrt{-\Delta}_H$ in the semi-classical thin interval $|\hbar_j\mu_k - c | \leq \epsilon \hbar_j$ (whose width is one lower order than that of the operator). The Tauberian theorem of Section \ref{BUSECT} is indeed modeled on a semi-classical Tauberian theorem. However, in several essential respects, the sharp jumps and the sharp sums in $\lambda_j $ do not behave in a semi-classical way and the results are homogeneous rather than semi-classical. For instance, the jumps rarely have asymptotic expansions.
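Explicitly, with $\hbar_j = \lambda_j^{-1}$ the constraint defining \eqref{JDEF} rescales as
$$ |\mu_k - c \lambda_j| \leq \epsilon \iff |\hbar_j \mu_k - c| \leq \epsilon \hbar_j, $$
so on the semi-classical scale the window has width $\epsilon \hbar_j$, one order smaller than the $O(1)$ windows of standard semi-classical Weyl laws.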
\subsection{Related results and problems} \label{RELATED}
We first compare the results to those of \cite{WXZ20}.
In comparison to this article, `conic' sums $$ N^{m,c}_{ \epsilon, H }(\lambda): =
\sum_{j, k: \lambda_j \leq \lambda, \; | \frac{\mu_k}{\lambda_j} - c| \leq \epsilon} \left| \int_{H} \varphi_j \overline{\psi_k}dV_H \right|^2$$ are emphasized in \cite{WXZ20}.
The sums $N^{m, c}_{\epsilon, H}$ are wide in $\lambda$ and `conic' in $\mu$, so they are wide in every sense. Hence, we expect to
have asymptotics of these sums with no dynamical hypotheses.
In \cite{Zel92}, the special (and singular) case $c= 0$ was studied, and indeed the Fourier mode
along $H$ was fixed at $0$. The $\lambda$-sums were wide and sharp. In more recent work, \cite{SXZh17, CGT17, CG19, Wy17a}, short and sharp
sums were considered under various geometric and dynamical hypotheses.
Some of the symbol calculations needed for this article are contained in \cite{WXZ20}. We refer there
for the calculations and do not duplicate them here.
\subsubsection{Two-term asymptotics}
As discussed in Section \ref{REMAINDERSECT}, a significant refinement of the results of this article is to prove a two-term
asymptotic expansion for $N^c_{\psi, H}(\lambda)$ and to calculate $\mathcal{Q}^c_{H, \psi}(\lambda)$. The existence of the two-term
asymptotics is sketched in Section \ref{SINGtnot0} but the calculation of $\mathcal{Q}^c_{H, \psi}(\lambda)$ is deferred to later work.
\subsubsection{$c=1$ and H totally geodesic}\label{c=1+} As indicated above, the case $c=1$ is the edge case, and there are several different types of `singular' or extremal behavior in this case. When $H$ is totally geodesic, the asymptotics for $c=1$ are determined
in the subsequent article \cite{WXZ+}. It turns out that the analogue in Theorem \ref{main 2} of the power $\lambda^{n-1}$ is $\lambda^{\frac{n+d}{2}}$,
hence depends on $d$ as well as on $n$.
\subsubsection{ Submanifolds with non-degenerate second fundamental form when $c=1$}\label{CAUSTICSECT} When $c=1$ and $H$ has non-degenerate second fundamental form, one expects diffractive or Airy type effects when $H$ is a caustic hypersurface for the geodesic flow. This type of eigenfunction concentration seems to require Airy integral operator techniques that are beyond the scope of this article. Simple examples of eigenfunctions in this sense on caustic latitude circles of ${\mathbb S}^2$ are presented in Section \ref{2Sphere}. The example of closed horocycles of cusped hyperbolic surfaces is studied in \cite{Wo04}. It would be interesting to see if there exist more exotic examples involving more general singularities than fold singularities.
\subsubsection{Equipartition of energy among Fourier coefficients}
This problem is mentioned around \eqref{QER}. \begin{prob} How is the mass of the Fourier coefficients distributed among the $a_n(\lambda_j)$? \end{prob}
In the case of `chaotic' geodesic flow, it is plausible that the distribution of mass among the Fourier coefficients should be roughly constant, at least away from the endpoints of the allowed interval. It is a very difficult problem to determine if and when the Fourier coefficients are equidistributed, or when they have singular concentration, but the problem guides many of the studies of averaged Fourier coefficients.
\subsubsection{Estimates for individual eigenfunctions}\label{INDIVIDSUBSECT}
It would be most desirable to obtain estimates on individual Fourier coefficients of individual eigenfunctions, but again it is usually only feasible to study averages over thin windows of the Fourier coefficients when $\dim M >2$.
\begin{prob} What are sharp upper bounds on the individual terms \eqref{INDIVIDEST}? Which are the quadruples $(M, g, H, \varphi_j)$ for which the individual Fourier coefficients \eqref{INDIVIDEST} are maximal? Do they have the property that $\gamma_H \varphi_j$ is an eigenfunction of $H$? In that case, \eqref{INDIVIDEST} are the same as $L^2$ norms, for which sharp estimates are given in \cite{BGT} (see Section \ref{BGTREV}). When does \eqref{JDEF} have the same order of magnitude as \eqref{INDIVIDEST}? \end{prob}
It may be expected that the maximal case occurs when $H$ has many almost orthogonal Gaussian beams. This occurs when $H$ is a totally geodesic subsphere of a standard sphere. Gaussian beams may be constructed around elliptic periodic geodesics, but usually as quasi-modes rather than modes. The quasi-modes give rise to jumps in the second term but not jumps in the Weyl-Kuznecov sums.
\subsubsection{When is the restriction of an eigenfunction an eigenfunction?}\label{EIGRESEIG}
It is plausible that restrictions of individual eigenfunctions of $M$ with maximal Fourier coefficients are eigenfunctions of $H$. Otherwise its Fourier coefficients are spread out too much.
\begin{prob} What are necessary and sufficient conditions that the restriction of a $\Delta_M$ eigenfunction is a $\Delta_H$ eigenfunction? \end{prob}
It is likely that this problem has been studied before, but the authors were unable to find a reference. The known examples all seem to involve separation of variables.
Examples where $\gamma_H \varphi_j$ is an eigenfunction of $H$ are standard exponentials $e^{i x \cdot \xi}$ on a flat torus, where $H$ is a totally geodesic sub-torus. Other examples are the standard spherical harmonics $Y_N^{\vec m}$ on the $n$-sphere ${\mathbb S}^n$, where $H$ is a `latitude sphere' i.e. an orbit of a point under $SO(k)$ for some $k \leq n$. In Section \ref{SHARPSECT} we consider various examples on the standard spheres ${\mathbb S}^n$.
\subsection{Acknowledgments} Xi was partially supported by National Key R\&D Program of China No. 2022YFA1007200, National Natural Science Foundation of China No. 12171424. Wyman was partially supported by National Science Foundation of USA No. DMS-2204397 and by the AMS Simons travel grants. Zelditch was supported by National Science Foundation of USA Nos. DMS-1810747 and DMS-1502632.
\section{Sharpness of the remainder estimates} \label{SHARPSECT}
In this section, we give examples illustrating the sharpness of the remainder estimate of Theorem \ref{main 3} and the behavior of the jumps \eqref{JDEF} and the estimate of Corollary \ref{JUMPCOR}. We also illustrate the cleanliness issues in Definition \ref{CLEAN} with some examples that explain the reasons for assuming that $0 < c < 1$. The main example of jump behavior is that of totally geodesic or of latitude spheres ${\mathbb S}^d \subset {\mathbb S}^n$ in standard spheres. We postpone the discussion of these examples to \cite{WXZ+}.
As mentioned above, the behavior of remainders and jumps depends on the periodicity properties of $G^t_M$ and $G^s_H$. We begin by discussing the role of periodicities in spectral asymptotics.
\subsection{Spectral clustering, jump behavior and periodicity of geodesic flows}\label{CLUSTERSECT}
There is a well-known dichotomy among geodesic flows and Laplace spectra which plays an important, if implicit, role in the
Kuznecov-Weyl asymptotics. Namely, if the geodesic flow $G^t_M$ of $(M,g)$ is periodic in the sense that $G_M^T = {\rm Id}$ then the spectrum of $\sqrt{-\Delta_M}$
clusters along an arithmetic progression $\{ \frac{2\pi}{T} (k + \frac{\beta}{4}): k \in {\mathbb N}\}$ where $T$ is the minimal period and $\beta$ is the common Morse index of the closed geodesics. On the other hand, if the geodesic flow is ``aperiodic'' in the sense that the
set of closed geodesics has Liouville measure zero in $S^*M$, then the spectrum is uniformly distributed modulo one. We refer to
\cite{DG75} for the original theorem of this kind and to \cite{Zel17} for further background. There also exist intermediate cases with
a positive measure but not a full measure of closed geodesics.
The principal term of Theorem \ref{main 2} (and subsequent theorems) does not depend on whether the eigenvalues cluster or are uniformly distributed, but the remainder terms and jump formulae do. In the examples of subspheres $H = {\mathbb S}^d \subset M = {\mathbb S}^n$, both $G^t_M $ and $G^t_H$ are periodic and both Laplacians have spectral clustering. Indeed, the eigenvalues of $\sqrt{-\Delta_{{\mathbb S}^n}}$ concentrate along the arithmetic progression $\{N + \frac{n-1}{2}\}$ and have multiplicities of order $N^{n-1}$. As discussed in detail in Section \ref{SHARPSECT}, this causes huge jumps in the $\lambda$ aspect at eigenvalues of $\sqrt{-\Delta_{{\mathbb S}^n}}$. Furthermore, the equation $\mu_k = c \lambda_j$ for fixed $(\lambda_j, c)$ can have many solutions when $\sqrt{-\Delta}_H$ has spectral clustering, i.e. when $G^t_H$ is periodic. The number of solutions depends on the relation between the periods and therefore on $c$.
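Indeed, the eigenvalues of $\sqrt{-\Delta_{{\mathbb S}^n}}$ are $\sqrt{N(N+n-1)}$, and an elementary Taylor expansion gives
$$ \sqrt{N(N+n-1)} = N + \frac{n-1}{2} - \frac{(n-1)^2}{8N} + O(N^{-2}), \qquad N \to \infty, $$
so the spectrum concentrates in intervals of width $O(N^{-1})$ around the arithmetic progression $\{N + \frac{n-1}{2}\}$.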
To be more precise, the Kuznecov-Weyl asymptotics are determined by the dimension of the `fixed point set' \eqref{EQs=0}. For general $(n,d)$ this is not literally the fixed point set of a flow, since the relevant `joint flow' at $t=0$ is the family of maps (depending on $s$),
$$G_H^{ - s} \circ \pi_H \circ G_M^{c s} (q, \xi): S^c_H M \to B^*H, \;\; B^*H = \{(q, \eta) \in T^*H: |\eta|_H \leq 1\}$$ between different spaces.
In the special case where $H$ is an oriented hypersurface, i.e. $\dim H = \dim M-1$, this joint flow may be considered a double-valued flow on $B^*H$. As in \cite{TZ13}, we can define lifts $\xi_{\pm}: B^* H \to S^*_H M$ where $\xi_{\pm}(q, \eta) $ are the two unit covectors (on opposite sides of $T^*H$) that lift $(q, \eta)$ in the sense that $\pi_H \xi_{\pm}(q, \eta) = (q, \eta)$. Then \eqref{EQs=0} is the fixed point equation at $t=0$ of the double-valued flow, $$G_H^{ - s} \circ \pi_H \circ G_M^{c s}\; \xi_{\pm} (q, \eta): B^*H \to B^*H.$$ When studying the singularities at $t \not=0$ one has the equation \eqref{EQ}, which in the hypersurface case is the equation, \begin{equation} \label{LIFT} G_H^{ - s} \circ \pi_H \circ G_M^{cs + t} \xi_{\pm} (q, \eta ) = (q, \eta). \end{equation} When $c < 1$ and $t=0$, the fixed point set has maximal dimension for $s \not= 0$ (i.e. if the principal component is non-dominant in the sense of Definition \ref{DOMDEF}) only if the double-valued flow is the identity map at time $s$. The main example occurs when both $G^{-s} _H$ and $G^t_M$ are periodic and $c$ is such that they have a common period. When $c=1$ and $H$ is totally geodesic, this equation holds trivially for all $s$ and all $(q, \eta)$. Periodicity of $G_H^t$ is a necessary and sufficient condition to obtain singularities at times $t \not= 0$ as strong as the one at $t=0$ in the case $c=1$ and $H$ totally geodesic. {In the language of \cite{TZ13}, there is a first return map $\Phi^c: S^c_H M \to S^c_H M$ defined by following geodesics with initial data in $S^c_H M$ until they return to $S^c_H M$. The first return time to $S^c_H M$ is denoted by $T_H^c$ and $\Phi^c = G_M^{T^c_H} $. Since $S^c_H M$ has codimension $> 1$ in $S^*M$, the first return time may be infinite on a large subset of $S^c_H M$. We have not formulated the results in terms of $T_H^c$ or $S^c_H M$, but they are implicitly relevant to the main results. We refer to Section \ref{SURFOFREV} for the example of convex surfaces of revolution, where $T_H^c$ is a constant when $H$ is an orbit of the rotation group. }
For submanifolds $H$ of codimension $> 1$, the double-valued lift generalizes to a correspondence taking $\eta \in B^*_y H$
to a sphere $S^{n-d-1}$ of possible covectors $\eta + \sqrt{1 - |\eta|^2}\, \nu \in S^*_H M$ projecting to $\eta$, as $\nu$ varies over the unit normals $S N^*_y H$. One may still think of \eqref{LIFT} as the fixed point equation for a symplectic correspondence rather than a flow.
{Flat tori also exhibit certain kinds of periodicities. Suppose that $H = \{x_1 = 0\}$ is a totally geodesic coordinate slice of the flat torus
${\mathbb R}^n/{\mathbb Z}^n$. The geodesic flow is $G^t(x, \xi) = (x + t \frac{\xi}{|\xi|}, \xi)$ and it leaves invariant the tori $T_{\xi} = \{(x, \xi): x \in {\mathbb R}^n/{\mathbb Z}^n\}
\subset T^* {\mathbb R}^n /{\mathbb Z}^n$. The coordinate slice defines a transversal to the Kronecker flow on each $T_{\xi}$. Fixing $|\pi_H \xi | = c$ forces
$|\xi_1| = \sqrt{1 - c^2}$. The first return time of $(x, \xi) \in S^c_{x_1 = 0} {\mathbb R}^n/{\mathbb Z}^n$ to the slice on $T_{\xi}$ is the time $t$ so that $t |\xi_1| = 1$, i.e. $t= (1 - c^2)^{-{\frac{1}{2}}}$. Thus, the return time is independent of the invariant torus, and one has periodicity of the return to $S^c_H M$ even though the geodesic flow of $M$ fails to be periodic. }
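As a quick numerical illustration of this periodicity (a sketch for the reader, not used anywhere in the arguments; all numerical parameters are ad hoc), the following Python snippet follows the linear flow and locates the first return to the slice $\{x_1 = 0\}$; the computed return time matches $(1-c^2)^{-1/2}$ and does not depend on the tangential data.
\begin{verbatim}
# Sketch: first return of a torus geodesic to the slice {x_1 = 0}.
# A unit covector with tangential projection of length c has
# |xi_1| = sqrt(1 - c^2); the first return time should be 1/|xi_1|.
import numpy as np

def first_return_time(xi1, dt=1e-4, t_max=10.0):
    t = np.arange(dt, t_max, dt)
    # the geodesic crosses x_1 = 0 mod 1 when t*|xi_1| passes an integer
    cross = np.nonzero(np.diff(np.floor(t * abs(xi1))) > 0)[0]
    return t[cross[0] + 1] if cross.size else np.inf

for c in (0.3, 0.6, 0.9):
    xi1 = np.sqrt(1 - c ** 2)
    print(c, first_return_time(xi1), (1 - c ** 2) ** -0.5)
\end{verbatim}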
It is plausible from \eqref{JDEF} that $J_{\epsilon, H}^c(\lambda_j)$ should attain its maximal size when the multiplicity of $\lambda_j$ is maximal and when there is clustering of the $\sqrt{-\Delta_H}$-spectrum $\{\mu_k\}$ around $c\lambda_j$, forcing both geodesic flows $G_M^t$ and $G_H^t$ to be periodic. But the jump depends on the sizes of the Fourier coefficients as well as
the spectrum.
To understand the general picture of jumps and remainder estimates, the reader may keep in mind some examples in the simplest case where $\dim M =2$ and $H$ is a geodesic. Periodicity of $G^s_H$ coincides with $H$ being a closed geodesic. Let us consider four examples (see Section \ref{SHARPSECT} for further discussion): (i) $M = {\mathbb S}^2$ and $H = \gamma$ is the equator; (ii) $M $ is a convex surface of revolution and $H = \gamma$ is the equator (see Section \ref{SURFOFREV}); (iii) $M$ is a non-Zoll surface of revolution in the shape of a `peanut', i.e. it has a hyperbolic closed geodesic `waist'
$\gamma$ and top and bottom convex parts, each containing a unique elliptic closed geodesic; (iv) $\gamma$ is a closed geodesic of a hyperbolic surface. In all cases, the geodesic flow of $H$ is periodic. In case (i) the geodesic flow of $M$ is periodic, while in case (ii) it is not. In case (i) the multiplicity of the $N$th eigenvalue is $2N-1$, while in case (ii) all eigenvalues have multiplicity $\leq 2$. Yet both (i)-(ii) have Gaussian beams along $\gamma$, and when $c=1$ they are the only eigenfunctions contributing to the Kuznecov-Weyl asymptotics; hence the asymptotics are the same in both cases. Case (iii) is different in that $\gamma$ is now hyperbolic and there do not exist standard Gaussian beams along it, but there does exist an eigenfunction which concentrates on $\gamma$ due to the fact that this example is quantum completely integrable. To our knowledge, the $L^2$ norm of its restriction (or equivalently, its Fourier coefficient with $|n| = \lambda$) has not been determined. In case (iv) there should not exist any such concentrating eigenfunctions. One would expect at least logarithmic improvements on the Fourier coefficient bounds, as in the case where $c=0$ (see \cite{WX, SXZh17, CG19}).
\subsection{Review of results on $L^2(H)$ norms of restrictions}\label{BGTREV}
Before discussing examples, we compare the results of Theorem \ref{main 3} with prior results on $L^2$ norms of restrictions \cite{BGT}. In the notation of \cite{BGT}, the estimates take the form,\footnote{The notation in \cite{BGT} is $\dim M = d, \dim H = k$} $$
\|\varphi_{\lambda}\|_{L^2(H)} \leq C (1 + \lambda)^{\rho(d, n)} \sqrt{\log \lambda} \|\varphi_{\lambda}\|_{L^2(M)} $$ where $$ \rho(d,n) = \begin{cases}
\frac{n-1}{4} - \frac{n-2}{4} = \frac{1}{4}, & d = n-1, \\
\frac{1}{2}, & d= n-2, \\
\frac{n-1}{2} - \frac{d}{2}, & 1 \leq d \leq n-3 \end{cases} $$ and where the $\sqrt{\log \lambda}$ in the bound can be removed if $d \neq n-2$. (See also \cite{Hu}.)
The problem of finding extremals for restricted $L^2$ norms on submanifolds is studied in \cite{BGT}. It is shown that extremals vary between Gaussian beams and zonal spherical harmonics depending on the pair $(n, d)$. The most difficult case is where $d= n -2$.
When $\dim M = 2, \dim H=1, c=1$ and $H$ is totally geodesic, the estimates on $\|\gamma_H \varphi_j\|_{L^2(H)}^2$ and \eqref{JDEF} can be the same (see \cite{WXZ+}). But for $\dim M > 2$,
the estimates on individual norms are significantly smaller, illustrating that \eqref{JDEF} is an average and that, when
$\gamma_H \varphi_j$ is not an eigenfunction of $H$ for every $j$, the Kuznecov-Weyl sums are of a different nature from $L^2$-norms of restrictions.
In general, the sum \eqref{JDEF} is a very thin sub-sum of \eqref{PLANCHEREL}
and \eqref{c} is a very thin sub-sum of the Weyl type function for restricted $L^2 $ norms,
\begin{equation} \label{L2DEF} N_{L^2(H)}(\lambda): = \sum_{j: \lambda_j \leq \lambda} \int_H |\gamma_H \varphi_j|^2 d V_H. \end{equation}
Further details are given in the examples below.
\subsection{Examples illustrating different types of Fourier coefficient behavior}\label{SECTEX} As mentioned above, the jumps \eqref{JDEF} are
averages over modes of $H$, and also involve sums over repeated eigenvalues of $M$ and $H$ when there exist multiple eigenvalues.
We now list some of the issues involved in relating remainder estimates on \eqref{JDEF} to estimates on individual Fourier coefficients of
individual eigenfunctions. The issues are illustrated on standard spheres ${\mathbb S}^n$ in Section \ref{2Sphere}. \begin{itemize} \item Multiplicity issues: The eigenspace ${\mathcal H}(\lambda_j)$ may have a large dimension $m(\lambda_j)$, so that \eqref{JDEF} is an $m(\lambda_j)$-fold sum over an orthonormal basis of eigenfunctions of ${\mathcal H}(\lambda_j)$. See Section \ref{2Sphere} for the example of standard spheres ${\mathbb S}^n$.
Also, the $\sqrt{-\Delta}_H$-eigenspaces may have large dimension, so that for each $\lambda_j$, the sum over $\mu_k$ in \eqref{JDEF} runs over many `Fourier coefficients'. This again is illustrated by sub-spheres of spheres (Section \ref{2Sphere}).
\item Fourier sparsity of restricted eigenfunctions: It may occur (and does in the case of latitude circles of ${\mathbb S}^2$) that $\Delta_M$ has a sequence of eigenspaces of high multiplicity but, for each mode $\psi_k$ of $H$ and $\lambda$ in the spectrum of $\sqrt{-\Delta_M}$, there exists a single eigenfunction $\varphi_{j}$ in a given orthonormal basis of the $\lambda$-eigenspace with a non-zero $k$th Fourier coefficient $\langle \varphi_{j}, \psi_k \rangle$. Alternatively, for each eigenfunction $\varphi_j$ in the eigenbasis for $L^2(M)$, there might exist a single $\psi_{k}$ for which the Fourier coefficient is non-zero. An extreme (and interesting) case occurs when the restriction $\gamma_H \varphi_j$ of an eigenfunction of $M$ is an eigenfunction of $H$ (it is unknown when this occurs; see Section \ref{EIGRESEIG}).
\item Codimension of $H$.
The higher the dimension of $H$, the higher the number of eigenvalues
$\mu_k: |\mu_k - c \lambda_j| \leq \epsilon $, hence the greater amount of averaging in \eqref{JDEF} for fixed $\varphi_j$. In the extreme case of curves, $\dim H =1$, the $\sqrt{-\Delta_H}$-spectrum is an arithmetic progression with large gaps, and for $\epsilon$ sufficiently small, the sum over $k$ might have just one element $\mu_k$. This eliminates the $H$-multiplicity aspect. However, there can be many $\Delta_M$-eigenfunctions which restrict to the same (up to scalar multiple) eigenfunction of $H$; see Section \ref{2Sphere}.
The results of \cite{BGT} reviewed in Section \ref{BGTREV} show that the exponents in the $L^2$ bounds on restrictions of eigenfunctions decrease linearly with the dimension of $H$. This in some sense balances the additional growth rate of eigenvalues as the dimension of $H$ increases. This issue is only relevant for $c = 1$.
\item Uniformity of Fourier coefficients. Another interesting scenario, which probably holds for compact hyperbolic surfaces at least,
is where the Fourier coefficients $|\langle \varphi_j, \psi_k \rangle_{L^2(H)}|$ are uniform in size as $\mu_k$ varies in the `allowed window' where $|\mu_k| < \lambda_j$. This is the opposite scenario from Fourier sparsity.
\end{itemize}
The sparsity phenomenon is illustrated in Section \ref{2Sphere} for the standard 2-sphere ${\mathbb S}^2$. In the case where $\gamma_H \varphi_j = c_{j,k} \psi_k$
for some $(j,k)$, one has $|\langle \gamma_H \varphi_j, \psi_k \rangle|^2 = |c_{j,k}|^2 = \|\gamma_H \varphi_j\|_{L^2(H)}^2$.
\subsection{Restrictions to curves in a convex surface of revolution in ${\mathbb R}^3$} \label{SURFOFREV}
In this section, we illustrate some of the possible types of Fourier coefficient behavior in the case where $H$ is a latitude circle (an orbit of the rotational action around the third axis) of a convex surface of revolution $({\mathbb S}^2, g)$ in ${\mathbb R}^3$ and for the joint eigenfunctions
$\varphi^{m}_{\ell}$ of the Laplacian and of the generator $\frac{\partial}{\partial \theta} $ of
rotations around the $x_3$ axis in ${\mathbb R}^3$. Much of this material can be found in \cite{WXZ20} for the standard metric on ${\mathbb S}^2$.
The geodesic flow of $({\mathbb S}^2, g)$ is completely integrable, since rotations commute with the geodesic flow. The Hamiltonian $|\xi|_g$ of the geodesic flow Poisson commutes with the angular momentum, or Clairaut integral,
$p_{\theta}(x, \xi) = \langle \xi, \frac{\partial}{\partial \theta} \rangle = |\xi|_g \left|\frac{\partial}{\partial \theta} \right|_{H_{\varphi_0} } \cos \angle (\frac{\partial}{\partial \theta}, \dot{\gamma}_{x, \xi}(0)), \;\; (x, \xi) \in \dot{T}_x^*{\mathbb S}^2. $
The moment map for the joint Hamiltonian action is defined by $\mathcal{P}: = (|\xi|, p_{\theta}): T^*{\mathbb S}^2 \to {\mathbb R}^2$.
A level set $ \Lambda_{a} = \mathcal{P}^{-1}(1, a) \subset S^* {\mathbb S}^2$ is a Lagrangian torus when $a \not= \pm 1$ and is the equatorial (phase space) geodesic when $a = \pm 1$. A ray or ladder in the image of the moment map $\mathcal{P}$ is defined by $\{(E, m): \frac{m}{E} = a\} \subset {\mathbb R}^2_+$, and its inverse image under $\mathcal{P}$ is ${\mathbb R}_+ \Lambda_{a} \subset T^* {\mathbb S}^2$.
If $(\theta, \varphi)$ denote spherical coordinates with respect to $(M,g)$ (i.e $\varphi$ is the distance from the north pole, $\theta$ is the angle of rotation from a fixed meridian), then an orbit of the rotation action is a latitude circle $H_{\varphi_0}$ with fixed $\varphi = \varphi_0$. We denote by $\frac{\partial}{\partial \varphi}$ the unit vector field tangent to the meridians.
The parameter $c$ is related to the values of $p_{\theta}$ by the formula, \begin{equation} \label{Tc} \frac{|p_{\theta}(x, \xi)|}{|\xi|} = c \left| \frac{\partial}{\partial \theta} \right|_{H_{\varphi_0}}, \;\;\; (x \in H_{\varphi_0}). \end{equation}
To see this, let $u_{\theta}(\theta, \varphi) : = \left| \frac{\partial}{\partial \theta} \right|^{-1}_{H_{\varphi}} \;\frac{\partial}{\partial \theta} $ and let $u_{\theta}^*, u_{\varphi}^*$ be the dual unit coframe field. The orthogonal projection from $T_{H_{\varphi_0}} {\mathbb S}^2 \to T^* H_{\varphi_0} $ is given by $ \pi_{H_{\varphi_0}}(x, \xi) = \langle \xi, u_{\theta} \rangle u^*_{\theta},$ and \eqref{Tc} follows.
The reason that the parameter $c$ is not the usual ratio $\frac{p_{\theta}(x, \xi)}{|\xi|}$ is that we choose the operator on $H$ to
be $\sqrt{-\Delta_H}$ rather than $\frac{\partial}{\partial \theta}.$
We now show that the first return time $T^c_H$ defined in Section \ref{CLUSTERSECT} is a constant when $H$ is a latitude circle of a surface of revolution. This is because the ratio in \eqref{Tc} between $c$ and $p_{\theta}$ is constant on a latitude circle. Since $p_{\theta}$ is constant along geodesics, an initial vector in $S^c_{H_{\varphi_0}} {\mathbb S}^2$ at time zero will return to $S^c_H {\mathbb S}^2$ each time the geodesic returns to $H$. Moreover, in the setting of curves on surfaces, $S^*_{H_{\varphi_0}} {\mathbb S}^2$ is a cross section to the geodesic flow, hence the first return time to $S^*_{H_{\varphi_0}} {\mathbb S}^2$ is finite almost surely. Given that $p_{\theta}$ is constant on orbits, it follows that $T_{H_{\varphi_0}}^c$ is constant as well.
The flow on $H_{\varphi_0}$ is of course periodic as well, with period equal to the length $L$ of $H_{\varphi_0}$. It follows that the equation \eqref{LIFT} can have fixed point sets of maximal dimension when $t \not= 0$ on a convex surface of revolution, despite the fact that the geodesic flow itself is not periodic. Indeed, $G_H^{ - s} \circ \pi_H \circ G_M^{cs + t} (q, \eta ) = (q, \eta)$ for any $(q, \eta) \in S^c_{H_{\varphi_0}}{\mathbb S}^2$ if $t = T_{H_{\varphi_0}}^c$.
We now introduce notation for quantum ladders. Let $\varphi_{\ell}^m$ be the standard orthonormal basis of joint eigenfunctions of $\Delta$ and of the generator $\frac{\partial}{\partial \theta}$ of rotations around the third axis. The orthonormal eigenfunctions of $H_{\varphi_0}$ are given by $\psi_m(\theta) = C_{\varphi_0} e^{im \theta}$ where $C_{\varphi_0} = L(H_{\varphi_0})^{-{\frac{1}{2}}}$. Hence, the Fourier coefficients \eqref{FCDEF} are constant multiples of the Fourier coefficients relative to $\{e^{i m \theta}\}$.
It follows that the $m$th Fourier coefficient of $\varphi_{\ell}^m$ is its only non-zero Fourier coefficient along any latitude circle $H_{\varphi_0}$, and that $|\langle \gamma_{H_{\varphi_0}} \varphi_{\ell}^m, \psi_m \rangle|^2 = \|\gamma_{H_{\varphi_0}} \varphi_{\ell}^m\|^2_{L^2(H_{\varphi_0})}$.
On the quantum level, a ray corresponds to a `ladder' $\{\varphi_{\ell}^m\}_{\frac{m}{\ell} = a}$ of eigenfunctions. The possible Weyl-Kuznecov sum formulae for latitude circles $H = H_{\varphi_0}$ thus depend on the two parameters $(\varphi_0, \frac{m}{\ell})$. The first corresponds to a latitude circle, the second to a ladder in the joint spectrum. It is better to parametrize the ladder as $\frac{\mu_m}{\ell} = c $ as discussed above.
\subsubsection{The standard ${\mathbb S}^2$ \label{2Sphere}}
The standard sphere $({\mathbb S}^2, g_0)$ is of course a special case of a surface of revolution, and the joint eigenfunctions $\varphi_N^m$ are denoted by $Y_N^m$. The special feature of the standard sphere is that its geodesic flow is periodic and the eigenspaces of the Laplacian have dimensions $2N+1$. This gives it the special properties discussed in the next subsection.
\subsubsection{Fourier sparsity phenomena} \label{S2SPARSE} In the case of ${\mathbb S}^2$, we slightly re-adjust the definition of $\sqrt{-\Delta} $ to $\sqrt{-\Delta + \frac{1}{4}}-{\frac{1}{2}}$, whose eigenvalues are $\lambda_N = N$. Also, $\mu_m = m \in {\mathbb Z}$. Suppose that
$c \in {\mathbb Q}_+$ and write it in lowest terms as $c = \frac{p}{q}$ with $(p,q) =1$. Then let $\epsilon > 0$ and consider the set $\{(m, N): |m - \frac{p}{q} N | < \epsilon \} = \{(m, N): |\frac{m}{N} - \frac{p}{q} | < \frac{\epsilon}{ N}\}$. Roughly, this is the set of lattice points inside a strip of vertical width $2\epsilon$ around the ray through $(0,0)$ of slope $\frac{p}{q}$. Of course, the lattice points $\{k(p, q), k \in {\mathbb N}\}$
lie in the strip. But for any other lattice point, $ |\frac{m}{N} - \frac{p}{q} | = |\frac{mq - N p}{N q} | \geq \frac{1}{N q}$, so there are no solutions aside from the lattice points on the rational ray if $\epsilon < \frac{1}{q}$. Moreover, the possible non-zero `gaps' $\{m - \frac{p}{q} N\} $ in this example have absolute value $\geq \frac{1}{q}$. Hence, when $n =2, d=1$, the remainder terms in Theorem \ref{main 2} and elsewhere only sum over one eigenvalue of $H$, and the magnitude of \eqref{JDEF} is the magnitude of the extremal Fourier coefficient of a restricted eigenfunction.
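The following small Python computation (an illustration only; the bound $\epsilon < \frac1q$ and the ray are as above, while the cutoff $N_{\max}$ is an arbitrary choice) enumerates the lattice points in the strip and confirms that only the multiples of $(p,q)$ survive.
\begin{verbatim}
# Sketch: lattice points (m, N) with |m - (p/q) N| < eps, eps < 1/q.
from math import gcd

p, q, eps, N_max = 3, 5, 0.19, 60      # eps < 1/q = 0.2
assert gcd(p, q) == 1
hits = [(m, N) for N in range(1, N_max + 1)
        for m in range(0, N + 1)
        if abs(m - (p / q) * N) < eps]
print(hits)                            # only (3k, 5k): (3, 5), (6, 10), ...
assert all(N % q == 0 and m == (N // q) * p for (m, N) in hits)
\end{verbatim}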
\subsubsection{$H$ is a closed geodesic of ${\mathbb S}^2$ and $c<1$}\label{LEGENDREc<1} Let $M = {\mathbb S}^2$ and let $H$ be a closed geodesic of ${\mathbb S}^2$. It is always the case that $\dim \mathcal{G}_c^{0,0} = \dim S_c^*H = 1$ in the case of ${\mathbb S}^2$.
In the rest of this section, we assume $0 < c < 1$.
For concreteness, suppose that $H$ is a meridian through the north pole $p$. Then for any $\xi \in S^*_p {\mathbb S}^2$, $G_{{\mathbb S}^2} ^{\pi} (p, \xi) \in S^*_H {\mathbb S}^2$ and $\exp_p (\pi \xi) = - p$. The same holds for any $p$ on the meridian geodesic. In this case, $\mathcal{G}_c^{0} = \mathcal{G}_c^{0,0}$ if $H$ is totally geodesic and $c < 1$. There exist $(c, s, 0)$ bi-angles if $s$ is a common period for $G_H^s$ and $G_M^{cs}$. For fixed $c < 1$, the $L^2$ norms of the restrictions to $H$ of spherical harmonics $Y^m_{\ell}$ with $\frac{m}{\ell} \simeq c < 1$ are uniformly bounded above, and therefore so are their $m$th Fourier coefficients. This is consistent with Corollary \ref{JUMPCOR}. On the other hand, $\mathcal{G}_c^{0,0}$ is non-dominant due to periodicity of the geodesic flow, and one cannot improve the remainder estimates.
We refer to \cite{Geis} for a recent study of how the restricted $L^2$ norms vary with $c$.
On the other hand, if we restrict $Y^m_{\ell}$ to a meridian geodesic, then all the Fourier coefficients in the range $[-\ell, \ell]$ can be non-zero. We now show that the squares $ \left| \int_H Y_N^0 (\varphi) e^{- i \frac{N}{2} \varphi} d \varphi \right|^2 $ of the Fourier coefficients of the zonal spherical harmonic along a meridian geodesic are bounded above and below by positive constants, proving that Corollary \ref{JUMPCOR} is sharp for $(n=2, d =1)$.
Let $Y_N^0(\theta, \varphi) = \sqrt{2 N + 1}\, P_N(\cos \varphi)$ be the zonal spherical harmonic on ${\mathbb S}^2$, where $P_N$ is the Legendre polynomial normalized by $P_N(1) = 1$. Let $H$ be a meridian geodesic through the poles of $Y_N^0$. The Fourier coefficients of $\gamma_H Y_N^0$ are known explicitly \cite[(2.6)-(2.7a)-(2.7b)]{HP}. To quote one special identity, $$ P_N(\cos \varphi) = \sum_{k=0}^N p_k p_{N-k} \cos ((N - 2k) \varphi),
$$ where $p_j = 4^{-j} {2j \choose j}. $
Multiplying by $2N+1$ shows that the square of the $L^2$ norm of $\gamma_H Y_N^0$ is
comparable to a partial sum of the harmonic series, and hence is asymptotic to a constant multiple of $\log N + \gamma$, where $\gamma$ is Euler's constant (this calculation was first done in
\cite{T09} by a different method). On the other hand,
\begin{equation} \label{ZONALMER} \sum_{k: |k -c N| < \epsilon } \left| \int_H Y_N^0 (\varphi) e^{- i k \varphi} d \varphi \right|^2 = (2N + 1)\;
\sum_{k: |k - c N| < \epsilon} |p_{N-k} p_{N+k}|^2, \end{equation}
for any $\epsilon > 0$ and $c \in (0, 1]$. Since the number of terms in the sum is bounded for fixed $\epsilon > 0$, it suffices to calculate one term $|p_{N-k} p_{N+k}|^2$ asymptotically by Stirling's formula, and for simplicity of exposition we only calculate the middle case with $c = {\frac{1}{2}}$ and for $N= 2n$ even. Using that ${2 j \choose j} \simeq \frac{4^j}{\sqrt{\pi j}}$, so that $p_j \simeq (\pi j)^{-{\frac{1}{2}}}$,
$$p_{2 n-n} p_{2n+ n} = p_{n} p_{3n} \simeq C_0 \; n^{-1}> 0$$
for a certain constant $C_0 > 0$. Multiplying by $(2 N +1)$ shows that $ \left| \int_H Y_N^0 (\varphi) e^{- i \frac{N}{2} \varphi} d \varphi \right|^2 $ is asymptotically a positive constant, corroborating Corollary \ref{JUMPCOR}. Essentially the same calculation is valid for any $0 < c < 1$ (see \cite{Stan} for the relevant binomial asymptotics).
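The expansion of $P_N(\cos \varphi)$ quoted above is easy to test numerically. The following Python snippet (a sanity check for the reader, not part of the argument) verifies the identity to machine precision at a moderate degree.
\begin{verbatim}
# Sketch: check P_N(cos t) = sum_k p_k p_{N-k} cos((N-2k) t),
# with p_j = 4^{-j} * binom(2j, j).
import numpy as np
from math import comb
from scipy.special import eval_legendre

N = 12
p = [comb(2 * j, j) / 4 ** j for j in range(N + 1)]
t = np.linspace(0.0, np.pi, 200)
lhs = eval_legendre(N, np.cos(t))
rhs = sum(p[k] * p[N - k] * np.cos((N - 2 * k) * t) for k in range(N + 1))
print(np.max(np.abs(lhs - rhs)))       # ~ 1e-15
\end{verbatim}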
\subsubsection{Gaussian beam sequences: $c=1$ and $H$ is totally geodesic}
Another extremal scenario occurs when $c=1$ and $H$ is totally geodesic, where the classical ray occurs on the boundary of the moment map image. The corresponding ladder of eigenfunctions consists of the Gaussian beams, $ C_0 N^{\frac{1}{4} } (x_1 + i x_2)^N$, around the equator $\gamma$. The standard Gaussian beams (highest weight spherical harmonics) $\{Y_{N}^{N}\}_{N=0}^{\infty}$ are then a semi-classical sequence of extremals. Their restrictions to the equator $\varphi = \frac{\pi}{2}$ are equal to $C_0 N^{1/4} e^{i N \theta}$. The only non-zero Fourier mode is the $N$th, and that Fourier coefficient is of magnitude $N^{1/4}$. The growth rate of the Kuznecov-Fourier sum \eqref{c} is that of $\sum_{N: N \leq \lambda} N^{1/2} \simeq \lambda^{\frac{3}{2}} $, with remainder of order $\lambda^{1/2}$. This situation is studied systematically in \cite{WXZ+}.
\subsubsection{Caustic sequences }
We briefly mention the case where $c=1$ and $H$ has non-degenerate second fundamental form, although such cases are not studied in this article. In this case, there exist caustic effects which dominate the estimate of Fourier coefficients. The simplest example is the restriction of the standard
spherical harmonics $Y_N^m$ to non-geodesic latitude circles, where the Fourier coefficients of certain sequences blow up at the rate $N^{1/6}$.
Such caustic effects on restrictions of eigenfunctions will be investigated systematically in a future article.
\subsection{Higher dimensional spheres ${\mathbb S}^n$ }\label{SnSHARPSECT} We denote by $\Pi_N^{{\mathbb S}^n}(x, y)$ the degree $N$ spectral projection kernel on ${\mathbb S}^n$. To verify the sharpness of Corollary \ref{JUMPCOR}, we will need to use some explicit formulae for this kernel. We follow \cite{AH} for notation\footnote{ The pair $(d, n)$ in \cite{AH} corresponds to $(n+1, N)$ in this article.} and refer there for the proofs.
In a well-known way, we slightly change the definition of $\sqrt{-\Delta}$ to obtain operators $A^{{\mathbb S}^n}$ with integer eigenvalues: $$
A^{{\mathbb S}^n} = \sqrt{-\Delta^{{\mathbb S}^n} + \frac{(n-1)^2}{4}} - \frac{n-1}{2}, $$ whose eigenvalue on spherical harmonics of degree $N$ is exactly $N$. We replace $\{\lambda_j\}_{j=1}^{\infty}$ and $\{\mu_k \}_{k=1}^{\infty}$ by ${\mathbb N}$, denoting the eigenvalues of $A^{{\mathbb S}^n}$ by $\{N\}_{N=0}^{\infty}$ and those of $A^{{\mathbb S}^d}$ by $\{M\}_{M=0}^{\infty}$. Let $\Pi_N^{{\mathbb S}^n}$ denote the spectral projections onto the $N$th eigenspace ${\mathcal H}_N^{{\mathbb S}^n}$ of $A^{{\mathbb S}^n}$, i.e the orthogonal projection onto the space of spherical harmonics of ${\mathbb S}^n$ of degree $N$.
The submanifolds $H$ we consider are sub-spheres (with standard metrics) of ${\mathbb S}^n$. The notation for totally geodesic and latitude subspheres is as follows. In standard Euclidean coordinates on ${\mathbb R}^{n+1}$, the unit sphere is defined by $\sum_{j=1}^{n+1} x_j^2 = 1$. We define a latitude $d$-sphere of height $a$ to be the subsphere $$
{\mathbb S}^{n,d}(a) : = \{\vec x \in {\mathbb R}^{n+1}: \sum_{j=1}^{n+1} x_j^2 =1 , \;\; \sum_{j = d + 2}^{n+1} x_j^2 = a^2\}. $$ Henceforth, we drop the superscript $n$ when the dimension is understood.
$SO(d+1)$ acts by isometries on ${\mathbb S}^n$, and all latitude sub-spheres $H = {\mathbb S}^d$ are orbits of the action.
The sub-spheres ${\mathbb S}^d(a)$ are totally geodesic when $a = 0$ and are not totally geodesic if $a > 0$. In particular, if $d =1$ we obtain the closed geodesic, $$\gamma: = {\mathbb S}^{n,1}(0) : = \{\vec x \in {\mathbb R}^{n+1}: \sum_{j=1}^{n+1} x_j^2 =1 , \;\; \sum_{j=3}^{n+1} x_j^2 = 0\},$$ which is the intersection ${\mathbb S}^{n} \cap {\mathbb R}^2_{x_1, x_2} $ where ${\mathbb R}^2_{x_1, x_2} \subset {\mathbb R}^{n+1}$ is the plane $x_3 = \cdots = x_{n+1} = 0$.
The eigenspaces ${\mathcal H}_N^{{\mathbb S}^n}$ of $\Delta$ on ${\mathbb S}^n$ are spaces of degree $N$ spherical harmonics, i.e. restrictions to ${\mathbb S}^n$ of homogeneous harmonic polynomials on ${\mathbb R}^{n+1}$. We denote the orthogonal projection onto ${\mathcal H}_N^{{\mathbb S}^n}$ by \begin{equation} \label{PROJ}\Pi_N^{{\mathbb S}^n}: L^2({\mathbb S}^n) \to {\mathcal H}_N^{{\mathbb S}^n}. \end{equation} As is well-known, $D_N^n: = \dim {\mathcal H}_N^{{\mathbb S}^n} \sim C_n N^{n-1} $ (see e.g. \cite{StW}.)
The more interesting calculation occurs when we restrict $\Pi_N^{{\mathbb S}^{n}}(x, y)$ in both variables $x, y$ to the $SO(d+1)$-invariant coordinate sub-sphere ${\mathbb S}^d$ and sift out one `Fourier coefficient' of the sub-sphere, i.e. one degree of spherical harmonic. We denote the restriction (in both variables) of \eqref{PROJ} to ${\mathbb S}^d \times {\mathbb S}^d$ by \begin{equation} \label{RESNOT} \gamma_{{\mathbb S}^d(a)} \Pi^{{\mathbb S}^n}_N \gamma_{{\mathbb S}^d(a)}^* (x, y) = [\gamma_{{\mathbb S}^d} \otimes \gamma_{{\mathbb S}^d} \Pi_N^{{\mathbb S}^n} ](x, y), \;\;\; (x, y \in {\mathbb S}^d)
.\end{equation} When integrating over ${\mathbb S}^d$ it is clear that the variables are restricted to this submanifold and we drop the restriction operators $\gamma_{{\mathbb S}^d}$ for simplicity of notation.
For any $0 < c \leq 1$ and $N$, and for $\epsilon $ sufficiently small (depending on $c$), there exists at most one $M = M(N, c) $ satisfying $|M - c N| < \epsilon$. We always assume that $\epsilon$ is chosen this way. In fact, as discussed in Section \ref{S2SPARSE}, there might not exist any such $M$ for small $\epsilon$. To illustrate this, suppose $c={\frac{1}{2}}$. Then only even $N$ will contribute and $M = N/2$. If $N$ is odd, there does not exist any $M$ within $\epsilon$ of $c N$, so that \eqref{JDEF} is zero for odd $N$ and non-zero for even $N$.
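The parity phenomenon for $c = {\frac{1}{2}}$ can be displayed in a few lines of Python (an illustration only; $\epsilon = 0.2$ is an arbitrary admissible choice):
\begin{verbatim}
# Sketch: for c = 1/2 and eps < 1/2, only even N admit an integer M
# with |M - c N| < eps, so the jump (JDEF) vanishes for odd N.
eps, c = 0.2, 0.5
for N in range(1, 11):
    Ms = [M for M in range(N + 1) if abs(M - c * N) < eps]
    print(N, Ms)                       # even N -> [N // 2]; odd N -> []
\end{verbatim}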
We now give an explicit formula for the jumps \eqref{JDEF} when ${\mathbb S}^d \subset {\mathbb S}^n$ is any latitude subsphere. \begin{lemma}\label{JUMPSPHERE} Fix $c \in (0, 1]$, and suppose $(N, \epsilon)$ are such that one (and only one) degree $M(c, N)$ satisfies
$|M - c N| < \epsilon$. Then one has \begin{equation} \label{JDEFSn} J^{c} _{\epsilon, {\mathbb S}^d }(N) := \int_{{\mathbb S}^d} \int_{{\mathbb S}^d} \Pi_N^{{\mathbb S}^n} (x, y)\; \Pi_{M(c, N)} ^{{\mathbb S}^d}(x, y) d V_{{\mathbb S}^d}(x) dV_{{\mathbb S}^d}(y).
\end{equation} \end{lemma} \begin{proof} For each fixed tangential mode $\psi_k$, the sum over $\ell$ in \eqref{JDEF} takes the form, $$
\sum_{\ell : \lambda_\ell = N} \left| \int_{H} \varphi_{\ell} \overline{\psi_k}dV_H \right|^2 = \int_{{\mathbb S}^d} \int_{{\mathbb S}^d} \Pi_N^{{\mathbb S}^n} (x, y)\overline{\psi_k(x)} \psi_k(y) d V_{{\mathbb S}^d}(x) dV_{{\mathbb S}^d}(y), $$ and the sum over all $k$ giving the jump \eqref{JDEF} is given by \eqref{JDEFSn}. \end{proof} Thus, there exists at most one full projector $\Pi_{M(c, N)} ^{{\mathbb S}^d}$ of ${\mathbb S}^d$ contributing to the sum if $\epsilon < \frac{c}{2} $ is sufficiently small that each term of the arithmetic progression $\{c N\}_{N=1}^{\infty} $ is $\epsilon$-close to at most one integer $M(N,c)$. Note that $\gamma_{{\mathbb S}^d(a)} \Pi^{{\mathbb S}^n}_N \gamma_{{\mathbb S}^d(a)}^* (x, y)$ is not a spherical harmonic on ${\mathbb S}^d$ in either variable, but is a sum of harmonics of degrees $0, \dots, N$. The integrals in Lemma \ref{JUMPSPHERE} are special kinds of Clebsch-Gordan integrals. For $n=2$, the integrals are evaluated in \eqref{ZONALMER} when $H$ is a geodesic. To our knowledge, the integrals have not been studied for subspheres of higher dimensional spheres, although they are not hard if $d=1$ (see \cite{WXZ+}).
For general $n$ and $d=1$, we restrict to
$x_3 = x_4 = \cdots = x_{n+1} =0$. When $0<c<1$ is rational, we choose $M < N$ with $\frac{M}{N} = c$ and assume $\epsilon $ small enough so it is the unique integer
satisfying $|M - c N| < \epsilon$. The restricted Fourier coefficient is then, $$\int_{S^1} \int_{S^1} \Pi^{{\mathbb S}^{n}}_N(\langle x(\theta_1), y(\theta_2) \rangle) e^{- i M (\theta_1 - \theta_2) } d\theta_1 d \theta_2.$$ Let $SO(2) \simeq S^1 \subset SO(n+1)$ denote the 1-parameter subgroup of rotations of the $(x_1, x_2)$-plane, so that the orbit of $e_1$ is the circle above. In the space $ {\mathcal H}_N^{{\mathbb S}^n}$, the integral sifts out the orthogonal projection $\Pi_N^{n, M}$ to the subspace ${\mathcal H}_N^M$ of spherical harmonics which transform by $e^{i M \theta}$ when translated by the circle action.
The integral equals $\gamma_{S^1} \Pi_N^{n, M}\gamma_{S^1}^*(x, x)$ where $x \in S^1$ is any point. Since $\dim {\mathcal H}_N^M({\mathbb S}^{n}) \simeq C_{n} N^{n-2}$ for fixed ratio $\frac{M}{N} = c$, its order of magnitude is $N^{n-2}$. This proves that the statement of Theorem \ref{main 3} is sharp for general $n$ and $d=1$.
For general $(n,d)$ with $d > 1$, explicit estimates for the Fourier coefficient sums of restrictions to latitude spheres for ${\mathbb S}^n$ when $n > 3$ are much more difficult by means of classical analysis (see Section \ref{LEGENDREc<1} for $n=2$).
\subsection{Flat tori}\label{FLATSECT}
Let $H$ be a $d$-dimensional coordinate plane in the $n$-dimensional torus with the usual eigenfunctions $\varphi_j(x) = \exp(i j \cdot x)$ and $\psi_k(x') = \exp(i k \cdot x')$. Here, $x = (x_1,...,x_n)=(x',x'')$ with $x' = (x_1,...,x_d)$ and $x'' = (x_{d+1},...,x_n)$. Then each $ |\langle \gamma_H \varphi_j, \psi_k \rangle_{L^2(H)}|^2$ is a power of $2\pi$ times the Kronecker delta $\delta_{j', k}$. The ladder sum now just counts the lattice points $j \in {\mathbb Z}^n$ such that $|c|j| - |j'|| < \epsilon$ and $|j| \leq \lambda$. When $0<c<1$, this region is asymptotic to an $\epsilon$-thickening of a (codimension 1) cone in ${\mathbb R}^n$ and has volume of order $\lambda^{n-1}$. Now run this construction again, except replace the sharp cutoff by a fuzzy cutoff. The main term of the ladder sum agrees (up to a constant and a lower order term) with the volume of this thickened cone, $\lambda^{n-1}$, which does not depend on $d$.
To illustrate the necessity of the hypotheses of Theorem \ref{main 3}, we consider the two-dimensional case. We write $j = (j_1,j_2) \in {\mathbb Z}^2$ and observe \[
N_{\epsilon,H}^c(\lambda) = (2\pi)^{-1} \#\{ j \in {\mathbb Z}^2 : |j| \leq \lambda, \ ||j_1| - c|j|| \leq \epsilon \}. \] The region capturing the lattice points in the set above, \begin{equation}\label{lattice ladder}
\{ \xi \in {\mathbb R}^2 : ||\xi_1| - c|\xi|| \leq \epsilon \}, \end{equation} has hyperbolas for boundaries and is asymptotically \[
\left\{ \xi \in {\mathbb R}^2 : \left|\sqrt{1 - c^2} |\xi_1| - c |\xi_2|\right| \leq \frac{\epsilon}{\sqrt{1 - c^2}} \right\}, \] the union of two strips of slope $\pm \frac{\sqrt{1 - c^2}}{c}$ and thickness $\frac{2\epsilon}{\sqrt{1 - c^2}}$. We conclude that the area of the region \eqref{lattice ladder} within the ball of radius $\lambda$ is asymptotic to \[
8 \epsilon (1 - c^2)^{-1/2} \lambda, \] which is consistent with the main term of Theorem \ref{main 3}. However, if the slope $c/\sqrt{1 - c^2}$ is rational, we may carefully select two different values of $\epsilon$ which yield exactly the same count of points for $N_{\epsilon,H}^c(\lambda)$. The difference in the main terms must be absorbed into the remainder. Hence, the improved remainder {in Theorem \ref{main 3}} may not be obtained in this setting.
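The count and the area can be compared directly. The following Python sketch (an illustration only; the values of $c$, $\epsilon$ and $\lambda$ are arbitrary choices, and the $(2\pi)^{-1}$ normalization is dropped) counts the lattice points in the region \eqref{lattice ladder} inside the ball of radius $\lambda$ and divides by $8\epsilon(1-c^2)^{-1/2}\lambda$; the ratio tends to $1$ up to lattice-point fluctuations.
\begin{verbatim}
# Sketch: lattice count in { | |j_1| - c|j| | <= eps, |j| <= lambda }
# versus the asymptotic area 8*eps*(1 - c^2)^(-1/2)*lambda.
import numpy as np

def ladder_count(lam, c, eps):
    r1 = np.arange(-int(lam) - 1, int(lam) + 2)
    j1, j2 = np.meshgrid(r1, r1)
    r = np.hypot(j1, j2)
    mask = (r <= lam) & (np.abs(np.abs(j1) - c * r) <= eps)
    return int(mask.sum())

c, eps = 0.5, 0.3
for lam in (100, 200, 400, 800):
    area = 8 * eps * lam / np.sqrt(1 - c ** 2)
    print(lam, ladder_count(lam, c, eps) / area)
\end{verbatim}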
\subsection{Hyperbolic quotients}\label{HYPERBOLIC}
The study of Fourier coefficients of restrictions of eigenfunctions on hyperbolic surfaces to closed geodesics, to horocycles, or to distance circles is a classical problem in automorphic forms.
To our knowledge, the only case studied rigorously to date is that of restrictions of cuspidal eigenfunctions $\psi$ of the modular curve ${\mathbb H}^2/SL(2, {\mathbb Z})$ to a closed horocycle $H_y$ of `height' $y$ \cite[Page 428]{Wo04}. This is a case where $c=1$ and caustic effects occur, and the estimates of \cite{Wo04} are of the same nature as the $N^{1/6}$ estimate for spherical harmonics in the caustic case. It seems likely that in the negatively curved case, one can improve such estimates by powers of $\log N$. Such effects will be studied in future work.
\subsection{Singularities for $t \not=0$ \label{SINGtnot0} } In this section, we discuss the existence of the two-term asymptotics introduced in Section \ref{REMAINDERSECT}.
To determine two-term asymptotics, it is necessary to allow the support of $\hat{\rho}$ to be any finite interval and to calculate the contribution of all sojourn times $t \in \Sigma^c(\psi)$ with fixed point sets of maximal dimension. Thus, we
need to determine the connected components of all $(c, s, t)$ bi-angles for general $t$, i.e. solve \eqref{BIANGLEDEF} for all $(s,t)$ with $s \in \mathrm{supp}\, \hat\psi$ and to locate the maximal components with $t \not=0$ of the same dimension as the principal component at $t=0$.
We now assume further that the set of $(c, s, t)$-bi-angles with $t = T \in {\rm singsupp}\; S^c(t, \psi) \backslash \{0\}$ is a union of clean components $Z_j(T)$ of dimension $d_j(T)$, where $Z_j(T)$ is a component of $ \mathcal{G}^T_c$. Then, for $t$ sufficiently close to $T$, there exist Lagrangian distributions $\beta_j$ on ${\mathbb R}$ with singularities only at $t=0$ such that, $$\begin{array}{l} S^c(t, \psi) = \sum_j \beta_j(t - T), \;\; \beta_j(t) = \int_{{\mathbb R}} \alpha_j(s) e^{- i s t} ds,\\ \\
\;\; \text{ with}\;\; \alpha_j(s) \sim (\frac{s}{2 \pi i})^{ -1 + {\frac{1}{2}} (n -d)+\frac{d_j(T)}{2}}\;\; i^{- \sigma_j} \sum_{k=0}^{\infty} \alpha_{j,k} s^{-k}, \end{array}$$ where $d_j(T) $ is the dimension of the component $Z_j(T)$. We refer to Section \ref{CLUSTERSECT} for background on the role of periodicity properties of $G_M^t$ and $G_H^s$ in the existence of maximal components $Z_j(T)$ for $T \not=0$, i.e. in whether ``fixed point sets'' defined by \eqref{EQ} and \eqref{LIFT} can be of the same full dimension as for the principal component at $t = 0$. When $c < 1$, $G_M^{cs + t}$ must take $S^c_q M \to S^c_q M$ for every $q \in H$, and moreover must map each set $S^c_{q, \eta} M: = \{\xi \in S^c_q M: \pi_H \xi = \eta\}$ for $(q, \eta) \in B^*H$ into itself.
\begin{proposition} \label{MORESINGS}
Let $\rho \in \mathcal{S}({\mathbb R})$ with $\hat{\rho} \in C_0^{\infty}$ and with $0 \notin {\rm supp} \hat{\rho}$. Assume that the bi-angle equation is clean in the sense of Definition \ref{CLEAN}, and let $\mathcal{S}_{\psi} = {\rm singsupp} S^c(t, \psi) \backslash \{0\}.$ Denote by $d_j$ the dimension of a component $Z_j$ of
$\mathcal{G}_c^{t} $ where $t$ is a non-zero period. Then, there exist constants $\beta_{\ell}(T) \in {\mathbb R}$ and a complete asymptotic expansion, $$ N^{c} _{\rho, \psi, H }(\lambda) \sim \lambda^{-1 + {\frac{1}{2}} (n -d) } \sum_{T \in \mathcal{S}_{\psi}} \sum_{\ell=0}^{\infty} \beta_{\ell}(T) \; \lambda^{\frac{d_j(T) }{2} -\ell}.$$ The asymptotics are of lower order than the principal term of Theorem \ref{main 2} (resp. Theorem \ref{main 5}) unless there exists a maximal component. \end{proposition}
To obtain two-term asymptotics of the type discussed in Section \ref{REMAINDERSECT}, one needs to specify the maximal components, and to calculate the associated $\beta_{\ell}$ and $\alpha_j$ in geometric terms. In effect, the function $\mathcal{Q}^c_{\psi, H}(\lambda)$ is a sum over the maximal components for all $t \not= 0$. Its calculation is postponed to a subsequent study.
\section{Fuzzy ladder projectors and Kuznecov formulae} In this section, we set up the main objects in the proof of Theorem \ref{main 4}. We use the terminology of
the `fuzzy ladders' of \cite{GU89} to describe the main operators and their canonical relations. However, there are some significant differences in that we consider `fuzzy' ladders with respect to two elliptic operators with erratically distributed eigenvalues, rather than with respect to a compact group such as $S^1$ in \cite{GU89} with a lattice of eigenvalues.
\subsection{Notation}
Since we are often dealing with operators on product spaces, we use the notation $f\otimes g$ for a function on $X \times Y$ of the product form $f(x) g(y)$. Linear combinations of such functions are of course dense in $L^2(X \times Y)$ and it suffices to define operators on product spaces on such product functions.
We introduce the two commuting operators on $M \times H,$ \[
P_M = \sqrt{-\Delta_M} \otimes I \qquad \text{ and } \qquad P_H = I \otimes \sqrt{-\Delta_H}, \] and denote an orthonormal basis of their joint eigenfunctions by \begin{equation}\label{def phi_j,k}
\varphi_{j,k} = \varphi_j \otimes \overline{\psi_k}. \end{equation} Thus, we have $$(P_M,P_H)\varphi_{j,k} = (\lambda_j,\mu_k)\varphi_{j,k}. $$ As discussed in \cite{GU89}, $P_M$ and $P_H$ are not quite pseudodifferential operators due to singularities in their symbols on $0_M \times \dot T^*H \cup \dot T^*M \times 0_H$, where $0_M$ (resp. $0_H$) denotes the zero section of $T^*M$ (resp. $T^*H$). These singularities lie far from the canonical relations determining the asymptotics and therefore may be handled by suitable cutoffs as in \cite{GU89}.
We then introduce the operators on $C^{\infty}(M \times H)$ defined by, \begin{equation} \label{PQ} \left\{\begin{array}{l} P : = P_M: = \sqrt{-\Delta}_M \otimes I
, \\ \\
Q_c:= c \sqrt{-\Delta}_M \otimes I - I \otimes \sqrt{-\Delta}_{H} = c P_M - P_H. \end{array} \right. \end{equation} The system $(P, Q_c)$ is elliptic; $Q_c $ is a non-elliptic first order pseudo-differential operator of real principal type with characteristic variety, \begin{equation}
\label{CHARQ} \operatorname{Char}(Q_c):= \{(x, \xi, q, \eta) \in \dot T^*M \times \dot T^*H: c |\xi|_g - |\eta|_{g_H} = 0\}. \end{equation}
As in \cite{GU89}, we are interested in the ``nullspace" of $Q_c$, i.e. its $0$-eigenspace. The corresponding pairs of eigenvalues of $(P_M, P_H)$ would concentrate along a ray of slope $c$ in ${\mathbb R}^2$. Except in rare situations with symmetry, there are at most finitely many such eigenvalue pairs, but there always exists an approximate or `fuzzy' null-space, intuitively defined by a strip around the ray or ladder in the joint spectrum of the pair $(P_M, P_H)$. A `ladder strip' as in \cite{GU89} is defined by a strip around a ray in the joint spectrum, $$
\{(\lambda_j, \mu_k): |\mu_k - c \lambda_j| \leq \epsilon \} \subset {\mathbb R}_+^2. $$ It is usually difficult to study a strip directly, and in place of the indicator function of a strip one constructs Schwartz test functions which concentrate in the strip and are rapidly decaying outside of it in ${\mathbb R}^2$. When one uses such a test function rather than an indicator function one gets a `fuzzy ladder.'
Such ladders, sharp or truly fuzzy, arise when one studies eigenvalue ratios $\frac{\mu_k}{\lambda_j}$ in intervals of width $O(\lambda_j^{-1})$. These are very short intervals, and it is much more difficult to obtain asymptotics for such short intervals than for intervals of constant width. To this end, one can study wedges (or cones) $$
\{(\lambda_j, \mu_k): | \frac{\mu_k}{\lambda_j} - c| \leq \epsilon\} \subset {\mathbb R}_+^2 $$ around the ray of slope $c$ in ${\mathbb R}_+^2$ in the joint spectrum. See Section \ref{RELATED} for the corresponding Weyl sums. Asymptotics for cones and some slowly thickening ladders were obtained in \cite{WXZ20}, as well as for ladders which are sufficiently fuzzy as in Theorem \ref{main 2}. In this article, the aim is to obtain improved asymptotics for both sharp and fuzzy ladders.
\subsection{Fuzzy ladder projectors} To prove Theorem \ref{main 4} by Fourier integral operator methods, we need to smooth out the projection operators onto the spectral subspaces corresponding to these ladders. We therefore introduce $\psi \in \mathcal{S}({\mathbb R})$ with $\hat{\psi} \in C_0^{\infty}({\mathbb R})$ and define, \begin{equation} \label{FLP}
\begin{split}
\psi(Q_c) &: L^2(M \times H) \to L^2(M \times H), \\
\psi(Q_c) &= \frac{1}{2\pi} \int_{{\mathbb R}} \hat{\psi}(s) e^{i s Q_c} ds = \sum_{j,k} \psi(\mu_k - c \lambda_j) \varphi_{j,k} \otimes \varphi_{j,k}^*.
\end{split} \end{equation} Here, $\varphi_{j,k}^*$ denotes the linear functional dual to $\varphi_{j,k}$ in $L^2(M \times H)$, and hence $\varphi_{j,k} \otimes \varphi_{j,k}^*$ is the rank-$1$ projector onto the line spanned by $\varphi_{j,k}$. $\psi(Q_c)$ is sometimes denoted $\Pi_c$ to emphasize that it is an approximate projector.
The operator \eqref{FLP} is a smoothing of the sharp ladder projection
of $L^2(M \times H)$ onto the span of the joint eigenfunctions for which $| \mu_k - c \lambda_j | \leq \epsilon$ for some $\epsilon > 0$. It is a Fourier integral operator of real principal type, whose properties we now review. The characteristic variety of $Q_c$ is the hypersurface in $\dot T^* M \times \dot T^*H$ defined by \eqref{CHARQ}.
Its characteristic (null) foliation is given by the integral curves of
the Hamiltonian $c |\xi|_g - |\eta|_{g_{H}}$, i.e. the orbits of the flow $G^{c s}_M \otimes G_H^{-s}$ on $T^* M \times T^*H$ restricted to the level set $\operatorname{Char}(Q_c) $. The next Lemma is similar to calculations in \cite{GU89} and \cite[Proposition 2.1]{TU92}:
\begin{lemma} \label{WFQc} $\psi(Q_c)$ of \eqref{FLP} is a Fourier integral operator in the class $ I^{-{\frac{1}{2}}}((M \times H) \times (M \times H), {{\mathcal I}_{\psi}^c}')$ with canonical relation \begin{multline*}
{\mathcal I}^c_{\psi} := \{(x, \xi, q, \eta; x', \xi', q', \eta') \in {\operatorname{Char}}(Q_c) \times {\operatorname{Char}}(Q_c) : \\
\exists s \in \mathrm{supp}\,(\hat{\psi}) \text{ such that } G^{c s}_M \times G^{-s}_{H}(x, \xi, q, \eta) = (x', \xi', q', \eta') \}. \end{multline*}
The symbol of $\psi(Q_c)$ is the transport of $(2 \pi)^{-\frac{1}{2}} \hat{\psi}(s) |{\mathrm d} s|^{{\frac{1}{2}}} \otimes |{\mathrm d} \mu_L|^{{\frac{1}{2}}}$ via the implied parametrization $(s,\zeta) \mapsto (\zeta, G_M^{cs} \times G_H^{-s}(\zeta))$, where $\mu_L$ is Liouville surface measure on $\operatorname{Char}(Q_c)$. \end{lemma}
Since we often use the term {\it Liouville measure} we give the general definition.
\begin{definition} \label{LIOUVILLEDEF} Let $Y \subset T^*M$ be a hypersurface defined by $\{f = 0\}$. By the Liouville measure on $Y$, we mean the Leray form $\frac{\Omega}{df}$ on $Y$, where $\Omega$ is the symplectic volume form of $T^*M$. \end{definition}
We now prove Lemma \ref{WFQc}. \begin{proof} Let $\zeta = (x, \xi, q, \eta) \in T^* (M \times H)$.
Since $Q_c$ is of real principal type, we may apply \cite[Proposition 2.1]{TU92} to obtain that $\psi(Q_c) \in I^{-{\frac{1}{2}}} ((M \times H) \times (M \times H), {{\mathcal I}_{\psi}^c}')$, where \begin{multline} \label{TU}
{\mathcal I}^c_{\psi} = \{(\zeta_1, \zeta_2) \in \dot{T}^* (M \times H) \times \dot{T}^* (M \times H): \sigma_{Q_c} (\zeta_1) = 0, \\
\exists s \in \mathrm{supp}\, \hat{\psi}, \
\exp s H_{\sigma_{Q_c}} (\zeta_1) = \zeta_2\}. \end{multline} The Hamiltonian flow $\exp s H_{\sigma_{Q_c}}$ on the characteristic variety is given by $$
G^{cs}_M \times G^{-s}_{H}: \operatorname{Char}(Q_c) \to \operatorname{Char}(Q_c). $$
By \cite[Lemma 2.6]{TU92}, the principal symbol of $\psi(Q_c)$ is $(2 \pi)^{-\frac{1}{2}} \hat{\psi}(s) |{\mathrm d} s|^{{\frac{1}{2}}} \otimes |{\mathrm d} \mu_L|^{{\frac{1}{2}}}$. \end{proof}
\begin{remark} Without the support condition, the wave front relation is an equivalence relation on points of $\operatorname{Char}(Q_c)$, namely $(x, \xi, q, \eta) \sim (x', \xi', q', \eta')$ if they lie on the same null bicharacteristic. \end{remark}
\subsection{Elliptic cutoff}
Since $\psi(Q_c)$ is not elliptic, we introduce a second smooth cutoff $\rho \in \mathcal{S}({\mathbb R}), $ with $\hat{\rho} \in C_0^{\infty}$ and define $$
\rho(P - \lambda) = \frac{1}{2\pi} \int_{{\mathbb R}} \hat{\rho}(t) e^{- it \lambda} e^{i t P} dt. $$ Thus, $\rho(P- \lambda) \psi(Q_c) : L^2(M \times H) \to L^2(M \times H)$ is the operator, \begin{equation} \label{RHOPSI} \rho(P- \lambda) \psi(Q_c) = \sum_{j,k}\rho(\lambda_j - \lambda) \psi(\mu_k - c \lambda_j) \varphi_{j,k} \otimes \varphi_{j,k}^*. \end{equation}
To understand its purpose, we note that if the cutoff $\rho$ were the indicator function of an interval $[-1, 1]$, it would restrict the $\lambda_j$ to $[\lambda -1, \lambda +1]$, while if $\psi$ were an indicator function, it would restrict the $\mu_k$ to $|\mu_k - c \lambda_j| \leq A$. Hence, the pair of cutoffs would restrict the joint spectrum to a rectangle. The smooth cutoffs $\psi, \rho$ should be thought of as smoothings of such indicator functions.
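To visualize the localization, one can tabulate the weights $\rho(\lambda_j - \lambda)\psi(\mu_k - c\lambda_j)$ on a toy joint spectrum. The Python sketch below is an illustration only: the spectra are stand-ins, and the Gaussian cutoffs do not satisfy the compact support condition on $\hat\rho, \hat\psi$ required above; they merely exhibit the soft rectangle.
\begin{verbatim}
# Sketch: the weights rho(lambda_j - lambda) * psi(mu_k - c*lambda_j)
# concentrate on a soft rectangle around (lambda, c*lambda).
import numpy as np

lam, c = 40.0, 0.5
lams = np.sqrt(np.arange(1.0, 3000.0))  # stand-in spectrum of sqrt(-Delta_M)
mus = np.arange(0.0, 60.0)              # stand-in spectrum of sqrt(-Delta_H)
rho = lambda x: np.exp(-x ** 2)         # Gaussian stand-ins for the cutoffs
psi = lambda x: np.exp(-x ** 2)

W = rho(lams[:, None] - lam) * psi(mus[None, :] - c * lams[:, None])
j, k = np.unravel_index(np.argmax(W), W.shape)
print(lams[j], mus[k])                  # close to (lambda, c*lambda) = (40, 20)
print(int((W > 0.1).sum()), W.size)     # few weights survive the cutoffs
\end{verbatim}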
By Fourier inversion, \eqref{RHOPSI} is given by $$
\rho(P- \lambda) \psi(Q_c) = \frac{1}{2\pi} \int_{{\mathbb R}} \hat{\rho}(t) e^{- it \lambda} e^{i t P} \psi(Q_c) \, dt $$ and the next step is to elucidate the integrand. For simplicity of notation we denote $\zeta = (\zeta_1, \zeta_2) \in T^*(M \times H)$. Since the canonical relation of $e^{itP}$ is the graph of the bicharacteristic flow of $\sigma_P$ on $T^*(M \times H)$, the composition theorem for Fourier integral operators gives,
\begin{lemma} \label{WFsPIc} $e^{it P} \psi(Q_c): L^2(M \times H) \to L^2({\mathbb R} \times M \times H)$ is a Fourier integral operator in the class $I^{-\frac{3}{4}}(({\mathbb R} \times M \times H) \times (M \times H), {\mathcal{C}^c_{\psi}}')$, with canonical relation \begin{multline*} \mathcal{C}^c_{\psi} := \{(t, \tau, G_M^{cs + t} \times G_H^{-s}(\zeta), \zeta) \in T^* {\mathbb R} \times \operatorname{Char}(Q_c) \times \operatorname{Char}(Q_c): \\
s \in \mathrm{supp}\,(\hat{\psi}), \ \tau + |\zeta_M|_g = 0\} \end{multline*} In the natural parametrization of $\mathcal{C}_\psi^c$ by $(s,t,\zeta) \in \mathrm{supp}\, \hat \psi \times {\mathbb R} \times \operatorname{Char}(Q_c)$ given by $$
(t, - |\zeta_M|_g, G_M^{cs + t} \times G_H^{-s}(\zeta), \zeta), $$ the symbol of $e^{itP} \psi(Q_c)$ is
$(2 \pi)^{-\frac{1}{2}} \hat{\psi}(s)|ds|^{{\frac{1}{2}}} \otimes |{\mathrm d} t|^{{\frac{1}{2}}} \otimes |{\mathrm d} \mu_L|^{{\frac{1}{2}}}$, where $\mu_L$ is Liouville surface measure on $\operatorname{Char}(Q_c).$ \end{lemma}
\begin{proof} We recall that if $\chi: \dot T^*M \to \dot T^*M$ is a homogeneous canonical transformation and $\Gamma_{\chi} \subset \dot T^*M \times \dot T^*M$ is its graph, and if $\Lambda \subset T^*M \times T^*M$ is any homogeneous Lagrangian submanifold with no elements of the form $(0, \lambda_2)$, then $\Gamma_{\chi} \circ \Lambda$ is a transversal composition with composed relation $\{(\chi(\lambda_1), \lambda_2): (\lambda_1, \lambda_2) \in \Lambda\}. $ The condition that $\lambda_1 \not= 0$ is so that $\chi(\lambda_1)$ is well-defined.
It follows that $e^{it P} \circ \psi(Q_c)$ is a transversal composition, and therefore its order is the sum of the order $\frac{-1}{4}$ of $e^{it P}$ \cite{DG75} and the order $-{\frac{1}{2}}$ of $\psi(Q_c)$ (Lemma \ref{WFQc}). \end{proof}
\section{Reduction to $H$}
The Schwartz kernel of $e^{it P} \psi(Q_c)$ lies in $\mathcal{D}'(({\mathbb R} \times M \times H) \times (M \times H))$. To study sums of squares of inner products, $ \left| \int_{H} \varphi_j \overline{\psi_k}dV_H \right|^2$, we need to restrict the Schwartz kernels to $({\mathbb R} \times H \times H) \times (H \times H)$. To this end, we introduce the restriction operator, $$
\gamma_H \otimes I: C(M \times H) \to C(H \times H). $$ For instance (in the notation of \eqref{def phi_j,k}), $$
(\gamma_H \otimes I) (\varphi_{j,k}) \in C(H \times H), \;\; (\gamma_H \otimes I) \varphi_{j,k} (y_1, y_2) =(\gamma_H \varphi_j) (y_1) \overline{\psi_k}(y_2). $$ We are interested in the operator with Schwartz kernel in $C((H \times H) \times (H \times H))$ given by, \begin{equation}
(\gamma_H \otimes I) (\varphi_{j,k}) \otimes [(\gamma_H \otimes I) (\varphi_{j,k}) ]^*. \end{equation} This is an operator from $L^2 (H \times H) \to L^2 (H\times H).$ We may construct this operator as the composition of the rank one projection $\varphi_{j,k} \otimes \varphi_{j,k}^* : L^2(M \times H) \to L^2(M \times H)$ with the restriction operator and its adjoint, $$(\gamma_H \otimes I)^* : C(H \times H) \to \mathcal{D}'(M \times H). $$ Note that the adjoint is an extension as a kind of delta-function and does not preserve continuous functions; see e.g.
\cite[Proposition 4.4.6]{D73} or \cite{TZ13} for background. The relevant composition, $(\gamma_H \otimes I) \circ (\varphi_{j,k} \otimes \varphi_{j,k}^*) \circ (\gamma_H \otimes I)^*$, is given by the map \begin{align*} C(H \times H) &\to C(H \times H), \\ K &\to \left( \int_H \int_H K(q_1, q_2) \overline{\varphi_{j,k}(q_1,q_2)} \, dV_H(q_1) \, dV_H(q_2) \right) \cdot (\gamma_H \otimes I) \varphi_{j,k}. \end{align*} Extending to the operator $e^{itP}\psi(Q_c)$, we define \[ (\gamma_H \otimes I) \circ e^{itP} \psi(Q_c) \circ (\gamma_H \otimes I)^* \\ := \sum_{j,k} e^{it\lambda_j} \psi(\mu_k - c \lambda_j) ((\gamma_H \otimes I) \varphi_{j,k} ) \otimes ((\gamma_H \otimes I) \varphi_{j,k})^*. \]
If $X \subset M$ is a submanifold, we refer to the composition $\gamma_{X} F \gamma_{X}^*$ of a Fourier integral operator as its reduction to $L^2(X)$. Such operators are studied in many articles; we refer to \cite{TZ13, Si18} for background. We will need Sipailo's Theorem 3.1 \cite{Si18}, stated below for convenience. In the following, $\pi_X: T_X^*M \to T^* X$ is the natural restriction (or projection) of covectors to $T^*X$.
\begin{lemma}\cite[Theorem 3.1]{Si18} \label{REDUCTIONLEM} Let $F \in I^m (M \times M, \Lambda)$ be a Fourier integral operator of order $m$ associated to the canonical relation $\Lambda' \subset \dot{T}^* M \times \dot{T^*} M$. If \begin{itemize} \item (i) \; $\Lambda \cap (T^*_X M \times T^*_X M)$ is a clean intersection, and \item (ii)\; $\Lambda \cap N^*(X \times X) = \emptyset $, \end{itemize} then $$
\gamma_X F \gamma_X^* \in I^{m^*}(X \times X, \Lambda_X) $$ where $\Lambda_X = (\pi_X \times \pi_X)( \Lambda \cap (T^*_X M \times T^*_X M))$, and where $$
m^* = m - \dim X + {\frac{1}{2}} \operatorname{codim} X + {\frac{1}{2}} \dim (\Lambda \cap (T^*_X M \times T^*_X M)). $$ \end{lemma}
\begin{remark} We recall that $X \cap Y$ is a {\it clean} intersection of two submanifolds of $Z$ if $X \cap Y$ is a submanifold of $Z$ and
$T_p(X\cap Y) = T_p X \cap T_p Y$ at all points $p \in X\cap Y$. Failure of clean intersection can happen in two ways: (i) $ X\cap Y$ fails to be a submanifold of $Z$, or (ii) it is a submanifold of $Z$ but $T_p (X\cap Y) \not= T_p X \cap T_p Y$. Usually, (i) is easy to check, but (ii) can be hard to check when (i) holds. \end{remark}
We also quote the result in the simplest case, when $\Lambda$ is the graph of a symplectic diffeomorphism $g$ of $\dot{T}^* M$. Here and henceforth, for any subset $V \subset T^* M$, $\dot{V} = V \backslash \{0\}$ is $V$ minus its intersection with the zero section. The graph of $g$ is denoted by ${\rm Graph}(g): = \{(g(\zeta), \zeta): \zeta \in \dot{T}^* M\}$.
\begin{corollary} \cite[Corollary 3.9]{Si18} \label{REDUCTIONCOR} Let $\Phi$ be a Fourier integral operator of order $m$ quantizing a canonical transformation $g$. Assume that $g$ satisfies the conditions: \begin{itemize} \item (i) \; the intersection $\dot{T}_X^* M \cap g (\dot{T}^*_X M) $ is clean.
\item (ii) \; $\dot{N}^* X \cap g(\dot{N}^*(X)) = \emptyset$.
\end{itemize}
Then, $(\pi_X \times \pi_X)({\rm Graph} (g) \cap (T^*_X M \times T^*_X M)) $ is a Lagrangian submanifold of $\dot{T}^*(X \times X)$ and $\gamma_X \Phi \gamma_X^*$ is a Fourier integral operator in the class $I^{m^*}(X \times X, (\pi_X \times \pi_X)({\rm Graph} (g) \cap (T^*_X M \times T^*_X M)))$, where $m^*$ is defined in Lemma \ref{REDUCTIONLEM}. \end{corollary}
\subsection{Generalization to $F(t) := e^{it P} \psi(Q_c)$} \label{FtSECT}
In fact, we need to extend Lemma \ref{REDUCTIONLEM} to the case where $F$ is replaced by $ F(t) := e^{it P} \psi(Q_c)$ as in Lemma \ref{WFsPIc}. We state the result in the generality of Lemma \ref{REDUCTIONLEM} where $F$ is replaced by $F(t) = e^{it P} F$, where (for each $t$) $e^{it P}: L^2(M) \to L^2(M)$ is the unitary group generated by a first order elliptic pseudo-differential operator $P$. Note that $F: \mathcal{D}'(M) \to \mathcal{D}'({\mathbb R} \times M)$, so the domain and range are not the same, and Lemma \ref{REDUCTIONLEM} does not apply as stated, although it applies for each fixed $t$.
As is well-known \cite{DG75}, $e^{it P} \in I^{-\frac{1}{4}}( {\mathbb R} \times M \times M, \widetilde{\rm Graph}(g^t))$ where $\widetilde{\rm Graph}(g^t) = \{(t, \tau, g^t(\zeta), \zeta): \tau + \sigma_P(\zeta) = 0\}$ is the space-time graph of the flow. Let $F \in I^m(M \times M, \Lambda)$ be a Fourier integral operator as in Lemma \ref{REDUCTIONLEM}, let $e^{it P}$ be as in the preceding paragraph and let $F(t) =e^{it P} F$. Assume: $$
(**)\;\; \; \widetilde{\rm Graph}(g^t) \circ \Lambda' \subset \dot{T}^* {\mathbb R} \times \dot{T}^* M \times \dot{T}^* M \;\; \text{ is a clean composition}. $$
Then by the composition theorem for Fourier integral operators, $F(t,x, y) \in I^{m - \frac{1}{4}} ({\mathbb R} \times M \times M, \widetilde \Lambda)$ is a Fourier integral operator associated to the canonical relation, $$\widetilde \Lambda : = \{(t, \tau, g^t(\zeta), \zeta'): \tau + p(\zeta) = 0, (\zeta, \zeta') \in \Lambda\} \subset \dot{T}^* {\mathbb R} \times \dot{T}^* M \times \dot{T^*} M. $$ We define the $t$-slice of $\widetilde \Lambda$ by, $$\widetilde{\Lambda}_t := \{(g^t(\zeta), \zeta'): (\zeta, \zeta') \in \Lambda\} \subset \dot{T}^* M \times \dot{T^*} M. $$ Since $g^t$ is a diffeomorphism, it is evident that if $\Lambda$ is a manifold, then so is $\widetilde{\Lambda}_t$ for every $t$. It follows that the space-time relation $\widetilde \Lambda$ is a manifold as well, and that the composition is always clean.
We continue to use the notation $\gamma_X F(t) \gamma_X^*$ for the restriction in the $M$-variables, and $\pi_X$ as the projection of a covector in $T^*M$ to $T^* X$, both with $t$ as a parameter.
\begin{lemma} \label{REDUCTIONLEMEXT} With the notation and assumptions of Lemma \ref{REDUCTIONLEM}, let $F(t) := e^{it P} F: C^\infty(M) \to C^\infty({\mathbb R} \times M)$. Let $X \subset M$ be a submanifold and assume:
\begin{enumerate} \item[(i)] For each $t$, $\widetilde \Lambda_t \cap \dot{T}^*_X M \times \dot{T}^*_X M$ is a clean intersection; \item[(ii)] For each $(\zeta, \zeta') \in \Lambda$, the curve $t \to g^t(\zeta) $ intersects $T^*_X M$ cleanly; \item[(iii)] For each $t$, $\widetilde \Lambda_t \cap \dot{N}^*(X \times X) = \emptyset $. \end{enumerate} Then,
$\gamma_{{\mathbb R} \times X} F(t)\gamma_{ X}^* \in I^{m^*}({\mathbb R} \times X \times X, \widetilde \Lambda_X) $ where \[
\widetilde \Lambda_X = (I \times \pi_X \times \pi_X)(\widetilde \Lambda \cap (\dot{T}^* {\mathbb R} \times \dot{T} ^*_X M \times \dot{T}^*_X M)), \] and where $$
m^* = \mathrm{ord} F(t) + \frac{1}{2} \operatorname{codim} X + {\frac{1}{2}} \dim \widetilde \Lambda \cap (T^*{\mathbb R} \times T^*_X M \times T^*_X M) - {\frac{1}{2}}(2 \dim X + 1).
$$ Here, $\mathrm{ord} F(t)$ denotes the order of $F(t) $ as a Fourier integral kernel in $I^*({\mathbb R} \times M \times M; \widetilde \Lambda)$. \end{lemma}
\begin{proof} (Sketch) Since the proof is almost the same as for Lemma \ref{REDUCTIONLEM} we only provide a brief sketch of the proof, emphasizing the new aspects.
We claim first that the conditions (i) - (iii) of the Lemma are equivalent to: \begin{enumerate} \item[(i)'] \; $\widetilde \Lambda \cap (\dot{T}^*{\mathbb R} \times \dot{T}^*_X M \times \dot{T}^*_X M)$ is a clean intersection, and \item[(ii)'] \; $\widetilde \Lambda \cap \dot{T}^* {\mathbb R} \times \dot{N}^*(X \times X) = \emptyset $. \end{enumerate} It is obvious that (iii) and (ii)' are equivalent, so we only show that (i)' is equivalent to (i)-(ii). In fact, it is clear that (i)' implies (i)-(ii), since $dt \not=0 $ and all $t$-slices of the intersection are submanifolds if (i)' is a clean intersection and the tangent space condition is satisfied. The non-trivial statement is the converse, that (i)-(ii) implies (i)'. To prove it, let $f_X: M \to {\mathbb R}^k$ be a local defining function for the codimension $k$ submanifold $X$, i.e. locally $X = \{f_X = 0\}$ and $df_X$ has full rank on $X$. For instance, one may use the normal variables $y$ of local Fermi normal coordinates. Then, $\widetilde \Lambda \cap T^*{\mathbb R} \times T^*_X M \times T^*_X M$ is the set of points in $\widetilde \Lambda$ where $\pi^*f_X \times \pi^* f_X = 0$ (here, we use the notation $\pi^* f_X$ for its pullback to $T^*M$), and the intersection in (i)' is a submanifold if $\pi^* f_X \times \pi^* f_X $ (the pullback to $T^*{\mathbb R} \times T^*M \times T^*M$) is non-singular on $\widetilde \Lambda$, that is, if $\pi^* f_X \times \pi^* f_X: \widetilde \Lambda \to {\mathbb R}^k \times {\mathbb R}^k$ has a surjective differential at each point $(t, \tau, g^t(\zeta), \zeta')$. Since $\pi^* f_X \times \pi^* f_X ((t, \tau, g^t(\zeta), \zeta')) = (f_X (\pi g^t(\zeta)), f_X(\zeta'))$,
we can first calculate $D (\pi^* f_X \times \pi^* f_X): T \widetilde \Lambda \to {\mathbb R}^k \times {\mathbb R}^k$ on tangent vectors to curves in $\zeta$ and $\zeta'$ for fixed $t$, and then on tangent vectors as $t$ varies. If $t$ is fixed, then the calculation is the same as for the $t$-slice, and by assumption (i) the derivative is already surjective. A fortiori, it is surjective if $t$ is allowed to vary.
The more difficult condition is that the tangent space to the intersection equals the intersection of the tangent spaces. As mentioned in the introduction, the tangent space of the intersection always contains the intersection of the tangent spaces but may possibly be larger. If we decompose the tangent space into the $\frac{\partial}{\partial t}$ direction and the tangent vectors to the slices, we find that the only condition not contained in (i) is that the tangent vectors to an orbit $t \to g^t(\zeta) $ may be tangent to the intersection without lying in the intersection of the tangent spaces. Since this vector lies in only one component, the condition that there are no such additional tangent vectors is precisely that the curve $g^t(\zeta) $ intersects $T^*_X M$ cleanly.
Once it is proved that the composition is clean, the order can be calculated just using the standard calculus of Fourier integral operators under clean composition \cite{HoIV, DG75, D73}. As in \cite[(1.20)]{DG75} or \cite[Example, page 111]{D73}, restriction $\gamma_X$ to a submanifold is a Fourier integral operator of order $\frac{1}{4} \operatorname{codim} X$ (after cutting away normal directions, which are irrelevant to our application). The adjoint $\gamma_X^*$ has the same order.
In our application, the domain and range are different, creating an asymmetry in the order calculation. The domain (incoming variables) is $M$ and the range (outgoing variables) is ${\mathbb R} \times M$. The left
restriction $\gamma_{{\mathbb R} \times X} $ acts on the outgoing variables ${\mathbb R} \times M \to {\mathbb R} \times X$ and is of order $\frac{1}{4} (\dim M - \dim X)$. The right restriction $\gamma_X^*$ acts on the incoming variables; hence the order of $\gamma_X^*$ is also $\frac{1}{4} (\dim M - \dim X)$.
The clean composition $\gamma_{{\mathbb R} \times X} F(t) \gamma_X^*$ has order \[
\mathrm{ord} \gamma_{{\mathbb R} \times X} + \mathrm{ord} F(t) + \mathrm{ord} (\gamma_X^*) + \frac{e}{2} = \mathrm{ord} F(t) + \frac12 \operatorname{codim} X + \frac{e}{2} \]
where $e$ is the excess (see \eqref{EXCESS}). This is the formula for the composition of two operators, but here we extend
it to three operators by successively computing excesses and adding the two excesses. To complete the proof, we need to show
that
\begin{equation} \label{e/2} \frac{e}{2} ={\frac{1}{2}} \dim \widetilde \Lambda \cap (T^*{\mathbb R} \times T^*_X M \times T^*_X M) - {\frac{1}{2}}(\dim X + \dim X + \dim {\mathbb R}) .
\end{equation}
The excess of a general clean composition $A_1 \circ A_2$ of Fourier integral operators, with respective canonical relations $C_1 \subset \dot T^* X \times \dot T^* Y$, $C_2 \subset \dot T^*Y \times \dot T^*Z$, is defined as follows (cf. \cite[Page 18]{HoIV}). The composition is defined
in terms of the clean intersection $\hat{C} : = (C_1 \times C_2) \cap (T^*X \times {\rm Diag} (T^* Y \times T^* Y) \times T^* Z)$. Denote by $C$ the range of the map $\pi_{T^* X \times T^* Z} : \hat{C} \to T^*X \times T^* Z, \; (x, \xi; (y, \eta, y, \eta); z, \zeta) \mapsto (x, \xi, z, \zeta) $. Cleanliness implies that the map $\hat{C} \to C $ has constant rank. The excess is the dimension of the fiber $\hat{C}_{\gamma}$ of this map over a point $\gamma \in C$.
The composition $\gamma_{{\mathbb R} \times X} F(t)\gamma_X^*$ involves three canonical relations, but in the case of restriction operators there is a convenient way to summarize the above two-sided composition $\mathrm{WF}\,'(\gamma_{{\mathbb R} \times X} ) \circ \widetilde \Lambda \circ \mathrm{WF}\,'(\gamma_X^*) $ (where $\mathrm{WF}\,'(A)$ is the canonical relation of $A$). Namely, for the right composition, the submanifold $\hat{C}_R$ is $\widetilde{\Lambda} \cap T^* {\mathbb R} \times T^* M \times T^*_X M $. It projects to $\pi_{T^* {\mathbb R} \times T^* M \times T^*X} (\hat{C}_R)$. Similarly, the left composition produces the submanifold $\hat{C}_L:= \widetilde{\Lambda} \cap T^*{\mathbb R} \times T^*_X M \times T^*M$. The double composition produces the intersection $\hat{C} = \hat{C}_L \cap \hat{C}_R = \widetilde \Lambda \cap (\dot{T}^*{\mathbb R} \times \dot{T}^*_X M \times \dot{T}^*_X M)$ and the projection map $$
\Pi_{T^*{\mathbb R} \times T^*X \times T^*X}: \widetilde \Lambda \cap (\dot{T}^*{\mathbb R} \times \dot{T}^*_X M \times \dot{T}^*_X M) \to C \subset T^* {\mathbb R} \times T^*X \times T^*X $$ projects the $T^*_X M $ components to $T^*X$.
The combined excess of the double composition is the dimension of the fiber of this map over its image. We know that
the map has constant rank; since $C$ is a Lagrangian submanifold, $\dim C = 2 \dim X +1$, and therefore the dimension of the fiber is
$$\dim \widetilde \Lambda \cap (\dot{T}^*{\mathbb R} \times \dot{T}^*_X M \times \dot{T}^*_X M) - (2 \dim X + 1),$$
agreeing with \eqref{e/2}. \end{proof}
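As a consistency check on the order formula (a two-line verification, not needed in the sequel): if the intersection in (i)' is transverse, then, since $\dim \widetilde \Lambda = 2 \dim M + 1$ (it is parametrized by ${\mathbb R} \times \Lambda$) and $\dot{T}^*{\mathbb R} \times \dot{T}^*_X M \times \dot{T}^*_X M$ has codimension $2 \operatorname{codim} X$, \[
\dim \widetilde \Lambda \cap (T^*{\mathbb R} \times T^*_X M \times T^*_X M) = 2 \dim M + 1 - 2 \operatorname{codim} X = 2 \dim X + 1, \] so the last two terms in $m^*$ cancel and $m^* = \mathrm{ord} F(t) + \frac{1}{2} \operatorname{codim} X$, which is exactly $\mathrm{ord} F(t)$ plus the orders of the two restrictions.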
\subsection{Geometry of submanifolds, transversality to the geodesic flow and cleanliness} \label{HSECT}
We further review the geometry of unclean intersections in restriction problems from \cite{TZ13}. In that article, $H$ was assumed to be a hypersurface, whereas here $H \subset M$ can be any submanifold. We briefly generalize the statements accordingly. In the following, we assume that $H$ is locally defined in an open set $U$ by $\{f_H = 0\}$, where $f_H : U \to {\mathbb R}^k$ and $df_H$ has full rank on $H$. Then $(f_H, d f_H) : T M \to T {\mathbb R}^k$ is a local defining function of $T H$.
A natural choice is to use Fermi-normal coordinates along $H$, the coordinates defined by $\exp^{\perp}: N H \to M $. We let
$(s_1, \dots, s_d)$ be a choice of coordinates on $H$ and let $f_H = (y_1, \dots, y_{n-d})$ be normal coordinates. We also
let $(\sigma_1, \dots, \sigma_d, \eta_1, \dots, \eta_{n-d})$ be the symplectically dual coordinates on $T^*M$. The following
Lemma explains why geodesics tangent to $H$ may cause a lack of cleanliness.
\begin{lemma} \label{FAILLEM} Let $H \subset M$ be a submanifold. Then, $S^* H$ is the set of points of $S^*_H M$ where $S^*_H M$
fails to be transverse to the geodesic flow $G^t$, i.e. where the Hamilton vector field $H_{p_M}$ of $p_M = |\xi|_M$ is tangent to $S^*_H M$.
\end{lemma} This is proved in \cite{TZ13} for hypersurfaces.
\begin{proof} The generator $H_{p_M}$ of the geodesic flow of $M$ is the vector field on $S^*M$ obtained by horizontally lifting a covector $(q, \eta) \in S^*M $, $\eta \mapsto \eta^h$, to $T_{(q, \eta)} S^*M$ with respect to the Riemannian connection on $S^* M$; here, we freely identify covectors and vectors by the metric. Lack of transversality between $G^t$ and $S^*_H M$ occurs when $\eta^h \in T_{(q, \eta)} (S^*_H M)$. The latter space is the kernel of $d f_H$. Since $f_H$ is a pullback to $S^*M$, $d f_H (\eta^h) = d f_H (\eta)= 0 $ if and only if $\eta \in T H$.
When $H$ is totally geodesic, the orbits of $G^t$ starting with initial data $(s, \xi) \in S^*H$ remain in $S^*H$, proving the last statement. \end{proof}
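In the Fermi coordinates of this section, a minimal model computation illustrates the lemma; as an illustrative assumption, take the flat metric, so that \[
p(s, y, \sigma, \eta) = \sqrt{|\sigma|^2 + |\eta|^2}, \qquad H_{p} = \sum_{i=1}^{d} \frac{\sigma_i}{p} \frac{\partial}{\partial s_i} + \sum_{\alpha=1}^{n-d} \frac{\eta_\alpha}{p} \frac{\partial}{\partial y_\alpha}. \] Then $df_H(H_p) = dy(H_p) = \frac{\eta}{p}$, so $H_p$ is tangent to $S^*_H M = \{y = 0\}$ exactly when $\eta = 0$, i.e. on $S^*H$.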
We now consider examples of unclean bi-angle sets $\mathcal{G}_c$, resp. $\mathcal{G}_c^0$, in the sense of Definition \ref{CLEAN} of Section \ref{CLEANJF}, which arise when $c < 1$ and the $M$-geodesics of the $(c, s, t)$-bi-angles intersect $H$ transversally, i.e. examples where the solution set of $G_H^{ - s} \circ \pi_H \circ G_M^{cs + t} (q, \xi) = \pi_H (q, \xi) $ is unclean. In these examples $t \not=0$.
Suppose that $\gamma_0$ is a closed geodesic of $S^2$, which we envision as a meridian through the poles. Let $SO(2)$ be the one-parameter subgroup of rotations preserving $\gamma_0$. Let $p \in \gamma_0$. Then for any $\xi \in S^*_p S^2$, $G_{S^2} ^{\pi} (p, \xi) \in S^*_{\gamma_0} S^2$ and $\exp_p (\pi \xi) = - p$.
We then define $H$ to be the bumped geodesic obtained by adding a small `bump' on some proper subinterval of $\gamma_0$. For instance, we deform $\gamma_0$ by a nearly rectangular bump, centered at the point where $\gamma_0$ crosses the equator, which is $\epsilon$ in length along $\gamma_0$ and comes away from the geodesic $\gamma_0$ by $\epsilon^2$. Then we smooth it out near the corners.
We then consider $c$-bi-angles, i.e. solutions of $G_H^{ - s} \circ \pi_H \circ G_M^{cs + t} (q, \xi) = \pi_H (q, \xi) $ for some $(s,t)$ and some $(q, \xi)$ with $q \in H$.
First, assume that $q$ is not on the bump, i.e. $q \in \gamma_0$, and let $\xi \in S^c_q M$. If $ q$ is sufficiently far from the equator and $\epsilon$ is small enough, then the $M$-geodesic $\gamma_{q, \xi}(\sigma) = G^{\sigma} (q, \xi)$ does not intersect the bump. It will produce a $c$-bi-angle in which the geodesic $\gamma_{q, \xi}(\sigma)$ hits $H$ at $\sigma = \pi$.
Let $s$ be the arc-length on $H$ between the two intersection points and define $t$ by $\pi = cs + t$.
We now move the initial data $(q, \xi)$ under $g_{\theta} \in SO(2)$. Let $d(g_{\theta} q)$ be the distance from $g_{\theta} q \in \gamma_0$ to the equator, where the bump lies.
As $d(g_{\theta} q) \to 0$, the geodesic $\gamma_{g_{\theta}(q, \xi)}$ first intersects the bump when $\theta= \theta_0$. The angle of intersection
of $\gamma_{g_{\theta}(q, \xi)}$ and $H$ ceases to be constant at $\theta_0$ and then depends on $\theta$. In particular, the angle ceases to
be the one corresponding to $c$. Thus, the $c$-bi-angle set has a boundary and therefore the solution set is not even a manifold. For $\theta \geq \theta_0$, a $c$-bi-angle with footpoint on $H$ no longer lies in the one-parameter family obtained from $g_{\theta} q$ with $q \in \gamma_0$.
A related example is to put two bumps into $\gamma_0$. An extreme case is that the bumps touch at a point $q_0$ where they are tangent to $\gamma_0$ to high order. If $c, \epsilon$ are chosen so that $\gamma_{q_0, \xi}(t)$ does not intersect the bumps, then one can find $(s, t)$ to have a $c$-bi-angle. The bi-angle cannot be deformed preserving $c$, i.e. it is an isolated bi-angle.
\subsection{Microlocal cutoffs}
The hypotheses of Lemma \ref{REDUCTIONLEM} and Lemma \ref{REDUCTIONLEMEXT} will not be satisfied in all the cases for which we wish to prove Theorem \ref{main 3} and Theorem \ref{main 4}. As discussed in Section \ref{SHARPSECT}, non-clean intersections arise due to tangential intersections of geodesics with $H$. This problem is discussed at length in \cite{TZ13}, to which we refer for much of the background. As in \cite{TZ13}, we introduce some cutoff operators supported away from glancing and conormal directions to $H$. For fixed $\epsilon >0$, let $\chi^{(tan)}_{\epsilon}(x, D) = Op(\chi_{\epsilon}^{(tan)}) \in \Psi^0(M)$, with homogeneous symbol $\chi^{(tan)}_{\epsilon}(x,\xi)$ supported in an $\epsilon$-aperture conic neighbourhood of $T^*H \subset T^*M$ with $\chi^{(tan)}_{\epsilon} \equiv 1$ in an $\frac{\epsilon}{2}$-aperture subcone. The second cutoff operator $\chi^{(n)}_{\epsilon}(x,D) = Op(\chi_{\epsilon}^{(n)}) \in Op(S^0_{cl}(T^*M))$ has its homogeneous symbol $\chi^{(n)}_{\epsilon}(x,\xi)$ supported in an $\epsilon$-aperture conic neighbourhood of $N^*H$ with $\chi^{(n)}_{\epsilon} \equiv 1$ in an $\frac{\epsilon}{2}$-aperture subcone. Both $\chi_{\epsilon}^{(tan)}$ and $\chi_{\epsilon}^{(n)}$ have spatial support in the tube $\mathcal{T}_{\epsilon}(H)$, the tube of radius $\epsilon $ around $H$ (see \cite[(5.1) and (5.2)]{TZ13}). To simplify notation, define the total cutoff operator \begin{equation} \label{CUTOFF} \chi_{\epsilon}(x,D) := \chi^{(tan)}_{\epsilon}(x,D) + \chi^{(n)}_{\epsilon}(x,D). \end{equation} We put, $$B_{\epsilon}(x, D) = I - \chi_{\epsilon} (x, D).$$
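In the Fermi coordinates $(s, y, \sigma, \eta)$ of Section \ref{HSECT}, one admissible choice of the supports is the following sketch (the specific inequalities are our illustration, not the normalization of \cite{TZ13}): \[
\mathrm{supp}\, \chi^{(tan)}_{\epsilon} \subset \{|y| < \epsilon, \; |\eta| \leq \epsilon |(\sigma, \eta)|\}, \qquad \mathrm{supp}\, \chi^{(n)}_{\epsilon} \subset \{|y| < \epsilon, \; |\sigma| \leq \epsilon |(\sigma, \eta)|\}, \] so that on $\mathrm{supp}\, B_{\epsilon}$ over the tube, covectors make an angle with $H$ bounded away from both $0$ and $\frac{\pi}{2}$.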
We use these cutoff operators to ensure that the relevant cleanliness conditions are satisfied. However, our ultimate goal is to prove singularity results for the traces \eqref{SpsiDEF}, which involve diagonal pullbacks and pushforwards that will erase some of the uncleanliness problems and make it possible to remove the cutoffs. Moreover, the cutoff is unnecessary when $H$ is totally geodesic.
\subsection{Application to fuzzy ladder propagators}
We now apply Lemma \ref{REDUCTIONLEMEXT} to the operator $e^{it P} \psi(Q_c)$ of Lemma \ref{WFsPIc},
where $M$ in the lemma is replaced with $ M \times H$, and where $X = H \times H$, so $\dim X = 2 \dim H, {\frac{1}{2}} \operatorname{codim} X = {\frac{1}{2}} \operatorname{codim} H$. We use the following notation: $\zeta = (\zeta_M, \zeta_H) \in T^*_H M \times T^*H$ and $\pi_H(\zeta) = (\pi_H(\zeta_M), \zeta_H)$.
Since tangential intersections of geodesics with $H$ and conormal vectors to $H$ a priori cause problems, we will apply
Lemma \ref{REDUCTIONLEMEXT} to the reduction of the cutoff operator,
\begin{equation} \label{CUTOFFUtPi} (B_{\epsilon}(x,D) \otimes I) e^{it P} \psi(Q_c): L^2(M \times H) \to L^2({\mathbb R} \times M \times H). \end{equation} The tensor product notation means, as usual, that the cutoff is applied only in the $M$ variables (we omit tensor product with $I$ for the time variables). The cutoff has the effect of cutting down the canonical relation $\mathcal{C}^c_{\psi}$ of $e^{it P} \psi(Q_c)$ in Lemma \ref{WFsPIc} to,
\begin{equation} \label{CUTOFFCR} \mathcal{C}^c_{\psi, \epsilon} := \{(t, \tau, \ G_M^{cs + t} \times G_H^{-s}(\zeta), \zeta) \in \mathcal{C}^c_{\psi} : (1 - \chi_{\epsilon}) (G_M^{cs + t}(\zeta_M)) \not= 0 \}. \end{equation} Here, we use that if $F$ is any Fourier integral operator with canonical relation $C_F = \{(x, \xi, y, \eta)\} \subset T^* X \times T^*Y$ and symbol $\sigma_F$, and if $a(x, D)$ is a pseudo-differential operator, then the symbol of the left composition $a(x, D) F$ at $(x, \xi, y, \eta)$ is $a(x, \xi) \sigma_F(x, \xi, y, \eta). $
\begin{lemma} \label{LRLEMLEM} Let $\mathcal{C}_{\psi, \epsilon}^c$ be the canonical relation of \eqref{CUTOFFCR}. Then, for $c < 1$, the intersection \begin{equation}\label{INTER}
\mathcal{C}_{\psi, \epsilon}^c \cap (T^*{\mathbb R} \times T^*_{H } M \times T^*H \times T^*_H M \times T^*H ) \end{equation} is always clean. \end{lemma}
\begin{proof}
Denote the codimension of $H$ by $\operatorname{codim} H = k$.
By Lemma \ref{REDUCTIONLEMEXT}, the cleanliness of \eqref{INTER} holds as long as it holds for (i) fixed $t$-slices, and (ii) for $t$-curves with fixed $\zeta$.
In the case of fixed $t$ slices, we claim that the intersection is clean if and only if \begin{equation} \label{GCAP}
G^{t + c s} _M(S^*_H M) \cap S^*_H M \end{equation} is a clean intersection for all $s \in \mathrm{supp}\, \hat \psi$ and $\zeta_M$ in the support of the above cutoff. Indeed, the intersection \eqref{INTER} at time $t$ is parametrized by the points $(\zeta, s) \in S^*_H M \times \mathrm{supp}\, \hat \psi$
such that $$
G_M^{cs + t} \times G_H^{-s} (\zeta) \in S ^*_H M \times B^*H. $$
Since cleanliness in the $ \zeta_H$ component is automatic,
the cleanliness of this intersection is equivalent to cleanliness of the intersection \eqref{GCAP}. By Lemma \ref{FAILLEM}, the intersection is necessarily clean
unless there exists $\zeta_M \in S^*_H M$ such that $G^{t + cs}_M(\zeta_M) \in S^* H$. However, in this case $(1- \chi_{\epsilon}) (G^{t + cs}_M(\zeta_M)) = 0$ and the point is not in the
canonical relation.
In addition, we need to check that the
orbits $t \to G_M^{cs + t}(\zeta)$ intersect $S_H^c M$ cleanly. But this case is again covered by Lemma \ref{FAILLEM} for
the same reasons as above. \end{proof}
\begin{proposition} \label{LRLEMPROP} Let \begin{multline*} \Gamma^c_{\psi, \epsilon} := (\pi_{{\mathbb R} \times H \times H} \times \pi_{H \times H}) \mathcal{C}_{\psi, \epsilon}^c \cap (T^*{\mathbb R} \times T^*_{H } M \times T^*H \times T^*_H M \times T^*H ) \\
= \{(t, \tau, \pi_{H \times H} \zeta, \pi_{H \times H} (G^{c s +t}_M \times G^{-s }_{H}) (\zeta)) : |\zeta_M|_g + \tau = 0,\\ \zeta \in \operatorname{Char} Q_c \cap T^*_{H \times H} (M \times H), \ G^{c s +t}_M(\zeta_M) \in T^*_H M \\
(1- \chi_{\epsilon}) (G^{c s +t}_M(\zeta_M)) \not=0, \ s \in \mathrm{supp}\, \hat \psi \} \\ \subset T^*{\mathbb R} \times (T^*H \times T^*H \times T^*H \times T^*H). \end{multline*} For $0<c< 1$, $\Gamma^c_{\psi,\epsilon}$ is a Lagrangian submanifold and the `reduced' Fourier integral operator \begin{equation} \label{LRLEM}
\gamma_{{\mathbb R} \times H \times H} \circ (B_{\epsilon}(x,D) \otimes I) e^{it P} \psi(Q_c) \circ \gamma_{H \times H}^* \end{equation} belongs to the class $$
I^{\rho(m, d)} ({\mathbb R} \times (H \times H) \times (H\times H), \Gamma^c_{\psi,\epsilon}), $$ with $$\rho(m, d) = \mathrm{ord} e^{it P} \psi(Q_c) + {\frac{1}{2}} (n-d) + 2 d +{\frac{1}{2}} -{\frac{1}{2}} (4 d+ 1) = \mathrm{ord} e^{it P} \psi(Q_c) + {\frac{1}{2}} (n-d) . $$ \end{proposition}
The principal symbol of this kind of composition is calculated in \cite{TZ13} using symbol calculus and in \cite{Si18} using oscillatory integrals. We postpone the calculation until the end, since it involves two different restrictions and a pushforward, which can be done in one step rather than in three steps. The ultimate symbol is obtained by a sequence of canonical pushforward and pullback operations.
\begin{proof}
By Lemma \ref{LRLEMLEM}, \eqref{INTER} is a clean intersection and it follows from Lemma \ref{REDUCTIONLEMEXT} that $ \Gamma^c_{\psi, \epsilon} $ is a homogeneous Lagrangian submanifold of $T^* {\mathbb R} \times T^*H \times T^*H \times T^*H \times T^*H$ and that \eqref{LRLEM} is a Fourier integral operator with canonical relation $\Gamma_{\psi, \epsilon}^c.$
To complete the proof, we compute the order of \eqref{LRLEM} when $\dim M = n$ and $\dim H = d$. We have, $ \dim ({\mathbb R} \times H \times H ) = 2 d + 1$ and ${\frac{1}{2}} \operatorname{codim} ({\mathbb R} \times H \times H \subset {\mathbb R} \times M \times H) = {\frac{1}{2}} \operatorname{codim} H = {\frac{1}{2}} (n-d)$. The main problem is to calculate the dimension, \begin{equation}
\label{DIM} D^c(n,d) := \dim (\mathcal{C}_{\psi, \epsilon}^c \cap (T^*{\mathbb R} \times T^*_H M \times T^*H \times T^*_H M \times T^*H)), \end{equation} and how it depends on $c$ and on whether or not $H$ is totally geodesic. Note that $\widetilde \Lambda = \mathcal{C}_{\psi, \epsilon}^c$ in the notation of Lemma \ref{REDUCTIONLEMEXT}, where \begin{multline*} \mathcal{C}^c_{\psi, \epsilon} := \{(t, \tau, G_M^{cs + t} \times G_H^{-s}(\zeta), \zeta) \in T^* {\mathbb R} \times \operatorname{Char}(Q_c) \times \operatorname{Char}(Q_c): \\ s \in
\mathrm{supp}\,(\hat{\psi}), \ \tau + |\zeta_1|_g = 0\}. \end{multline*}
\begin{lemma} \label{DIMLEM} If $c < 1$ and $H$ is any submanifold of dimension $d$, we have, $$ {\frac{1}{2}} D^c(n, d) = 2 d + {\frac{1}{2}}. $$ \end{lemma}
\begin{proof} The equation for $\mathcal{C}_{\psi, \epsilon}^c$ involves $2 + 2n + 2d$ parameters $(t, s, \zeta_M, \zeta_H) \in {\mathbb R} \times {\mathbb R} \times \dot T^* M \times \dot T^*H$, and $1 + 2 (n-d)$ constraints. The first group of $n-d$ constraints is that $\zeta_M \in \dot T_H^* M$, so we may regard the parameters as $(t, s, \zeta_M, \zeta_H) \in {\mathbb R} \times {\mathbb R} \times \dot T_H^* M \times \dot T^*H$, and then we have
$1 + (n-d)$ further constraints, \begin{itemize}
\item $\sigma_{Q_c} (\zeta) = 0$ (which implies $G_M^{cs +t} \times G_H^{-s}(\zeta) \in {\rm Char}(Q_c)$);
\item $G_M^{t + cs} (\zeta_M) \in T^*_H M$. If $f_H:U \to {\mathbb R}^{n-d} $ is a local defining function of $H$ in $U \subset M$, then we may write the $(n-d)$ constraints as $\pi^* f_H (G_M^{t + cs} (\zeta_M)) = 0$ (see Section \ref{HSECT} and Lemma \ref{FAILLEM}.)
\end{itemize}
The first return time from $S^*_H M$ to itself is a smooth function on the support of $1 - \chi_{\epsilon}$ (see \cite[Section 2.3]{TZ13}). Hence, the set of solutions $(\sigma, \zeta_M)$ of $\pi^* f_H(G_M^{\sigma}(\zeta_M)) = 0$ is a smooth submanifold of ${\mathbb R} \times T^*_H M$ of codimension $n -d$.
Hence, the dimension \eqref{DIM} is given by $$D^c(n,d) = 2 + d + n + 2d - (n-d) - 1 = 4 d + 1 . $$
This completes the proof of Lemma \ref{DIMLEM}. \end{proof}
We now complete the proof of Proposition \ref{LRLEMPROP}.
We now apply Lemma \ref{WFsPIc}, Lemma \ref{REDUCTIONLEMEXT} and Lemma \ref{DIMLEM}, with
$ X = H \times H \subset M \times H$. The intersection has dimension $4 d + 1$, while $\dim ({\mathbb R} \times (H \times H) \times (H \times H)) = 4 \dim H + 1$ as well, so in the order formula we subtract ${\frac{1}{2}} (4 d +1)$.
By Lemma \ref{REDUCTIONLEMEXT}, $$\mathrm{ord} e^{it P} \psi(Q_c) + {\frac{1}{2}} (n-d) + 2 d +{\frac{1}{2}} -{\frac{1}{2}} (4 d+ 1) = \mathrm{ord} e^{it P} \psi(Q_c) + {\frac{1}{2}} (n-d) . $$ \end{proof}
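For orientation, here is the order bookkeeping in the lowest-dimensional instance (a worked example with an illustrative choice of dimensions): for a closed curve $H$ in a surface $M$, i.e. $n = 2, d = 1$, \[
D^c(2, 1) = 4 \cdot 1 + 1 = 5, \qquad \rho(m, 1) = m + \tfrac{1}{2}(2 - 1) = m + \tfrac{1}{2}, \quad m = \mathrm{ord}\, e^{itP} \psi(Q_c); \] the contribution $\frac{1}{2} D^c(n,d) = 2d + \frac{1}{2}$ always cancels against the subtracted $\frac{1}{2}(4d + 1)$, as in the general formula.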
\section{Asymptotics of $ N^{c}_{ \psi, \rho, H }(\lambda)$ : Proof of Theorem \ref{main 4} and Theorem \ref{main 5}} \label{ASYMPTOTICSECT}
\subsection{Diagonal pullback and pushforward to ${\mathbb R}$} The next (and final) step is to compose with the diagonal pullback and to integrate over $H$. By the diagonal embedding $\Delta_H \times \Delta_H$ we mean the partial diagonal embedding $$
(\Delta_H \times \Delta_H) : (x,y)\in H \times H \mapsto (x,x, y,y) \in (H \times H) \times (H \times H), $$ and we let $(\Delta_H \times \Delta_H)^*$ be the corresponding pullback operator. For instance, when applied to the rank one orthogonal projections onto the joint eigenfunctions, the Schwartz kernels satisfy \begin{multline*} (\Delta_H \times \Delta_H)^* (\gamma_H \otimes I) (\varphi_{j,k}) \otimes [(\gamma_H \otimes I) (\varphi_{j,k}) ]^*(x, y) \\ = (\gamma_H \varphi_j(x) \psi_k(x)) \otimes \overline{ (\gamma_H \varphi_j(y)\psi_k(y)) } \in C(H \times H). \end{multline*}
We then compose with the pushforward under the projection, $\Pi: {\mathbb R} \times H \times H \to {\mathbb R}$. The pushforward of the eigenfunctions is given by, \[
\Pi_* \left((\Delta_H \times \Delta_H)^* (\gamma_H \otimes I) (\varphi_{j,k}) \otimes [(\gamma_H \otimes I) (\varphi_{j,k}) ]^* \right)
= \left| \int_H \gamma_H \varphi_j(x) \psi_k(x) \, dV_H(x) \right|^2. \]
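Explicitly, the pushforward $\Pi_*$ integrates out the two $H$ variables, and the double integral factors as a square: \[
\int_H \int_H \gamma_H \varphi_j(x) \psi_k(x) \, \overline{\gamma_H \varphi_j(y) \psi_k(y)} \, dV_H(x)\, dV_H(y) = \left| \int_H \gamma_H \varphi_j \psi_k \, dV_H \right|^2. \]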
We then apply the pushforward-pullback operation to the Fourier integral operator \eqref{LRLEM}. To keep track of which components are being paired by the diagonal embedding, we note that \eqref{LRLEM} is $$ V_H(t, \psi) = \sum_{j,k} e^{it \lambda_j } \psi (\mu_k - c \lambda_j) ((\gamma_H \otimes I) \varphi_{j,k} \otimes [(\gamma_H \otimes I )\varphi_{j,k}]^*) $$ and that its pushforward-pullback is precisely the trace $S^c(t, \psi)$ defined in \eqref{SpsiDEF}:
\begin{align} \label{St} S^c(t, \psi) &= \Pi_*(\Delta_H \times \Delta_H)^* (\gamma_H \otimes I ) e^{i t P} \psi(Q_c) (\gamma_H \otimes I)^* \\ \nonumber
&= \sum_{j,k} e^{it \lambda_j} \psi(\mu_k-c \lambda_j) \left| \int_H \varphi_{j,k}(x,x) dV_H (x) \right|^2. \end{align} Of course, \begin{align} \label{FTFORM}
{\mathcal F}_{\lambda \to t} d N^{c} _{\psi, H }(t) &= \sum_{j,k} e^{it\lambda_j} \psi(\mu_k -c \lambda_j) \left| \int_H \varphi_j \overline{\psi_k} \, dV_H \right|^2\\ \nonumber &= S^c(t, \psi). \end{align}
\begin{definition} \label{SOJOURN} Recall $\mathcal{G}_c$ from \eqref{gcalc}. Let \begin{align*} \mathcal{S}_{c, \psi} &:= \{t \in {\mathbb R}: \exists s \in \mathrm{supp}\, \hat\psi, \gamma \in \mathcal{G}_c: \gamma \; \text{ is a $(c, s, t)$-bi-angle}\}, \\ \mathcal{S}_c &:= \{t \in {\mathbb R} : \exists s \in {\mathbb R}, \gamma \in \mathcal{G}_c: \gamma \text{ is a $(c, s, t)$-bi-angle}\}. \end{align*} \end{definition}
$S^c(t, \psi) $ is the distribution trace of the restriction of the Schwartz kernel of $e^{it P} \psi(Q_c)$, and \eqref{cpsirho} is its integral in $dt$ against $\hat{\rho} (t) e^{it \lambda}$. Thus, in \eqref{St}, we have expressed the smoothed Kuznecov sums as compositions of Fourier integral operators, specifically as the composition of the diagonal pullback to a suitable diagonal in $H \times H \times H \times H$ with the pushforward over the diagonal. These operations are also Fourier integral operators, and as reviewed in Section \ref{FIOSECT}, we can use the calculus of Lagrangian distributions under pullback and pushforward to determine the singularities of $S^c(t, \psi)$ and then the asymptotics as $\lambda \to \infty$ of \eqref{cpsirho} (see also \cite[Proposition 1.2]{DG75} and \cite{GU89} and many subsequent articles).
The next step is to prove that \eqref{St} is a Lagrangian distribution and to use Proposition \ref{LRLEMPROP} to calculate the singular set $\mathcal{S}_c$ of Definition \ref{SOJOURN}, and the order and principal symbol of \eqref{St} at the singular points.
We will need to recall the definition of a homogeneous Lagrangian distribution. By definition, $I^{\frac{\nu}{2} - \frac{1}{4}}(\Lambda_T)$, with $\Lambda_T = \dot T^*_T {\mathbb R}$, consists of scalar multiples of the distribution \begin{equation} \label{I*} \int_0^{\infty} s^{\frac{\nu-1}{2}} e^{- i s (t - T)} ds \end{equation} plus similar distributions of lower order and perhaps a smooth function.
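For later reference, \eqref{I*} can be evaluated by the standard regularization $e^{-\delta s}$, $\delta \downarrow 0$ (a routine computation, valid for $\nu > -1$): \[
\int_0^{\infty} s^{\frac{\nu - 1}{2}} e^{-is(t - T)} ds = \Gamma\left(\tfrac{\nu + 1}{2}\right) \left(i(t - T) + 0\right)^{-\frac{\nu + 1}{2}}, \] a homogeneous distribution of degree $-\frac{\nu + 1}{2}$ in $t - T$, singular precisely at $t = T$.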
\begin{proposition} \label{MAINFIOPROP} Assume that $\Gamma^c_{\psi, \epsilon} \subset \dot T^*({\mathbb R} \times H \times H \times H \times H)$ (cf. Proposition \ref{LRLEMPROP}) is a Lagrangian submanifold, let $(\Delta_H \times \Delta_H)^*\Gamma^c_{\psi, \epsilon} $ be its pullback under the diagonal embedding $\Delta_H(y, y') = (y, y, y', y')$ in the sense of \eqref{PB} and let $\Pi: {\mathbb R} \times H \times H \to {\mathbb R}$ be the natural projection. Let \begin{equation} \label{LambdacDEF} \Lambda^c_{\psi} := \Pi_*(\Delta_H \times \Delta_H)^*\Gamma^c_{\psi, \epsilon} \end{equation} be the pushforward-pullback of $\Gamma^c_{\psi, \epsilon}$. Then,
\begin{align} \label{PUSHPULL}\Lambda^c_{\psi} &= \{(t, \tau) \in T^*{\mathbb R}: \exists (s, \sigma, q, \eta) \; \text{with} \; s \in \mathrm{supp}\,(\hat{\psi}) \; \text{such that} \; \eqref{EQ} \; \text{ is satisfied}\}\\ \nonumber &= \bigcup_{t \in \mathcal{S}_{c, \psi}} \mathcal{G}_c(t), \;\; (\text{see}\;\; \eqref{gcalct}). \end{align}
Moreover, \eqref{PUSHPULL} is a clean composition if and only if the equation \eqref{EQ} is clean in the sense of Definition \ref{CLEAN}, and then \begin{equation} \label{ordStpsi} S^c(t, \psi) \in
I^{n - \frac{7}{4} } ({\mathbb R}, \Lambda_{\psi}^c), \quad (0 < c < 1).
\end{equation} The displayed order occurs at $t=0$ and the symbol $\sigma(S^c(t, \psi))$ at $t =0$ equals, \begin{equation} \label{sigmaStpsi}
\sigma(S^c(t, \psi)) |_{t =0} = C_{n, d} \; a_c^0(H, \psi) \tau^{n-2} |d \tau|^{{\frac{1}{2}}}. \end{equation} $C_{n,d}$ is a dimensional constant depending only on $n = \dim M, d = \dim H$.
\end{proposition}
\begin{remark} \label{IREM}
$S^c(t, \psi)$ is a sum of (translates of) homogeneous distributions with singularities at the discrete set $\mathcal{S}_c$. The order displayed above is the order of the singularity at $t=0$. The order of the singularity of \eqref{St} at any $t$ is less than or equal to the order at $t=0$. In the dominant case of Definition \ref{DOMDEF}, the order displayed above only occurs at $ t =0$. \end{remark}
\begin{proof} The first step is to calculate the wave front relation of the pullback $(\Delta_H \times \Delta_H)^*\Gamma^c_{\psi, \epsilon} $ using the pullback formula \eqref{PB}. The calculation is similar to that of \cite[(1.20)]{DG75} for the pullback to the `single diagonal' in $M \times M$. The pullback to the `double-diagonal'
$\Delta_{H \times H} \subset H \times H \times H \times H$ subtracts the two covectors at the same base points in the double-diagonal, i.e.
\begin{multline*} (\Delta_H \times \Delta_H)^*\Gamma^c_{\psi, \epsilon} = \{(t, \tau, (q,\eta -\pi_H \xi) ; (q', \eta' - \pi_H \xi')) \in T^*{\mathbb R} \times T^*H \times T^*H: \exists s \\ (t, \tau, (q, \eta), (q, \pi_H \xi), (G_H^s(q, \eta), \pi_H G_M^{t + cs}(q, \xi))) \in \Gamma^c_{\psi, \epsilon}\}, \end{multline*} or, in the notation $\zeta = (\zeta_M, \zeta_H) = (x, \xi, y, \eta'') \in {\rm Char}(Q_c)$ such that
$(x, \xi) \in T^*_H M, G^{c s +t}_M(x, \xi) \in T^*_H M$, $\zeta_H = (q, \eta), \zeta_H' = (q', \eta')$, and with $\pi: T^*X \to X$ the natural projection,
\begin{multline*}(\Delta_H \times \Delta_H)^*\Gamma^c_{\psi, \epsilon} = \{(t, \tau, \zeta_H, \zeta_H') : \exists s: (t, \tau, \pi_{H \times H} (x, \xi, y, \eta'') , \pi_{H \times H} (G^{ cs +t}_M \times G^{-s }_{H}) (x, \xi, y, \eta'') ) \in \Gamma^c_{\psi, \epsilon}, \\ (q, q, q', q') = (x, y, \pi G_M^{cs + t}(x, \xi), \pi G_H^{-s}(y, \eta'') ), \\ (\zeta_H, \zeta_H') = (\Delta_H \times \Delta_H)^* (\pi_{H \times H} \zeta, \pi_{H \times H} (G^{ c s +t}_M \times G^{-s }_{H}) (\zeta)) \}. \end{multline*}
\begin{remark} For the sake of clarity, we note that $ (\Delta_H \times \Delta_H)^*\Gamma^c_{\psi, \epsilon} $ consists of the analogues, for $c$-bi-angles, of geodesic loops. Unlike a closed geodesic, the initial and terminal directions of a geodesic loop do not have to be the same. A bi-angle is the analogue of a closed geodesic, but the `bi-angle-loop' consists of two geodesic arcs, an $M$-arc and an $H$-arc from $q$ to $q'$, with no constraint that the projection of the initial or terminal directions of the $M$-arc agree with those of the $H$-arc.
\end{remark}
Next, we pushforward the canonical relation $ (\Delta_H \times \Delta_H)^*\Gamma^c_{\psi, \epsilon}$ under the projection $\Pi_t: {\mathbb R} \times H \times H \to {\mathbb R}$,
$\Pi_t(t, \zeta_H, \zeta_H') = t$. As in \eqref{PF} (cf. \cite[(1.21)]{DG75}), the pushforward operation erases points of $$ (\Delta_H \times \Delta_H)^*\Gamma^c_{\psi, \epsilon} = \{(t, \tau, (q,\eta -\pi_H \xi) ; (q', \eta' - \pi_H \xi'))\} $$ unless $\eta -\pi_H \xi = \eta' - \pi_H \xi' = 0$. Equivalently, the pushforward relation only retains covectors normal to the fiber, which results in `closing' the bi-angle-loop wave front set to the set of `closed bi-angles'. Hence, the pushed forward Lagrangian is,
$$\begin{array}{lll} \Pi_{t *} (\Delta_H \times \Delta_H)^*\Gamma^c_{\psi, \epsilon} & = & \{(t, \tau); \;
(t, \tau, (q,\eta -\pi_H \xi) ; (q', \eta' - \pi_H \xi')) \in (\Delta_H \times \Delta_H)^*\Gamma^c_{\psi, \epsilon}: \\&&\\ & &\eta -\pi_H \xi = \eta' - \pi_H \xi' = 0 \} \\ &&\\
& = & \{(t, \tau): \mathcal{G}_c(t) \not= \emptyset\} = \Lambda_{\psi}^c,\;\; ({\rm cf.} \; \eqref{gcalc})
\end{array}$$
The pushforward Lagrangian is the stage in the sequence of compositions where closed geodesic bi-angles first occur. $\zeta_M$ is constrained to make an angle of $\arccos \; c $ with $H$. At this stage, the cutoffs $\chi_{\epsilon}$ away from tangential and normal directions become unnecessary and $B_{\epsilon}$ may be removed. Cleanliness of the diagonal pullback is equivalent to the cleanliness conditions on geodesic bi-angles of Definition \ref{CLEAN}, completing the proof of \eqref{PUSHPULL}.
The next step is to calculate the order \eqref{ordStpsi} of $S^c(t, \psi)$ at its singularities. As in \cite[Lemma 6.3]{DG75}, $\Pi_{t*} \Delta_H^*$ is an operator of order $0$ from ${\mathbb R} \times H \times H \to {\mathbb R}$ with the Schwartz kernel of the identity operator on ${\mathbb R} \times H \times H \times {\mathbb R}$. The excess of its composition with $\Gamma_{\psi,\epsilon}^c$ is by definition (and by \eqref{gcalc}), \begin{equation} \label{etform}
e(t) = \dim \{ (s, \xi) \in{\rm supp} \;\hat{\psi} \times S^c_H M: G_H^{-s} \circ \pi_H G_M^{cs + t} (\xi) = \pi_H(\xi)\} = \dim \mathcal{G}_c(t). \end{equation}
We refer to Section \ref{FIOSECT} for background (see \eqref{EXCESSCOMP} and \eqref{EXCESS}). The calculation of the excess is parallel to the calculation of the excess of $(\pi_* \Delta^*) U(t)$ in \cite{DG75}, where the excess is the dimension of the fixed point set of $G^t$ on $S^*M$.
Another description of the excess is given in \cite[Page 18]{HoIV}. It is the dimension of the fiber of the intersection over the composition (see Section \ref{FIOSECT} for the precise statement); that explains why we may restrict to $S^c_H M$ rather than $T^c_H M$, and why the fiber has dimension $\dim \mathcal{G}^t_c$. By the order calculation at the end of the proof of Proposition \ref{LRLEMPROP}, the order of the singularity at each $t \in \mathcal{S}_c$ in the singular support is given by, \begin{equation} \label{ORDST} {\rm ord} \;S^c(t, \psi) = {\rm ord}\, e^{it P} \psi(Q_c) + {\frac{1}{2}} (n-d) + \frac{e(t)}{2} = -\frac{3}{4} + {\frac{1}{2}} (n-d) + \frac{\dim \mathcal{G}^t_c }{2}. \end{equation}
In the notation \eqref{gcalct}, $$e(0) = \dim \mathcal{G}_c^{0} =
\dim \mathcal{G}_c^{(0,0)} = \dim S^c_H M = n + d - 2, \; (0 < c < 1).
$$ Indeed, at each point $y \in H$ and for $c \leq 1$, $$S^c_y M = \{\xi \in S_y^* M: \xi = \eta + \nu, \eta \in T_y^* H, \nu \in N^*_y H,
|\eta| = c, |\nu| = \sqrt{1 - c^2}\}.$$ For $0 < c < 1$, $S_y^c M $ is a hypersurface in $S^*_y M$ and has dimension $n - 2$.
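The dimension count behind both statements is elementary: $\eta$ ranges over a sphere of radius $c$ in the $d$-dimensional space $T^*_y H$ and $\nu$ over a sphere of radius $\sqrt{1 - c^2}$ in the $(n - d)$-dimensional space $N^*_y H$, so \[
\dim S^c_y M = (d - 1) + (n - d - 1) = n - 2, \qquad \dim S^c_H M = d + (n - 2) = n + d - 2. \]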
Combining with \eqref{ORDST}, it follows that the order of \eqref{St} at $t=0$ equals,
\begin{equation} {\rm ord} \;S^c(t, \psi) |_{t =0} =
- \frac{3}{4} + {\frac{1}{2}} (n - d) + {\frac{1}{2}} (n + d-2) = n - 1 - \frac{3}{4} \;\;(0 < c < 1).
\end{equation}
For general $T \in \mathcal{S}_c$,
\begin{equation} \label{genord} {\rm ord} \;S^c(t, \psi) |_{t =T} = - \frac{3}{4} + {\frac{1}{2}} (n - d) + {\frac{1}{2}} \dim \mathcal{G}^T_c.\end{equation}
To complete the proof of Proposition \ref{MAINFIOPROP}, we need to calculate the symbol of \eqref{St} at each singularity, and in particular to show that the symbol at $t=0$ is given by \eqref{sigmaStpsi}. Since the coefficient at $t=0$ was calculated in great detail in the earlier paper \cite{WXZ20}, we only briefly sketch the argument.
The symbol is calculated using iterated pushforward and pullback formulae as reviewed in \eqref{PFPB} (see also \eqref{PB} - \eqref{PF}). The pushforward is by the canonical projection $\Pi_t$, and the pull-back is under the canonical embedding $\Delta_H \times \Delta_H$. In the terminology of \cite{GS13}, these maps must be `enhanced' with natural half-densities to make them `morphisms' on half-densities. As reviewed in Section \ref{MORPHISM}, the embedding must be enhanced by a half-density on the conormal bundle of the image, and the projection must be enhanced by a half-density along the fiber. As discussed in \cite[Page 66]{DG75}, there is a natural half-density on the conormal bundle of the diagonal. In fact, the enhancement construction in this case is quite simple, since the pull-back under $\Delta_H \times \Delta_H$ of the half-density symbol on $\Gamma_{\psi,\epsilon}^c$ produces a density on the fibers of the projection $\Pi_t$. One integrates this density over the fibers to obtain the symbol in \eqref{sigmaStpsi} at any $t$. The half-density symbols are always volume half-densities on their respective canonical relations. In particular when $c \in (0,1)$ the coefficient of the singularity at $t=0$ is a dimensional constant times ${\rm Vol}(S^c_H M)$. \end{proof}
\subsection{Proof of Theorem \ref{main 4}, Theorem \ref{main 5} and Proposition \ref{MORESINGS} } \label{3PROOFS}
By \eqref{FTFORM}, $ {\mathcal F}_{\lambda \to t} d N^{c}_{\psi, H } = S^c(t, \psi)$, and Proposition \ref{MAINFIOPROP} shows that $S^c(t, \psi)$ is a discrete sum of translated polyhomogeneous distributions in $t$. It follows from Remark \ref{IREM} and \eqref{I*} that if a distribution lies in $I^{\frac{\nu}{2} - \frac{1}{4}}(\Lambda_T)$ then its inverse Fourier transform is asymptotic to $ s^{\frac{\nu-1}{2}}$. This suggests that, for $0 < c < 1$, the singularity at $t =0$ produces an asymptotic expansion of order $ n-2$.
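Indeed, the exponent bookkeeping is a one-line check using \eqref{ordStpsi}: \[
S^c(t, \psi) \in I^{n - \frac{7}{4}}({\mathbb R}, \Lambda^c_{\psi}) \Longrightarrow \frac{\nu}{2} - \frac{1}{4} = n - \frac{7}{4} \Longrightarrow \nu = 2n - 3, \quad \frac{\nu - 1}{2} = n - 2. \]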
The next lemma gives the precise statement and concludes the proof of Theorem \ref{main 4} for test functions $\rho$ whose Fourier support contains only the singularity at $t=0$. If one uses general test functions with Fourier transforms in $C_0^{\infty}({\mathbb R})$, one adds similar contributions from the non-zero $t \in \mathcal{S}_c$. As mentioned above, the order of these singularities is no larger than the order at $ t=0$. If they are strictly smaller than the order at $t = 0$, then $t= 0$ is called dominant (Definition \ref{DOMDEF}).
\begin{lemma}\label{CONVOLUTION} Assume $0 < c < 1$. Let $\rho \in \mathcal{S}({\mathbb R})$ with $\hat{\rho} \in C_0^{\infty}$, $\int \rho = 1$, and with ${\rm supp}\, \hat{\rho}$ in a sufficiently small interval around $0$. Then, there exist $\beta_j \in {\mathbb R}$ and a complete asymptotic expansion, $$ N^{c} _{\rho, \psi, H }(\lambda) \sim \lambda^{n-2} \sum_{j=0}^{\infty} \beta_j \; \lambda^{-j}, $$ with $\beta_0 = \; C_{n,d} \; a_c^0(H, \psi)$ (see Theorem \ref{main 5} for the general formula).
\end{lemma}
\begin{proof} The asymptotic expansions follow immediately from Proposition \ref{MAINFIOPROP}. By definition (see \eqref{St}),
\begin{equation} \label{EXPRESSIONS2} \begin{array}{lll} N^{c} _{\rho, \psi, H }(\lambda) & = & \sum_{j, k} \rho(\lambda - \lambda_k) \psi( \mu_j - c \lambda_k) \left| \int_{H} \varphi_j \overline{\psi_k}dV_H \right|^2 \\ && \\ & =& \int_{{\mathbb R}} \hat{\rho}(t) e^{it \lambda} S^c(t, \psi) dt
. \end{array} \end{equation} If $\mathcal{G}_c^{0,0}$ is the only component with $t=0$, then by Proposition \ref{MAINFIOPROP} and by the definition \eqref{I*} of Lagrangian distributions,
for sufficiently small $|t|$, \begin{equation} \label{SI*} S^c(t, \psi) = \sum_{j=0}^{\infty} \alpha_j \int_0^{\infty} s^{n-2 -j} e^{- i s t} ds \; {\rm mod}\; C^{\infty}. \end{equation}
{$$ \begin{array}{lll} \int_{{\mathbb R}} \hat{\rho}(t) e^{it \lambda} S^c(t, \psi) dt & = & \sum_{j=0}^{\infty} \alpha_j \int_0^{\infty} s^{n-2 -j} \left( \int_{{\mathbb R}} \hat{\rho}(t) e^{it \lambda}e^{- i s t} dt \right) ds + O(\lambda^{-\infty}) \\ &&\\& = & \sum_{j=0}^{\infty} \alpha_j \int_0^{\infty} s^{n-2 -j} \rho(\lambda -s) ds + O(\lambda^{-\infty}) \\ && \\ & = & \sum_{j=0}^{\infty} \alpha_j \int^{\lambda}_{-\infty} (\lambda - s) ^{n-2 -j} \rho(s) ds + O(\lambda^{-\infty}) \\ &&\\ & = & \sum_{j=0}^{\infty} \alpha_j \int^{\infty}_{-\infty} (\lambda - s) ^{n-2 -j} \rho(s) ds + O(\lambda^{-\infty}) \\ &&\\ & = & \sum_{j=0}^{\infty} \tilde \alpha_j \lambda^{n-2-j} + O(\lambda^{-\infty}), \end{array}$$}where $\tilde \alpha_0 = \alpha_0$ is the principal symbol of $S^c(t, \psi)$ at $t=0$ (given in Proposition \ref{MAINFIOPROP}), and the $\tilde \alpha_j$, $j \geq 1$, are obtained in part by expanding \[
\int_{-\infty}^\infty (\lambda - s)^{n-2-j} \rho(s) \, ds \] in powers of $\lambda$.
As discussed in Section \ref{supplarge}, when the support of $\hat{\psi}$ is arbitrarily large, the principal symbol is changed by summing over the components of $\mathcal{G}_c^0$. As in that section, the set of $(c, s, 0)$-bi-angles with $t = 0$ is assumed to be a union of clean components $Z_j(0)$ of dimension $d_j$. In our situation $Z_j(0)$ is a component of $ \mathcal{G}^0_c$. Then, for $t$ sufficiently close to $0$, $$ S^c(t, \psi) = \sum_j \beta_j(t), $$ with \begin{equation}\label{betaeq2} \beta_j(t) = \int_{{\mathbb R}} \alpha_j(s) e^{- i s t} ds, \;\; \text{ with}\;\; \alpha_j(s) \sim (\frac{s}{2 \pi i})^{ -1 + {\frac{1}{2}} (n -d)+\frac{d_j}{2}}\;\; i^{- \sigma_j} \sum_{k=0}^{\infty} \alpha_{j,k} s^{-k}, \end{equation} where $d_j $ is the dimension of the component $Z_j(0) \subset \mathcal{G}_c^0$. \end{proof}
\subsubsection{Proof of Proposition \ref{MORESINGS}} \begin{proof}
By Proposition \ref{LRLEMPROP}, at a non-zero period, the exponents are calculated from \eqref{ORDST}, with the excess \eqref{etform} given by $\dim \mathcal{G}^t_c$. Hence, the exponents at a non-zero period are now ${\frac{1}{2}} (n-d) - 1 + \frac{\dim \mathcal{G}^t_c }{2}$.
In the notation of \cite[Theorem 4.5]{DG75}, we are assuming that the set of $(c, s, t)$-bi-angles with $t = T \in {\rm singsupp}\; S^c(t, \psi) \backslash \{0\}$ is a union of clean components $Z_j$ of dimension $d_j$. In our situation $Z_j$ is a component of $ \mathcal{G}^T_c$. Then, for $t$ sufficiently close to $T$, $$S^c(t, \psi) = \sum_j \beta_j(t - T), $$ with $$ \beta_j(t) = \int_{{\mathbb R}} \alpha_j(s) e^{- i s t} ds, \;\; \text{ with}\;\; \alpha_j(s) \sim (\frac{s}{2 \pi i})^{ -1 + {\frac{1}{2}} (n -d)+\frac{d_j}{2}}\;\; i^{- \sigma_j} \sum_{k=0}^{\infty} \alpha_{j,k} s^{-k},$$ where $d_j $ is the dimension of the component $Z_j$ of $\mathcal{G}_c^T$. \end{proof}
\subsection{Sub-principal term of $N_{\rho, \psi}(\lambda)$ when $\hat{\psi}$ has small support and both $\hat{\psi}$ and $\hat{\rho} $ are even.}\label{SUBPRINCIPALSECT}
{The subprincipal term may be calculated by the stationary phase method, but even the subprincipal term is a sum of a large number of terms with up to six derivatives on the phase and two derivatives of the amplitude. It is easily seen (for instance, on a flat torus) that the subprincipal term contains sums with the derivatives of $\hat{\rho}(t)$ and $\hat{\psi}(s)$ at $t=s=0$. It is assumed in \cite[Proposition 2.1]{DG75} (which only involves $\hat{\rho}$) that $\hat{\rho} \equiv 1$ near $0$, hence none of its derivatives contribute to the subprincipal term. We are making the same assumption.
In addition, derivatives of the amplitude of the wave kernel may contribute to the subprincipal term. We use a parity argument as in \cite[Page 48]{DG75} to show that the contribution of these terms is also zero.
The sub-principal symbol of $\sqrt{-\Delta_M}$ and of $\sqrt{-\Delta_H}$ both vanish,
hence the subprincipal symbols of $\sqrt{-\Delta_M} \otimes I$ and of $Q_c$ both vanish. The homogeneous part of degree $k$ in $\sigma_P(x, \xi)$ is even, resp. odd, in $\xi$ if $k$ is even, resp. odd. By induction with respect to $r$ it follows that $(\frac{\partial}{\partial t})^r a_{-j}$ is even, resp. odd, if $r-j$ is even, resp. odd.
The amplitude of
$e^{it P} e^{i s Q_c}$ is obtained by integrating the parametrix formula for $e^{i t P} \circ e^{- i s Q_c}$ as
a tensor product $e^{i (t - c s) P_M} \otimes e^{i s P_H} $. The parities of the terms in the amplitude are thus determined as in \cite{DG75}, for $s = t = 0$. One has to restrict the $M$-amplitudes to $H$, but they still have the same parity. One further has to restrict them to
the diagonals in $H \times H$, which seems to multiply the amplitudes. But the subprincipal term can only be obtained as the product of the
principal symbol and the subprincipal symbol.
Hence it is odd and its integral over cospheres vanishes.
}
\begin{remark} We opt not to calculate the Maslov indices $\sigma_j$ for the sake of brevity, and absorb them into the constants $\beta_{\ell}$. \end{remark}
\section{Proof of Theorem \ref{main 2} and Theorem \ref{main 2b}}
In this section we apply a cosine Tauberian theorem to deduce Theorem \ref{main 2} from Theorem \ref{main 4}.
\subsection{Tauberian Theorems} \label{SS Tauberian}
For the reader's convenience we quote the statements of two Fourier Tauberian theorems from \cite{SV}. In what follows, $\rho$ will be{ a strictly positive even Schwartz-class function} on ${\mathbb R}$ with compact Fourier support and $\hat\rho(0)=1$. $N$ will be a tempered, monotone increasing function with $N(\lambda) = 0$ for $\lambda < 0$, and $N'$ its distributional derivative, a nonnegative measure on ${\mathbb R}$.
\begin{proposition}[Corollary B.2.2 in \cite{SV}] \label{tauberian 1}
Fix $\nu \geq 0$. If $N' * \rho(\lambda) = O(\lambda^\nu)$, then
\[
N(\lambda) = (N * \rho)(\lambda) + O(\lambda^\nu).
\]
This estimate holds uniformly for a set of such $N$ provided $N' * \rho(\lambda) = O(\lambda^\nu)$ holds uniformly. \end{proposition}
\begin{proposition}[Theorem B.5.1 in \cite{SV}] \label{tauberian 2}
Fix $\nu \geq 0$. Suppose that $N' * \rho(\lambda) = O(\lambda^\nu)$ and that additionally
\[
N' * \chi(\lambda) = o(\lambda^\nu)
\]
for every Schwartz-class $\chi$ on ${\mathbb R}$ whose Fourier support is contained in a compact subset of $(0,\infty)$. Then,
\[
N(\lambda) = N * \rho(\lambda) + o(\lambda^\nu).
\] \end{proposition}
\subsection{Proof of Theorem \ref{main 2}}\label{TAUBPSISECT} \begin{proof} Theorem \ref{main 2} pertains to the Weyl function $N^{c} _{\psi, H }(\lambda)$ of \eqref{cpsi}, $$N^{c} _{\psi, H }(\lambda): =
\sum_{j, k: \lambda_k \leq \lambda} \psi( \mu_j - c \lambda_k) \left| \int_{H} \varphi_j \overline{\psi_k}dV_H \right|^2. $$ {Recall our assumption that $\psi \geq 0$.} Then, $N^{c} _{\psi, H }(\lambda)$
is monotone non-decreasing and has Fourier transform $S^c(t, \psi)$ \eqref{St}.
We apply Proposition \ref{tauberian 1} with ${\rm supp}\; \hat{\rho}\; \cap\; {\rm singsupp}\; S^c(t, \psi) = \{0\}$ and to $d N^{c} _{\psi, H }(\lambda) $. By Lemma \ref{CONVOLUTION}, $\rho* d N^{c} _{\psi, H }(\lambda) = \beta_0 \;\lambda^{n-2} + O(\lambda^{n-3})$, and therefore, $$\begin{array}{lll} N^{c} _{\psi, H }(\lambda) & = & \rho* N^{c} _{\psi, H }(\lambda) + O(\lambda^{n-2}) \\ &&\\ & = & \dfrac{\beta_0}{n-1} \;\lambda^{n- 1} + O(\lambda^{n-2}),\end{array} $$ concluding the proof of Theorem \ref{main 2}. \end{proof}
\subsection{Proof of Corollary \ref{JUMPCOR} } To prove Corollary \ref{JUMPCOR} it suffices to prove that, for any $\epsilon > 0$ there exists a test function $\psi \geq 0, \hat{\psi} \in C_0^{\infty}({\mathbb R}), \hat{\psi}(0) =1$ and a universal constant $C(\epsilon, \delta)$ depending only on $(\epsilon, \delta)$ so that for all $\lambda_j$, \begin{equation} \label{LB} J_{\psi, H}^c(\lambda_j) \geq C(\epsilon, \delta) \; J_{\epsilon, H}^{c} (\lambda_j). \end{equation} Then the upper bound for $ J_{\psi, H}^c(\lambda_j) $ given in Corollary \ref{main 2cor} provides the upper bound for $ J_{\epsilon, H}^{c} (\lambda_j)$.
\begin{proof} We have \[
\sum_{ k: |\mu_k - c \lambda_j | \leq \epsilon } \left| \int_{H} \varphi_j \overline{\psi_k}dV_H \right|^2 \leq \sum_k \psi(\mu_k - c\lambda_j)\left| \int_{H} \varphi_j \overline{\psi_k}dV_H \right|^2 \] provided $\psi$ is chosen to be a nonnegative Schwartz function with small Fourier support, with $\psi(0) > 1$, and perhaps scaled wider so that $\psi \geq \mathbf{1}_{[-\epsilon,\epsilon]}$. By Corollary \ref{main 2cor}, the right side is $O(\lambda_j^{n-2})$, concluding the proof. \end{proof}
\subsection{Proof of Theorem \ref{main 2b}}\label{main 2bSECT}
\begin{proof}
By Proposition \ref{tauberian 2}, it suffices to check the additional condition, \begin{equation} \label{ADDCOND}
\chi * d N_{\psi, H}^c (\lambda) = o(\lambda^{n-2} ) \end{equation} for every Schwartz-class $\chi$ on ${\mathbb R}$ whose Fourier support is contained in a compact subset of $(0,\infty)$.
To prove this, we consider the expansions given in Theorem \ref{main 4} and Theorem \ref{main 5} and especially Proposition \ref{MORESINGS}. In addition to the assumptions of Theorem \ref{main 4}, the assumption of Theorem \ref{main 2b} is that $d_j(T) < d_j(0)$ for $T \not=0$, i.e. that $\dim Z_j(0) > \dim Z_j(T)$ for all $T \neq 0$. This assumption together with Proposition \ref{MORESINGS} shows that \eqref{ADDCOND} holds; then Proposition \ref{tauberian 2} implies the first claim in Theorem \ref{main 2b}.
Finally, we bound the jumps in the Weyl function of Theorem \ref{main 2b} by the remainder as before and obtain $J_{\psi, H}^c(\lambda) = o_\psi(\lambda^{n-2})$. \end{proof}
\section{Proof of Theorem \ref{main 3}}\label{BUSECT}
In this section we deduce Theorem \ref{main 3} from Theorem \ref{main 5} and an additional Tauberian theorem, which allows us to replace $\psi$ in the inner sum of \eqref{cpsirho} (or \eqref{EXPRESSIONS2}),
by an indicator function ${\bf 1}_{[-\epsilon, \epsilon]}$. Throughout this section, we assume that $c < 1$.
For simplicity of notation, when $\psi = {\bf 1}_{[-\epsilon, \epsilon]}$, we write,
\begin{equation} \label{cpsirho2}
N^{c} _{\epsilon, H }(\lambda) := \sum_{j: \lambda_j \leq \lambda} J^c_{{\bf 1}_{[-\epsilon, \epsilon]}}(\lambda_j)
\end{equation} where as in \eqref{JDEF} \begin{equation} \label{SUM}
J^c_{ {\bf 1}_{[-\epsilon, \epsilon]}}(\lambda_j) : = \sum_{\ell: \lambda_{\ell} = \lambda_j}
\sum_{ \substack{k: | \mu_k - c \lambda_j| \leq \epsilon}} \left| \int_{H} \varphi_{\ell} \overline{\psi_k}dV_H \right|^2 \end{equation}
The Tauberian theorem is not used in the traditional way, i.e. to replace a monotone increasing function with jumps by a smoothly varying
sum. The relevant monotone function is the Weyl-Kuznecov sum \eqref{cpsirho2}, and as with \eqref{cpsi}, its jump discontinuities occur only at the points $\lambda = \lambda_j$, with jumps $J^c_{ {\bf 1}_{[-\epsilon, \epsilon]}}(\lambda_j)$.
We treat the sum \eqref{SUM} as a semi-classical Weyl
function with semi-classical parameter $\lambda_j^{-1}$ and deploy
the semi-classical Tauberian theorem of \cite{PR85, R87}. The main complication is that the terms
are weighted by the unbounded and non-uniform weights $\left| \int_{H} \varphi_j \overline{\psi_k}dV_H \right|^2$ (in $(\lambda_j, \mu_k)$). Moreover, $J^c_{ {\bf 1}_{[-\epsilon, \epsilon ]}}(\lambda_j)$ does not usually
have an asymptotic expansion as $\lambda_j \to \infty$ due to lack of asymptotics for the individual eigenfunctions $\varphi_j$. We need to sum
in $\lambda_j$ as well to obtain asymptotics.
To set things up for the Tauberian arguments in \cite{PR85,R87}, we define \begin{equation} \label{EMPIRICAL} \left\{
\begin{array}{l} \displaystyle d\mu^c_{\lambda}(x) :=\sum_{j: \lambda_j \leq \lambda} \sum_{k} \left| \int_{H} \varphi_j \overline{\psi_k}dV_H \right|^2 \delta_{\mu_k - c\lambda_j}(x). \\ \displaystyle \sigma^c_{\lambda} (x) = \int_{-\infty}^x d\mu^c_{\lambda}(y). \end{array} \right. \end{equation} By \eqref{c}, $$ N^{c}_{\epsilon, H}(\lambda) := \int_{ -\epsilon}^{ \epsilon} d\mu^{c}_{\lambda} = \sigma^c_{\lambda} (\epsilon) - \sigma^c_{\lambda} (-\epsilon) = \sum_{j,\lambda_j \leq \lambda} J_{{\bf 1}_{[-\epsilon,\epsilon]}}^c(\lambda_j) . $$
Note that $\mathrm{supp}\, \mu^c_{\lambda} \subset \{x \geq - c \lambda\}$ and that, \begin{equation} \label{MASS}
\int_{-\infty}^{\infty} d \mu_{\lambda}^c = \sum_{j: \lambda_j \leq \lambda} \|\gamma_H \varphi_j\|_{L^2(H)}^2. \end{equation}
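\eqref{MASS} is Parseval's identity, assuming (as we do throughout) that $\{\psi_k\}$ is an orthonormal basis of $L^2(H)$: \[
\sum_k \left| \int_H \varphi_j \overline{\psi_k}\, dV_H \right|^2 = \sum_k \left| \langle \gamma_H \varphi_j, \psi_k \rangle_{L^2(H)} \right|^2 = \|\gamma_H \varphi_j\|^2_{L^2(H)}. \]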
In comparison with the proofs of Theorems \ref{main 2} - \ref{main 2b}, we do not have a complete, or even two-term, asymptotic expansion for either of the once-smoothed sums, \eqref{cpsi} or
$\sum_j \rho(\lambda - \lambda_j) J^c_{{\bf 1}_{[-\epsilon, \epsilon]}}(\lambda_j). $
The Tauberian strategy is to smooth out the indicator functions ${\bf 1}_{[-\epsilon, \epsilon]}$ and apply Theorem \ref{main 5} to this
kind of mollified sum. Our aim is to show that, under the assumption that $\mathcal{G}^{0,0}_c$ is dominant, we can sharpen the sum at the expense of weakening the remainder to $o_{c,\epsilon}(\lambda^{n-1})$.
The smoothing error is an `edge effect' due to the sum of the terms $ \left| \int_{H} \varphi_j \overline{\psi_k}dV_H \right|^2$
near the endpoints $|\mu_k - c \lambda_j |= \epsilon $ of the interval $|\mu_k - c \lambda_j |\leq \epsilon $. The Tauberian theorem is used to show that the quantities $\mu_k - c \lambda_j $ are sufficiently uniformly distributed, i.e. do not concentrate near the endpoints $\pm \epsilon$.
\subsection{The proof of Theorem \ref{main 3}}
Following \cite{PR85, R87}, we denote by
$\rho_1 \in C_0^{\infty}(-1,1) $ a smooth cutoff satisfying $\rho_1(0) = 1$, $\rho_1(-t) = \rho_1(t)$. With no loss of
generality, we
assume $\hat \rho_1(\tau) \geq 0$ and $\hat \rho_1(\tau) \geq \delta_0 > 0$ for $|\tau| \leq \epsilon_0$.
Then set, \begin{equation} \label{thetaDEF} \rho_T(\tau) = \rho_1(\frac{\tau}{T}), \;\;\; \theta_{T}(x) := \hat \rho_T (x) = T \hat \rho_1(T x). \end{equation}
In particular, $\int \theta_T(x) dx = 1$ and $\theta_T(x) > T \delta_0$ for $|x| < \epsilon_0/T$. Note that $\theta_{T}* d\mu^c_{\lambda}$ is by definition the measure, $$
\theta_{T}* d\mu^c_{\lambda}(x) = \sum_{j: \lambda_j \leq \lambda} \sum_k \theta_T(\mu_k - c \lambda_j - x) \left| \int_{H} \varphi_j \overline{\psi_k}dV_H \right|^2. $$ Of course, $\theta_{T}* d\mu^c_{\lambda} (x) \to d\mu^c_{\lambda} (x) $ as $T \to \infty$.
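The normalizations imposed on $\rho_1$ above can always be achieved; one standard construction (a sketch, with $\phi$ an auxiliary bump function of our choosing) is \[
\rho_1 := \frac{\phi * \phi}{(\phi * \phi)(0)}, \qquad \phi \in C_0^{\infty}(-\tfrac{1}{2}, \tfrac{1}{2}) \;\; \text{even, real-valued}, \; \phi \not\equiv 0, \] so that $\rho_1 \in C_0^{\infty}(-1, 1)$ is even with $\rho_1(0) = 1$, and $\hat{\rho}_1 = |\hat{\phi}|^2 / (\phi * \phi)(0) \geq 0$ with $\hat{\rho}_1(0) > 0$, whence $\hat{\rho}_1 \geq \delta_0 > 0$ on $[-\epsilon_0, \epsilon_0]$ by continuity.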
Let us record the relation between the various relevant quantities.
\begin{lemma} \label{RELS} We have, \[
\int_{-\epsilon}^{\epsilon} \theta_{T}* d\mu^c_{\lambda} (x) = N^{c} _{\theta_T * {\bf 1}_{[-\epsilon, \epsilon]}, H }(\lambda) = \sigma_{\lambda}^c * \theta_{T} (\epsilon) - \sigma_{\lambda}^c * \theta_{T} (-\epsilon) \]
\end{lemma}
\begin{proof}
This follows from the definitions. \end{proof}
The asymptotics of $ N^{c} _{\theta_T * {\bf 1}_{[-\epsilon, \epsilon]}, H }(\lambda) $ are given in Theorem \ref{main 5} with $$
\psi = \psi_{T, \epsilon} : = \theta_T * {\bf 1}_{[-\epsilon, \epsilon]}. $$ We write $\psi_{T, \epsilon}$ henceforth to simplify notation. Putting things together, \begin{equation} \label{CORCOR}
N^{c} _{\epsilon, H}(\lambda) = N^{c} _{\psi_{T, \epsilon}, H }(\lambda) + N^{c}_{(\psi_{\infty, \epsilon} - \psi_{T, \epsilon}), H }(\lambda), \end{equation} where $\psi_{\infty, \epsilon} = \mathbf{1}_{[-\epsilon,\epsilon]}$.
Here, $\epsilon $ is fixed. The hard step is to estimate the error in the smoothing approximation, \begin{equation}\label{NEEDT}\begin{array}{l} N^{c}_{(\psi_{\infty, \epsilon} - \psi_{T, \epsilon}), H }(\lambda) =\left(\sigma_{\lambda}^c (\epsilon) - \sigma_{\lambda}^c (- \epsilon) \right)- \left(\sigma_{\lambda}^c * \theta_{T} (\epsilon)- \sigma_{\lambda}^c * \theta_{T} (- \epsilon) \right),
\end{array} \end{equation} in terms of $(\lambda, T)$.
\begin{proposition}\label{TLEMa} With the same notation and assumptions as in Theorem \ref{main 3}, for any $\epsilon > 0$ and $c \in (0,1)$, there exists a constant $\gamma(c, \epsilon) $ such that, for any $T >0$,
$$
|N^{c}_{(\psi_{\infty, \epsilon} - \psi_{T, \epsilon}), H }(\lambda)| = \left| \int_{-\epsilon}^{\epsilon} (\theta_T * d\mu_{\lambda}^c- d\mu_{\lambda}^c) \right| \leq
\frac{\gamma(c, \epsilon)}{T} \lambda^{n-1} + O_{T, \epsilon} (\lambda^{n-3/2})
.$$ \end{proposition}
Before giving the proof, we verify that Proposition \ref{TLEMa} implies Theorem \ref{main 3}. By Theorem \ref{main 5} and the hypothesis that $\mathcal{G}_c^{0,0}$ is dominant, we have, $$ N^{c} _{\psi_{T, \epsilon}, H}(\lambda) = \hat{\psi}_{T, \epsilon}(0) \; A_{n, d}^c {\mathcal H}^{d}(H) \lambda^{n-1 } + R_{\psi_{T, \epsilon}} (\lambda), $$ where $A_{n,d}^c $ is the leading coefficient, e.g. $A_{n,d}^c = C_{n, d} c^{d-1} (1 - c^2)^{\frac{n-d-2}{2}} $ for $0 < c < 1$, and where $R_{\psi_{T, \epsilon}}(\lambda) = O_{T, \epsilon}(\lambda^{n-3/2}) $. Moreover, $\hat{\psi}_{T, \epsilon} (0) = 2 \epsilon.$ The full error term in \eqref{CORCOR} is therefore, \begin{align*} \widetilde{R}_{T, \epsilon}(\lambda): & = N^{c}_{(\psi_{\infty, \epsilon} - \psi_{T, \epsilon}), H }(\lambda) + R_{\psi_{T, \epsilon}}(\lambda) \\ & = O( \frac{\gamma(c, \epsilon)}{T} \lambda^{n-1}) + O_{T, \epsilon} (\lambda^{n-3/2 }). \end{align*} The bound $\widetilde R_{T,\epsilon}(\lambda) = o_\epsilon(\lambda^{n-1})$ follows by taking $T = T(\lambda) \nearrow \infty$ sufficiently slowly as a function of $\lambda$.
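Explicitly, the last step is a routine diagonal argument: for each fixed $T$, the $O_{T, \epsilon}(\lambda^{n - 3/2})$ term is $o(\lambda^{n-1})$, so \[
\limsup_{\lambda \to \infty} \lambda^{-(n-1)} |\widetilde{R}_{T, \epsilon}(\lambda)| \leq \frac{C \gamma(c, \epsilon)}{T}, \] for some constant $C$ coming from the implicit constant in Proposition \ref{TLEMa}, and choosing $T(\lambda) \nearrow \infty$ slowly enough that the $T$-dependent remainders stay $o_{\epsilon}(\lambda^{n-1})$ yields $\widetilde{R}_{T(\lambda), \epsilon}(\lambda) = o_{\epsilon}(\lambda^{n-1})$.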
\subsection{Proof of Proposition \ref{TLEMa}}
We have, $$ \begin{array}{lll} \int_{-\epsilon }^{\epsilon } (\theta_T * d\mu_{\lambda}^{c} - d\mu_{\lambda}^{c} )& = & \int_{{\mathbb R}} \left(\mu_{\lambda}^c ( [-\epsilon , \epsilon ] - \tau) - \mu^c_{\lambda} ([- \epsilon , \epsilon ])\right) \theta_{T}(\tau) d\tau\\&&\\ & = &T \int_{{\mathbb R}} \left( \mu^c_{\lambda} ([- \epsilon ,\epsilon ] - \tau) - \mu_{\lambda}^c([-\epsilon , \epsilon ]) \right) \hat{\rho}_1( \tau T) d \tau \\&&\\
 & = &T \int_{|\tau| \leq \frac{1}{T} } \left( \mu^c_{\lambda} ([- \epsilon ,\epsilon ] - \tau) - \mu^c_{\lambda} ([- \epsilon, \epsilon ]) \right) \hat{\rho}_1( \tau T) d \tau \\&&\\
 & + & T \int_{|\tau| > \frac{1}{T}} \left( \mu^c_{\lambda} ([-\epsilon , \epsilon ] - \tau) - \mu^c_{\lambda} ([-\epsilon ,\epsilon ]) \right) \hat{\rho}_1( \tau T) d \tau \\&&\\& =: & I_1 + I_2. \end{array} $$
The key point is to prove the analogue of \cite[Proposition 3.2]{PR85}. \begin{proposition}\label{TLEM2} With the same notation and assumptions as in Theorem \ref{main 3}, and for any $0 < c < 1$, there exist constants $\gamma_1(c, \epsilon)$, $C_1(T,c)$ such that, for any $T > 0$,
$$\left|\mu^c_{\lambda} ([- \epsilon , \epsilon ] - \tau) -\mu^c_{\lambda} ([- \epsilon, \epsilon ] ) \right| \leq \gamma_1(c, \epsilon) (\frac{1}{T} + |\tau|) \lambda^{n - 1 } + C_1(T,c) O(\lambda^{n-3/2}). $$
\end{proposition}
We first show that Proposition \ref{TLEM2} implies Proposition \ref{TLEMa}.
\begin{proof} We only verify this in the case $0 < c < 1$, since the second case is proved in the same way. First, observe that Proposition \ref{TLEM2} implies,
\begin{equation} \label{Izbd} I_1 \leq \sup_{|\tau| \leq \frac{1}{T}}
\left| \mu^c_{\lambda} ([- \epsilon, \epsilon ] - \tau) - \mu^c_{\lambda} ([-\epsilon, \epsilon] ) \right|, \end{equation}
and Proposition \ref{TLEM2} immediately implies the desired bound for $|\tau| \leq \frac{1}{T}$. For $I_2$ one uses that $\hat{\rho}_1 \in \mathcal{S}({\mathbb R})$.
Since
$T \int_{|\tau| \geq \frac{1}{T} } \hat{\rho}_1(\tau T) d \tau \leq 1, $ Proposition \ref{TLEM2} implies that there exist constants $A > 0$, $C_1(T,c)$ so
that
\begin{equation} \label{IIzbd} \begin{array}{lll} I_2 &\leq & \; A \;
\lambda^{n-1}\gamma_1(c,\epsilon) T \int_{|\tau| > \frac{1}{T}} (\frac{1}{T} + |\tau|)
\hat{\rho}_1( T \tau) d \tau \\&&\\ &&+ C_1(T,c) O(\lambda^{n-3/2 }) T \int_{|\tau| > \frac{1}{T}} \hat{\rho}_1(T \tau ) d \tau. \end{array}\end{equation}
If one changes variables to $r = T \tau$ one also gets the estimate of the Tauberian Lemma.
\end{proof}
We now prove Proposition \ref{TLEM2}.
\begin{proof} Since we are studying the increments $ \mu^c_{\lambda} ([-\epsilon , \epsilon ] - \tau) - \mu^c_{\lambda} ([-\epsilon ,\epsilon ])$ and since the integral splits into the regions where $|\tau| \leq \frac{1}{T}$ and $|\tau | \geq \frac{1}{T}$,
the proof is broken up into 3 cases: (1) $|\tau | \leq \frac{\epsilon_0}{T}$,
\; (2) $\tau = \frac{\ell}{T} \epsilon_0,$ for some $\ell \in {\mathbb Z}$, and (3) $ \frac{\ell}{T} \epsilon_0 \leq \tau \leq \frac{\ell+1}{T} \epsilon_0 $, for some $ \ell \in {\mathbb Z}$.
The key assumption that the only maximal component is the principal component is used to obtain the factor of $\frac{1}{T}$, which is responsible
for the small oh of the remainder.
We use the exact formula for the leading coefficient when $\hat{\psi}$ has arbitrarily large compact support
in Theorem \ref{main 5}. When $0 < c < 1$ and the only maximal component is the principal component, the sum over the $s_j^m$ reduces to the single term $s = 0$. When $\hat{\psi}(0) = 1$, $a_c^0(H, \psi)$ is independent of $\mathrm{supp}\, \hat{\psi}$. When there do exist many maximal components, as in the case of subspheres of spheres, the sum $\sum_j \hat{\psi}(s_j^m)$ essentially counts the number of components with $s$-parameter in $\mathrm{supp}\, \hat{\psi}$, and that can cancel the $\frac{1}{T}$.
\noindent{\bf (1)} Assume $|\tau| \leq \frac{\epsilon_0}{T}$. Also assume $\tau > 0$ since the
case $\tau < 0$ is similar. We claim that,
$$ \left| \mu_{\lambda}^c ([-\epsilon, \epsilon] -\tau) - \mu_{\lambda}^c ([- \epsilon, \epsilon]) \right| \leq \frac{2\gamma_0(c,\epsilon)}{T \delta_0} \lambda^{n-1} + O_{T,\epsilon}(\lambda^{n-3/2}). $$
Write
$$ \begin{array}{lll} \mu_{\lambda}^c([- \epsilon, \epsilon] -\tau) - \mu_{\lambda}^c([- \epsilon, \epsilon]) & = & \int_{{\mathbb R}} [{\bf 1}_{[- \epsilon - \tau, \epsilon- \tau]} - {\bf 1}_{[- \epsilon, \epsilon]}] (x)\, d \mu_{\lambda}^c(x)
. \end{array} $$
For $T$ sufficiently large so that $ \tau \ll 2 \epsilon$, $$ [{\bf 1}_{[- \epsilon - \tau,\epsilon - \tau]} - {\bf 1}_{[-\epsilon, \epsilon]}] (x)= {\bf 1}_{[- \epsilon - \tau, -\epsilon]} - {\bf 1}_{[\epsilon - \tau, \epsilon]}. $$
Since they are similar we only consider the $[- \epsilon - \tau, - \epsilon]$ interval. Since for $|\tau| < \epsilon_0/T$, we have $\theta_T(\tau) > T \delta_0$,
it follows from Theorem \ref{main 5} that, $$\begin{array}{lll} \mu_{\lambda}^c ([- \epsilon - \tau, - \epsilon]) &\leq& \frac{1}{T \delta_0} \int_{\mathbb R} \theta_T(-\epsilon-x) d \mu_{\lambda}^c (x) \\
&\leq& \frac{\gamma_0(c,\epsilon)}{T \delta_0} \lambda^{n-1} + O_{T,\epsilon}(\lambda^{n-3/2}). \end{array}$$ The second estimate uses the formula for the leading coefficient of Theorem \ref{main 5} with $\psi(s) = \theta_T(-\epsilon + s)$. Under the hypotheses of the Proposition, the sum of $\hat{\psi}(s_j^m)$ is just equal to $\hat{\psi}(0)$ and is independent of $T$, as explained above. This completes the proof of the claim.
\noindent{\bf (2)} Assume $\tau = \ell \frac{\epsilon_0}{T} , \ell \in {\mathbb Z}.$ With no loss of generality, we may assume
$\ell \geq 1.$ Write
$$ \mu_{\lambda}^c([- \epsilon, \epsilon]) - \mu_{\lambda}^c ([- \epsilon, \epsilon]- \frac{\ell}{T} \epsilon_0)
= \sum_{j = 1}^{\ell} \mu_{\lambda}^c ([- \epsilon, \epsilon] - \frac{j-1}{T} \epsilon_0 )- \mu_{\lambda}^c([-\epsilon, \epsilon] - \frac{j}{T} \epsilon_0 ) $$
and apply the estimate of (1) to upper bound the sum by
$$\frac{2 \ell \gamma_0(c,\epsilon)}{T \delta_0} \lambda^{n-1} + O_{T,\epsilon}(\lambda^{n-3/2}) = \frac{2 \gamma_0(c,\epsilon)}{\epsilon_0 \delta_0} \tau \lambda^{n-1} + O_{T,\epsilon}(\lambda^{n-3/2}).$$
\noindent{\bf (3)} Assume $ \frac{\ell}{T} \epsilon_0 \leq \tau \leq \frac{\ell+1}{T} \epsilon_0$ with $\ell \in {\mathbb Z}$. Write
$$\begin{array}{lll} \mu_{\lambda}^c ([-\epsilon, \epsilon] + \tau ) - \mu_{\lambda}^c([-\epsilon, \epsilon]) & = & \mu_{\lambda}^c ([- \epsilon, \epsilon] + \tau ) - \mu_{\lambda}^c([-\epsilon,\epsilon] + \frac{\ell}{T} \epsilon_0 ) \\&&\\&&+ \mu_\lambda^c([-\epsilon,\epsilon] + \frac{\ell}{T} \epsilon_0 )
- \mu_\lambda^c([-\epsilon,\epsilon]).\end{array} $$
Applying (1) and (2), it follows that
$$|\mu_{\lambda}^c([- \epsilon, \epsilon] + \tau ) - \mu_{\lambda}^c([- \epsilon,\epsilon])| \leq \frac{2 \gamma_0(c,\epsilon)}{\delta_0} \left(\frac{\tau}{\epsilon_0} + \frac{1}{T} \right) \lambda^{n-1} + O_{T,\epsilon}(\lambda^{n - 3/2}). $$
This completes the proof of Proposition \ref{TLEM2}, hence also Proposition \ref{TLEMa} and therefore
Theorem \ref{main 3}. \end{proof}
\section{Appendix}
\subsection{Background on Fourier integral operators and their symbols}\label{FIOSECT}
The advantage of expressing $\Upsilon_{\nu, \psi}^{(1)}(t)$ in terms of pullback and pushforward is
that the symbol calculus of Lagrangian distributions is more elementary to describe for such compositions.
We refer to \cite{HoIV,GS77,D73} for background but quickly review the basic definitions.
The space of Fourier integral operators of order $\mu$ associated to a canonical relation $C$ is denoted by $I^{\mu}(M \times M, C')$; we write $$K_A \in I^{\mu}(M \times M, C') $$ when the Schwartz kernel $K_A$ of an operator $A$ belongs to this class.
If $A_1 \in I^{\mu_1}(X \times Y, C_1'), A_2 \in I^{\mu_2}(Y \times Z, C_2')$, and if $C_1 \circ C_2$ is a `clean' composition,
then by \cite[Theorem 25.2.3]{HoIV},
\begin{equation} \label{EXCESSCOMP} A_1 \circ A_2 \in I^{\mu_1 + \mu_2 + e/2} (X \times Z, C'), \;\; C = C_1 \circ C_2, \end{equation}
where $e$ is the `excess' of the composition, i.e. if $\gamma \in C$, then $e =\dim C_{\gamma}$, the dimension of the fiber of $C_1 \times C_2
\cap T^* X \times \Delta_{T^*Y} \times T^*Z$ over $\gamma$ (see \eqref{EXCESS} below).
Pullback and pushforward of half-densities on Lagrangian submanifolds are more difficult to describe.
They depend on the map $f$ being a {\it morphism} in the language of \cite[page 349]{GS77}. Namely,
if $f: X \to Y$ is a smooth map, we say it is a morphism on half-densities if it is augmented by a section
$r(x) \in \mathrm{Hom}\big(|\Lambda|^{{\frac{1}{2}}}( T_{f(x)}Y), |\Lambda|^{{\frac{1}{2}}} (T_x X)\big)$, that is, a linear transformation
mapping half-densities on $T_{f(x)}Y$ to half-densities on $T_x X$. As pointed out in \cite[page 349]{GS77},
such a map is equivalent to augmenting $f$ with a special kind of half-density on the co-normal bundle $N^*(\mathrm{graph}(f))$ to the graph of $f$, which is constant along the fibers of the co-normal bundle. In our application, the maps are all restriction maps or pushforwards under canonical maps, and they are morphisms
in quite obvious ways. Note that under pullback by an immersion, or under a restriction, the number $n$ of independent variables is decreased by the codimension $k$ and therefore
the order goes up by $\frac{k}{4}.$ Pullbacks under submersions increase the number $n$. Pushforward is adjoint to pullback and therefore also decreases the order by the same amount.
Assume that $f: M \to N$ is a smooth map between manifolds.
Let $\Lambda \subset \dot T^* N$ be a Lagrangian submanifold. Then its pullback is defined by,
\begin{equation} \label{PB} f^* \Lambda = \{(m, \xi) \in T^*M \mid \exists (n, \eta) \in \Lambda, f(m) = n, f^* \eta = \xi\}. \end{equation}
On the other hand, let $\Lambda \subset \dot T^*M$. Then its pushforward is defined by,
\begin{equation} \label{PF} f_* \Lambda = \{(f(m), \eta) \in T^* N \mid (m, f^* \eta) \in \Lambda\}. \end{equation}
The principal symbol of a Fourier integral operator associated to a canonical relation $C$ is a half-density times a section of the Maslov line bundle on $C$.
We refer to \cite[Section 25.2]{HoIV} and to \cite[Definition 4.1.1]{D73} for the definition; see also \cite{DG75, GU89} for further expositions and for
several calculations of principal symbols closely related to those of this article.
The \emph{order} of a homogeneous Fourier integral operator\index{Fourier integral operator!order} $A \colon L^2(X) \to L^2(Y)$ in the
non-degenerate case is given in terms of a local oscillatory
integral formula $$K_A(x,y) = \frac{1}{(2 \pi)^{n/4 + N/2}}\int_{{\mathbb R}^N} e^{\mathrm{i} \varphi(x, y, \theta)} a(x, y, \theta) {\mathrm d} \theta $$by \begin{equation} \label{ORDDEF} \mathrm{ord} A = m +
\frac{N}{2} - \frac{n}{4}, \;\; \mathrm{where} \; n = \dim X + \dim Y, \; m = \mathrm{ord} \;a \end{equation} where the order of the amplitude $a(x, y, \theta)$
is the degree of the top order term of the polyhomogeneous expansion of $a$ in $\theta$, and $N$ is the number of phase
variables $\theta$ in the local Fourier integral representation (see
\cite[Proposition~25.1.5]{HoIV}); in the general clean case with
excess $e$, the order goes up by $\frac{e}{2}$ (see \cite[Proposition~25.1.5']{HoIV}
). The order is designed to be independent of the specific representation of $K_A$ as an oscillatory integral.
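As a simple consistency check (a standard example, recorded here for the reader's convenience), consider the Schwartz kernel of the identity operator on a $d$-dimensional manifold $X$, which in local coordinates is $$K_{\mathrm{Id}}(x, y) = \frac{1}{(2\pi)^{d}} \int_{{\mathbb R}^d} e^{\mathrm{i} (x - y) \cdot \theta} \, {\mathrm d} \theta. $$ Here $N = d$, $n = \dim X + \dim X = 2d$, so the prefactor matches $(2\pi)^{-(n/4 + N/2)} = (2\pi)^{-d}$ with amplitude $a \equiv 1$ of order $m = 0$, and \eqref{ORDDEF} gives $\mathrm{ord}\,(\mathrm{Id}) = 0 + \frac{d}{2} - \frac{2d}{4} = 0$, as expected.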
Further,
the principal symbol of a Fourier
integral distribution
$$ I(x, y) = \int_{{\mathbb R}^N} e^{i \varphi (x, y, \theta) }
a( x, y, \theta) d \theta $$
with non-degenerate homogeneous phase function $\varphi$ and amplitude $a \in S^{0}_{cl}(M \times M \times {\mathbb R}^N),$
is the transport to the Lagrangian $\Lambda_{\varphi} =
\iota_{\varphi}(C_{\varphi})$ of $a(\lambda) \sqrt{d_{C_\varphi}}$ where $\sqrt{d_{C_\varphi}}$ is the half density given by the square root of
\begin{equation} \label{SYMBOLDEF} d_{C_{\varphi}}: = \left|\frac{\partial(\lambda,
\varphi_{\theta}')}{\partial(x,y,\theta)}\right|^{-1} |d \lambda| \end{equation} on $C_{\varphi}$, where $\lambda=(\lambda_1,...,\lambda_n)$ are local coordinates on the critical manifold $C_{\varphi} =\{ (x,y,\theta); d_{\theta}\varphi(x,y,\theta) = 0\}. $
We next review the definition of the excess in a fiber product diagram. Let $F = \{(x, y) \in X \times Y, f(x) = g(y)\}$,
\begin{equation} \label{FP} \begin{tikzcd}
X \arrow[d, "f"'] & F \arrow[l] \arrow[d] \\
Z & Y \arrow[l, "g"'] \end{tikzcd} \end{equation}
The maps $f: X \to Z$ and $g: Y \to Z$ are said to intersect cleanly if the fiber product $F$ is a submanifold of $X \times Y$ and if
the tangent diagram is a fiber product diagram.
The excess is \begin{equation} \label{EXCESS} e = \dim F + \dim Z - (\dim X + \dim Y). \end{equation} Then $e =0$ if and only if the diagram is transversal.
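For orientation (an illustration we add), suppose $f: X \to Z$ and $g: Y \to Z$ are both constant maps onto a point $z_0 \in Z$. Then $F = X \times Y$, the intersection is clean, and \eqref{EXCESS} gives $e = \dim X + \dim Y + \dim Z - (\dim X + \dim Y) = \dim Z$, which is positive whenever $\dim Z > 0$, consistent with the fact that two constant maps are never transversal over a positive-dimensional target.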
Above, $d = \dim {\rm Fix}(G^T)$ is the excess of the diagram.
\subsection{Enhancement, morphisms, and pullback and pushforward of symbols} \label{MORPHISM}
The behavior of symbols under pushforwards and pullbacks of Lagrangian submanifolds is described in \cite{GS77}, Chapter IV. 5 (page 345). The main statement (Theorem 5.1, loc. cit.) states that the symbol map $\sigma: I^m(X, \Lambda) \to S^m(\Lambda)$ has the following pullback-pushforward properties under maps $f: X \to Y$ satisfying appropriate transversality conditions, \begin{equation} \label{PFPB} \left\{\begin{array}{l} \sigma (f^* \nu) = f^* \sigma(\nu), \\ \\ \sigma(f_* \mu) = f_* \sigma(\mu), \end{array} \right. \end{equation} To be precise, $f$ must be ``enhanced'' as defined in \cite[Chapter 7]{GS13} in order to define a pullback or pushforward on symbols. This is because the pullback/pushforward of a half-density on $\Lambda$ is often not a half-density on $f^* \Lambda$.
The enhancement of a smooth map $f: X \to Y$ is a map $(f, r)$
with $r: |f^* T Y|^{{\frac{1}{2}}} \to |TX|^{{\frac{1}{2}}}$. Thus if $\rho$ is a half-density on $Y$
$$(f, r)^* \rho = r(\rho(f(x))) \in |T_x X|^{{\frac{1}{2}}}. $$
If $\iota: X \to Y$ is an immersion, then $N^*_{\iota} X$ consists of the covectors $\xi \in T^*_{\iota(x)} Y$ with $d\iota_x^* \xi = 0$. Enhancing an immersion is giving a section of $|N^*_{\iota}
X|^{{\frac{1}{2}}}$.
If $\pi: Z \to X$ is a submersion, let $V_z$ denote the tangent space at $z$ to the fiber $\pi^{-1}(\pi(z))$. Then enhancing the fibration is giving a section of $|V_z|^{{\frac{1}{2}}}$.
In \cite[ p. 349]{GS13}, the authors explain that enhancement with $r$ is to define a half-density on $N^*(\Gamma_f)$ which is constant along the fibers of $N^*(\Gamma_f) \to \Gamma_f$. As a result, a morphism $f: X \to Y$ induces a pushforward $$f_*: \Omega^{{\frac{1}{2}}}(\Lambda_X) \to \Omega^{{\frac{1}{2}}}(f_* \Lambda_X). $$ It also induces a pullback operation $$f^* : \Omega^{{\frac{1}{2}}}(\Lambda_Y) \to \Omega^{{\frac{1}{2}}} f^* (\Lambda_Y). $$
Under appropriate clean or transversal assumptions, if $f: X \to Y$ is a morphism of half-densities, then $f_*$ and $f^*$ are morphisms of half-densities on Lagrangian submanifolds.
\begin{remark} If $f: X \to Y$ is a submersion then $f^*$ is
injective. Indeed, if $f^* \eta = 0$ then $\eta \perp f_* TX = TY$, so $\eta = 0$. If $f$ is an immersion, then $f_*$ is injective.
\end{remark}
\end{document} | arXiv |
\begin{document}
\title{Safe Sample Screening for \\ Support Vector Machines}
\author{ Kohei~Ogawa,~\IEEEmembership{} Yoshiki~Suzuki,~\IEEEmembership{}
Shinya~Suzumura~\IEEEmembership{} and Ichiro~Takeuchi,~\IEEEmembership{Member,~IEEE} \IEEEcompsocitemizethanks{\IEEEcompsocthanksitem K. Ogawa, Y. Suzuki,
S. Suzumura and I. Takeuchi are with the Department of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya, Japan. \protect\\
E-mail: \{ogawa, suzuki,
suzumura\}[email protected], and [email protected]} \thanks{}}
\markboth{Ogawa, Suzuki, Suzumura and Takeuchi}{Ogawa \MakeLowercase{\textit{et al.}}: Safe Sample Screening for Support Vector Machines}
\IEEEcompsoctitleabstractindextext{ \begin{abstract}
Sparse classifiers such as the support vector machine (SVM) are efficient in the test phase because the classifier is characterized only by a subset of the samples called \emph{support vectors (SVs)}; the rest of the samples (non-SVs) have no influence on the classification result.
However, the advantage of this sparsity has not been fully exploited in the training phase because it is generally difficult to know beforehand which samples will turn out to be SVs.
In this paper, we introduce a new approach called \emph{safe sample screening} that enables us to identify a subset of the non-SVs and screen them out prior to the training phase.
Our approach is different from existing heuristic approaches in the sense that the screened samples are \emph{guaranteed} to be non-SVs at the optimal solution.
We investigate the advantage of the safe sample screening approach through intensive numerical experiments, and demonstrate that it can substantially decrease the computational cost of the state-of-the-art SVM solvers such as {LIBSVM}.
In the current \emph{big data} era, we believe that safe sample screening would be of great practical importance since the data size can be reduced without sacrificing the optimality of the final solution.
\end{abstract}
\begin{keywords}
Support Vector Machine, Sparse Modeling, Convex Optimization, Safe Screening, Regularization Path \end{keywords}}
\maketitle
\IEEEdisplaynotcompsoctitleabstractindextext
\IEEEpeerreviewmaketitle
\section{Introduction} \label{sec:Introduction}
\IEEEPARstart{T}he support vector machine (SVM) \cite{Boser92a,Cortes95a,Vapnik98a} has been successfully applied to large-scale classification problems \cite{Ma09a,Lin11a,Lin11b}.
A trained SVM classifier is \emph{sparse} in the sense that the decision function is characterized only by a subset of the samples known as \emph{support vectors (SVs)}.
One of the computational advantages of such a sparse classifier is its efficiency in the test phase, where the classifier can be evaluated for a new test input with the cost proportional only to the number of the SVs.
The rest of the samples (non-SVs) can be discarded \emph{after} training phases because they have no influence on the classification results.
However, the advantage of the sparsity has not been fully exploited in the training phase because it is generally difficult to know beforehand which samples will turn out to be SVs.
Many existing SVM solvers spend most of their time identifying the SVs \cite{Platt99a,Joachims99b,Hastie04a,Scheinberg06a,Chang11a}.
For example, the well-known {LIBSVM} \cite{Chang11a} first predicts which samples will be SVs (prediction step), and then solves a smaller optimization problem defined only with the subset of the samples predicted as SVs (optimization step).
These two steps must be repeated until the true SVs are identified because some of the samples might be mistakenly predicted as non-SVs in the prediction step.
In this paper, we introduce a new approach that can identify a subset of the non-SVs and screen them out \emph{before} actually solving the training optimization problem.
Our approach is different from the prediction step in the above {LIBSVM} or other similar heuristic approaches in the sense that the screened samples are \emph{guaranteed} to be non-SVs at the optimal solution.
It means that the original optimal solution can be obtained by solving the smaller problem defined only with the remaining set of the non-screened samples.
We call our approach \emph{safe sample screening} because it never identifies a true SV as a non-SV.
\figurename \ref{fig:toy.example} illustrates our approach on a toy data set (see \S \ref{subsec:toy.experiment} for details).
\begin{figure}
\caption{An example of our safe sample screening method on a binary
classification problem with a two-dimensional toy data set.
For each of the red and blue classes, 500 samples are drawn.
Our safe sample screening method found that all the samples in
the shaded regions
are guaranteed to be non-SVs.
In this example, more than 80\% of the samples
(\red{\scriptsize $\square$} and \blue{\scriptsize $\square$})
are
identified as non-SVs and they can be discarded prior to the training phase.
It means that
the optimal classifier (the green line)
can be obtained by solving a much smaller optimization problem
defined only with the remaining 20\% of the samples
(\red{\scriptsize $\blacksquare$} and \blue{\scriptsize $\blacksquare$}).
See \S \ref{subsec:toy.experiment} for details.
}
\label{fig:toy.example}
\end{figure}
Safe sample screening can be used together with any SVM solver such as {LIBSVM} as a preprocessing step for reducing the training set size.
In our experience, it is often possible to screen out nearly 90\% of the samples as non-SVs.
In such cases, the total computational cost of SVM training can be substantially reduced because only the remaining 10\% of the samples are fed into an SVM solver (see \S \ref{sec:experiments}).
Furthermore, we show that safe sample screening is especially useful for model selection, where a sequence of SVM classifiers with different regularization parameters is trained.
In the current \emph{big data} era, we believe that safe sample screening would be of great practical importance because it enables us to reduce the data size without sacrificing the optimality.
The basic idea behind safe sample screening is inspired by a recent study by El Ghaoui et al. \cite{ElGhaoui12b}.
In the context of $L_1$ regularized sparse linear models, they introduced an approach that can \emph{safely} identify a subset of the \emph{non-active features} whose coefficients turn out to be zero at the optimal solution.
This approach has been called \emph{safe feature screening}, and various extensions have been reported \cite{Xiang12a,Xiang12b,Dai12a,Wang12a,Wang13c,Wang13d,Wu13a,Wang13a,Wang13b} (see \S \ref{subsec:relation.with.feature.screening} for details).
Our contribution is to extend the idea of \cite{ElGhaoui12b} for safely screening out non-SVs.
This extension is non-trivial because the feature sparseness in a linear model stems from the $L_1$ penalty, while the sample sparseness in an SVM originates from the large-margin principle.
This paper is an extended version of our preliminary conference paper \cite{Ogawa13a}, where we proposed a safe sample screening method that can be used in a somewhat more restricted situation than the one considered here (see Appendix \ref{app:v.s.ICML.ver.} for details).
In this paper, we extend our previous method in order to overcome the limitation and to improve the screening performance.
To the best of our knowledge, our approach in \cite{Ogawa13a} is the first safe sample screening method.
After our conference paper was published, Wang et al. \cite{Wang13e} recently proposed a new method and demonstrated that it performed better than our previous method in \cite{Ogawa13a}.
In this paper, we go beyond Wang et al.'s method and show that our new method has better screening performance from both theoretical and empirical viewpoints (see \S \ref{subsec:relation.with.feature.screening} for details).
The rest of the paper is organized as follows.
In \S \ref{sec:SVM}, we formulate the SVM and summarize the optimality conditions.
Our main contribution is presented in \S \ref{sec:Safe.Sample.Screening.Rule} where we propose three safe sample screening methods for SVMs.
In \S \ref{sec:in.practice}, we describe how to use the proposed safe sample screening methods in practice.
Intensive experiments are conducted in \S \ref{sec:experiments}, where we investigate how much the computational cost of the state-of-the-art SVM solvers can be reduced by using safe sample screening.
We summarize our contribution and future works in \S \ref{sec:Conclusion}.
The Appendix contains the proofs of all the theorems and lemmas, a brief description of (and comparison with) our previous method from our preliminary conference paper \cite{Ogawa13a}, the relationship between our methods and the method in \cite{Wang13e}, and some detailed experimental protocols.
The C++ and Matlab codes are available at {\it http://www-als.ics.nitech.ac.jp/code/index.php?safe-sample-screening}.
\noindent {\bf Notation}:
We let $\bR$, $\bR_+$ and $\bR_{++}$ be the set of real, nonnegative and positive numbers, respectively.
We define $\bN_n \triangleq \{1, \ldots, n\}$ for any natural number $n$.
Vectors and matrices are represented by bold face lower and upper case characters such as $\bm v \in \RR^n$ and $\bm M \in \RR^{m \times n}$, respectively.
An element of a vector $\bm v$ is written as $v_i$ or $(\bm v)_i$.
Similarly, an element of a matrix $\bm M$ is written as $M_{ij}$ or $(\bm M)_{ij}$.
Inequalities between two vectors such as $\bm v \le \bm w$ indicate component-wise inequalities: $v_i \le w_i ~ \forall i \in \NN_n$.
Unless otherwise stated, we use $\|\cdot\|$ to denote the Euclidean norm.
The vectors of all zeros and all ones are denoted by $\zeros$ and $\ones$, respectively.
\section{Support vector machine} \label{sec:SVM}
In this section we formulate the support vector machine (SVM).
Let us consider a binary classification problem with $n$ samples and $d$ features.
We denote the training set as $\{(\bm x_i,y_i)\}_{i\in\bN_n}$ where $\bm x_i \in \cX \subseteq \bR^d$ and $y_i \in \{-1, +1\}$.
We consider a linear model in a feature space $\cF$ in the following form: \begin{eqnarray*}
 f(\bm x) = \bm w^\top \Phi(\bm x), \end{eqnarray*} where $\Phi: \cX \rightarrow \cF$ is a map from the input space $\cX$ to the feature space $\cF$, and $\bm w \in \cF$ is a vector of the coefficients\footnote{ The bias term can be augmented to $\bm w$ and $\Phi(\bm x)$ as an additional dimension. }.
We sometimes write $f(\bm{x})$ as $f(\bm x; \bm w)$ for explicitly specifying the associated parameter $\bm w$.
The optimal parameter $\bm w^*$ is obtained by solving
\begin{eqnarray} \label{eq:p.SVM.prob.} \bm w^* \triangleq \arg \min_{\bm w \in \cF} ~
\frac{1}{2} \| \bm w \|^2 + C \sum_{i \in \bN_n} \max\{0, 1 - y_i f(\bm x_i)\}, \end{eqnarray} where $C \in \RR_{++}$ is the regularization parameter.
The loss function $\max\{0, 1 - y_i f(\bm x_i)\}$ is known as {\it hinge-loss}.
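The following minimal sketch (our own illustration, assuming a linear kernel $\Phi(\bm x) = \bm x$ and NumPy conventions; all identifiers are ours) evaluates the primal objective \eq{eq:p.SVM.prob.}:
\begin{verbatim}
import numpy as np

def primal_objective(w, X, y, C):
    # 0.5 * ||w||^2 + C * sum_i max(0, 1 - y_i f(x_i)),
    # with f(x_i) = w . x_i (linear kernel).
    margins = y * (X @ w)
    hinge = np.maximum(0.0, 1.0 - margins)
    return 0.5 * (w @ w) + C * hinge.sum()
\end{verbatim}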
We use a notation such as $\bm w_{[C]}^*$ when we emphasize that it is the optimal solution of the problem \eq{eq:p.SVM.prob.} associated with the regularization parameter $C$.
The dual problem of \eq{eq:p.SVM.prob.} is formulated with the Lagrange multipliers $\bm \alpha \in \RR_+^n$ as \begin{eqnarray} \label{eq:d.SVM.prob.} \bm \alpha_{[C]}^* \triangleq \arg \max_{\bm \alpha} ~ \bigl( \cD(\bm \alpha) \triangleq -\frac{1}{2} \sum_{i,j \in \bN_n} \alpha_i \alpha_j Q_{ij} + \sum_{i \in \NN_n} \alpha_i \bigr) ~~~ \text{s.t.} ~ 0 \le \alpha_i \le C, i \in \NN_n, \end{eqnarray}
where $\bm Q \in \bR^{n \times n}$ is an $n \times n$ matrix defined as $Q_{ij} \triangleq y_i y_j K(\bm x_i, \bm x_j)$ and $K(\bm x_i, \bm x_j) \triangleq \Phi(\bm x_i)^\top \Phi(\bm x_j)$ is the \emph{Mercer kernel function} defined by the feature map $\Phi$.
Using the dual variables, the model $f$ is written
as
\begin{eqnarray} \label{eq:d.model} f(\bm x) = \sum_{i\in\bN_n} \alpha_i y_i K(\bm x_i, \bm x). \end{eqnarray}
Denoting the optimal dual variables as $\{\alpha^*_{[C]i}\}_{i \in \NN_n}$, the optimality conditions of the SVM
are summarized as \begin{eqnarray} \label{eq:SVM.opt.} i \in \cR \Rightarrow \alpha_{[C]i}^* = 0, ~~~ i \in \cE \Rightarrow \alpha_{[C]i}^* \in [0,C], ~~~ i \in \cL \Rightarrow \alpha_{[C]i}^* = C, \end{eqnarray} where we define the three index sets: \begin{eqnarray*}
\cR \triangleq \{ i \in \bN_n~|~ y_i f(\bm x_i) > 1 \}, ~~~
\cE \triangleq \{ i \in \bN_n~|~ y_i f(\bm x_i) = 1 \}, ~~~
\cL \triangleq \{ i \in \bN_n~|~ y_i f(\bm x_i) < 1 \}. \end{eqnarray*}
The optimality conditions \eq{eq:SVM.opt.} suggest that, if it is known {\it a priori} which samples turn out to be the members of $\cR$ at the optimal solution, those samples can be discarded before actually solving the training optimization problem because the corresponding $\alpha^*_{[C]i} = 0$ indicates that they have no influence on the solution.
Similarly, if some of the samples are known {\it a priori} to be the members of $\cL$ at the optimal solution, the corresponding variable can be fixed as $\alpha^*_{[C]i} = C$.
If we let $\cR^\prime$ and $\cL^\prime$ be the subset of the samples known as the members of $\cR$ and $\cL$, respectively, one could first compute $d_i \triangleq C \sum_{j \in \cL^\prime} y_j K(\bm x_i, \bm x_j)$ for all $i \in \NN_n \setminus (\cR^\prime \cup \cL^\prime)$, and put them in a cache.
Then, it suffices to solve the following smaller optimization problem defined only with the remaining subset of the samples and the cached variables\footnote{ Note that the samples in $\cL^\prime$ are needed in the future test phase.
Here, we only mean that the samples in $\cR^\prime$ and $\cL^\prime$ are not used during the training phase. }:
\begin{eqnarray*} \max_{\bm \alpha} -\frac{1}{2}\sum_{i,j \in \bN_n \setminus (\cR^\prime \cup \cL^\prime)} \!\!\!\!\! \alpha_i \alpha_j Q_{ij} + \sum_{i \in \bN_n \setminus (\cR^\prime \cup \cL^\prime) } \!\!\!\!\! \alpha_i (1 - y_i d_i) ~~~{\rm s.t.}~ 0 \le \alpha_i \le C,~i \in \bN_n \setminus (\cR^\prime \cup \cL^\prime). \end{eqnarray*}
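As a sketch of this reduction (our own illustration in NumPy; the index sets are passed as integer arrays, and an exact solver for the reduced box-constrained dual is assumed to exist separately):
\begin{verbatim}
import numpy as np

def reduce_problem(Q, C, R_prime, L_prime):
    # Q[i, j] = y_i y_j K(x_i, x_j).
    n = Q.shape[0]
    rest = np.setdiff1d(np.arange(n),
                        np.union1d(R_prime, L_prime))
    # Contribution of the variables fixed at C:
    # y_i d_i = C * sum_{j in L'} Q_ij.
    yd = C * Q[np.ix_(rest, L_prime)].sum(axis=1)
    # Remaining dual problem:
    #   max -0.5 a' Q a + sum_i a_i (1 - y_i d_i),
    #   s.t. 0 <= a_i <= C.
    return rest, Q[np.ix_(rest, rest)], 1.0 - yd
\end{verbatim}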
Hereafter, the training samples in $\cE$ are called \emph{support vectors (SVs)}, while those in $\cR$ and $\cL$ are called \emph{non-support vectors (non-SVs)}.
Note that the term \emph{support vectors} usually indicates the samples both in $\cE$ and $\cL$ in the machine learning literature (we also used the term SVs in this sense in the previous section).
We adopt the above uncommon terminology because the samples in $\cR$ and $\cL$ can be treated in an almost equal manner in the rest of this paper.
In the next section, we develop three types of testing procedures for screening out a subset of the non-SVs.
Each of these tests is conducted by evaluating a simple rule for each sample.
We call these testing procedures \emph{safe sample screening tests} and the associated rules \emph{safe sample screening rules}.
\section{Safe Sample Screening for SVMs} \label{sec:Safe.Sample.Screening.Rule} In this section, we present our safe sample screening approach for SVMs. \subsection{Basic idea} \label{subsec:Basic.idea} Let us consider a situation that we have a region $\Theta_{[C]} \subset \cF$ in the solution space, where we only know that the optimal solution $\bm w^*_{[C]}$ is somewhere in this region $\Theta_{[C]}$,
but $\bm w^*_{[C]}$ itself is unknown.
In this case, the optimality conditions \eq{eq:SVM.opt.} indicate that \begin{eqnarray} \label{eq:basic.test.l} && \bm{w}^*_{[C]} \in \Theta_{[C]} ~\wedge~ \min_{\bm{w} \in \Theta_{[C]}} y_i f(\bm{x}_i ; \bm{w}) > 1 ~ \Rightarrow ~ y_i f(\bm{x}_i ; \bm{w}^*_{[C]}) > 1 ~ \Rightarrow ~ \alpha_{[C]i}^* = 0. \\ \label{eq:basic.test.u} && \bm{w}^*_{[C]} \in \Theta_{[C]} ~\wedge~ \max_{\bm{w} \in \Theta_{[C]}} y_i f(\bm{x}_i ; \bm{w}) < 1 ~ \Rightarrow ~ y_i f(\bm{x}_i ; \bm{w}^*_{[C]}) < 1 ~ \Rightarrow ~ \alpha_{[C]i}^* = C. \end{eqnarray}
These facts imply that, even if the optimal $\bm w^*_{[C]}$ itself is unknown, we might have a chance to screen out a subset of the samples in $\cR$ or $\cL$.
Based on the above idea, we construct safe sample screening rules in the following way: \begin{description}
\item [(Step 1)]~~~
we construct a region
$\Theta_{[C]}$
such that
\begin{eqnarray}
\label{eq:basic.theta}
\bm w^*_{[C]} \in \Theta_{[C]} \subset \cF.
\end{eqnarray}
\item [(Step 2)]~~~
we compute the lower and the upper bounds:
\begin{eqnarray}
\label{eq:basic.lower.upper.bounds}
\ell_{[C]i}
\triangleq
\min_{\bm{w} \in \Theta_{[C]}} y_i f(\bm{x}_i ; \bm{w}),
~~~
u_{[C]i}
\triangleq
\max_{\bm{w} \in \Theta_{[C]}} y_i f(\bm{x}_i ; \bm{w})
~~~
\forall i \in \bN_n.
\end{eqnarray} \end{description}
Then, the safe sample screening rules are written as
\begin{eqnarray}
\label{eq:basic.safe.sample.screening.rule}
\ell_{[C]i} > 1
~\Rightarrow~
i \in \cR
~\Rightarrow~
\alpha^*_{[C]i} = 0,
~~~
u_{[C]i} < 1
~\Rightarrow~
i \in \cL
~\Rightarrow~
\alpha^*_{[C]i} = C.
\end{eqnarray}
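In code, applying the rules \eq{eq:basic.safe.sample.screening.rule} is a pair of elementwise comparisons (a trivial sketch, with our own naming):
\begin{verbatim}
def apply_rules(ell, u):
    # ell[i], u[i]: bounds of y_i f(x_i) over Theta_[C].
    in_R = ell > 1.0   # guaranteed alpha*_i = 0
    in_L = u < 1.0     # guaranteed alpha*_i = C
    return in_R, in_L
\end{verbatim}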
In section \ref{subsec:Ball.Test}, we first study so-called \emph{Ball Test} where the region $\Theta_{[C]}$ is a closed ball in the solution space.
In this case, the lower and the upper bounds can be obtained in closed forms.
In section \ref{subsec:Ball.tests.for.SVM}, we describe how to construct such a ball $\Theta_{[C]}$ for SVMs, and introduce two types of balls $\Theta_{[C]}^{\rm (BT1)}$ and $\Theta_{[C]}^{\rm (BT2)}$.
We call the corresponding tests as \emph{Ball Test 1 (BT1)} and \emph{Ball Test 2 (BT2)}, respectively.
In section \ref{subsec:Intersection.Test}, we combine these two balls and develop so-called \emph{Intersection Test (IT)}, which is shown to be more powerful (more samples can be screened out) than BT1 and BT2.
\subsection{Ball Test} \label{subsec:Ball.Test}
When $\Theta_{[C]}$ is a closed ball, the lower or the upper bounds of $y_i f(\bm x_i)$ can be obtained by minimizing a linear objective subject to a single quadratic constraint.
We can easily show that the solution of this class of optimization problems is given in a closed form \cite{Boyd04a}.
\begin{it} \begin{lemm}[Ball Test] \label{lemm:Ball.test}
Let
$\Theta_{[C]} \subset \cF$
be a ball with the center
$\bm m \in \cF$
and the radius
$r \in \bR_+$,
i.e.,
$\Theta_{[C]} \triangleq \{\bm w \in \cF ~|~ \| \bm w - \bm m \| \le r\}$.
Then,
the lower and the upper bounds in
\eq{eq:basic.lower.upper.bounds}
are written as
\begin{eqnarray}
\label{eq:ball.test.bounds}
\ell_{[C]i} \equiv \min_{ \bm w \in \Theta_{[C]} } y_i f(\bm x_i; \bm w)
=
\bm z_i^\top \bm m - r \| \bm z_i \|,
~~~
u_{[C]i} \equiv \max_{ \bm w \in \Theta_{[C]} } y_i f(\bm x_i; \bm w)
=
\bm z_i^\top \bm m + r \| \bm z_i \|,
\end{eqnarray}
where
we define
$\bm z_i \triangleq y_i \Phi(\bm x_i)$,
$i \in \NN_n$,
for notational simplicity. \end{lemm} \end{it}
\noindent The proof is presented in Appendix \ref{app:proofs}.
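In the linear-kernel case, the closed-form bounds of Lemma \ref{lemm:Ball.test} can be evaluated for all samples at once (a sketch in NumPy, with our own naming; the rows of \texttt{Z} are $\bm z_i = y_i \bm x_i$):
\begin{verbatim}
import numpy as np

def ball_test_bounds(Z, m, r):
    zm = Z @ m                       # z_i' m
    zn = np.linalg.norm(Z, axis=1)   # ||z_i||
    return zm - r * zn, zm + r * zn  # (ell_i, u_i)
\end{verbatim}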
The geometric interpretation of Lemma \ref{lemm:Ball.test} is shown in \figurename \ref{fig:Ball.test}.
\begin{figure}
\caption{
A geometric interpretation of ball tests.
Two panels illustrate the solution space
when the $i^{\rm th}$ sample
(a) can be screened out,
and
(b) cannot be screened out,
respectively.
In both panels,
the dotted green line indicates the hyperplane
$y_i f(\bm x_i ; \bm w) \equiv \bm z_i^\top \bm w = 1$,
and the green region represents
$\{\bm w | \bm z_i^\top \bm w > 1\}$.
The orange circle
with the center $\bm m$ and the radius $r$
is the ball region
$\Theta_{[C]}$
in which the optimal solution
$\bm w^*_{[C]}$
exists.
In (a),
the fact that
the hyperplane
$\bm z_i^\top \bm w = 1$
does not intersect with
$\Theta_{[C]}$,
i.e.,
the distance
$(\bm z_i^\top \bm m - 1)/|| \bm z_i ||$
is larger than the radius
$r$,
implies that
$y_i f(\bm x_i ; \bm w^*_{[C]}) > 1$
wherever the optimal solution
$\bm w^*_{[C]}$
locates within the region $\Theta_{[C]}$,
and the $i^{\rm th}$ sample can be screened out as a member of $\cR$.
On the other hand,
in (b),
the hyperplane
$\bm z_i^\top \bm w = 1$
intersects with
$\Theta_{[C]}$,
meaning that
we do not know whether
$y_i f(\bm x_i ; \bm w^*_{[C]}) > 1$
or not
until we actually solve the optimization problem and obtain the optimal solution
$\bm w^*_{[C]}$.
}
\label{fig:Ball.test}
\end{figure}
\subsection{Ball Tests for SVMs} \label{subsec:Ball.tests.for.SVM}
The following problem is shown to be equivalent to \eq{eq:p.SVM.prob.} in the sense that $\bm w^*_{[C]}$ is the optimal solution of the original SVM problem \eq{eq:p.SVM.prob.}\footnote{ A similar problem has been studied in the context of structural SVMs \cite{Joachims05,Joachims06}, and the proof of the equivalence can be easily shown by using the technique described there. }:
\begin{eqnarray}
\label{eq:str.SVM}
(\bm w^*_{[C]}, \xi^*_{[C]})
\triangleq
\arg
\!\!\!\!\!
\min_{\bm w \in \cF, \xi \in \bR}
\!\!\!
\cP_{[C]}(\bm w, \xi)
\triangleq
\frac{1}{2} \|\bm w\|^2 + C \xi
~{\rm s.t.}~
\xi \ge \sum_{i \in \bN_n} s_i (1-y_i f(\bm x_i))
~
\forall \bm s \in \{0,1\}^n.
~
\end{eqnarray}
We call the solution space of \eq{eq:str.SVM} the \emph{expanded solution space}.
In the expanded solution space, a quadratic function is minimized over a polyhedron composed of $2^n$ closed half spaces.
In the following lemma, we consider a specific type of regions in the expanded solution space.
By projecting the region onto the original solution space,
we have a ball region in the form of Lemma \ref{lemm:Ball.test}.
\begin{it} \begin{lemm} \label{lemm:structural.svm.to.ball}
Consider a region in the following form:
\begin{eqnarray}
\label{eq:tilde.theta}
\Theta^\prime_{[C]}
\triangleq
\Big\{
(\bm w, \xi) \in \cF \times \RR
~\Big|~
a_1 \| \bm w\|^2 + \bm b_1^\top \bm w + c_1 + \xi \le 0,
~
\bm b_2^\top \bm w + c_2 \le \xi
\Big\},
\end{eqnarray}
where
$a_1 \in \RR_{++}$,
$\bm b_1, \bm b_2 \in \cF$,
$c_1, c_2 \in \RR$.
If
$\Theta^\prime_{[C]}$
is non-empty\footnote{
$\Theta^\prime_{[C]}$
is non-empty iff
$\| \bm b_1 + \bm b_2\|^2 - 4 a_1 (c_1 + c_2) \ge 0$.
}
and
$(\bm w, \xi) \in \Theta^\prime_{[C]}$,
$\bm w$
is in a ball
$\Theta_{[C]}$
with the center
$\bm m \in \cF$
and the radius
$r \in \bR_{+}$
defined as \begin{eqnarray*} \bm m \triangleq - \frac{1}{2 a_1} (\bm b_1 + \bm b_2), ~ r \triangleq \sqrt{
\| \bm m \|^2 - \frac{1}{a_1} (c_1 + c_2) }. \end{eqnarray*} \end{lemm} \end{it}
\noindent The proof is presented in Appendix \ref{app:proofs}.
The lemma suggests that a Ball Test can be constructed by introducing two types of necessary conditions in the form of quadratic and linear constraints in \eq{eq:tilde.theta}.
In the following three lemmas, we introduce three types of necessary conditions for the optimal solution $(\bm w^*_{[C]}, \xi^*_{[C]})$ of the problem \eq{eq:str.SVM}.
\begin{it} \begin{lemm}[Necessary Condition 1 (NC1)] \label{lemm:nc1}
Let
$(\tilde{\bm w}, \tilde{\xi})$
be a feasible solution
of
\eq{eq:str.SVM}.
Then, \begin{eqnarray}
\label{eq:nc1}
\frac{1}{C}
\| \bm w^*_{[C]} \|^2
-
\frac{1}{C}
\tilde{\bm w}^\top
\bm w^*_{[C]}
-
\tilde{\xi}
+
\xi^*_{[C]}
\le
0.
\end{eqnarray} \end{lemm} \end{it}
\begin{it} \begin{lemm}[Necessary Condition 2 (NC2)] \label{lemm:nc2}
Let
$(\bm w^*_{[\check{C}]}, \xi^*_{[\check{C}]})$
be the optimal solution for any other regularization parameter
$\check{C} \in \RR_{++}$.
Then,
\begin{eqnarray}
\label{eq:nc2}
- \frac{1}{\check{C}}
\bm w_{[\check{C}]}^{* \top} \bm w^*_{[C]}
+
\frac{1}{\check{C}} \| \bm w^*_{[\check{C}]} \|^2 + \xi^*_{[\check{C}]}
\le \xi^*_{[C]}.
\end{eqnarray} \end{lemm} \end{it}
\begin{it} \begin{lemm}[Necessary Condition 3 (NC3)] \label{lemm:nc3}
Let
$\hat{\bm s} \in \{0, 1\}^n$
be an $n$-dimensional binary vector.
Then,
\begin{eqnarray}
\label{eq:nc3}
- \bm z_{\hat{\bm s}}^\top \bm w^*_{[C]}
+
\hat{\bm s}^\top \ones
\le
\xi^*_{[C]},
\text{ where }
\bm z_{\hat{\bm s}} \triangleq \sum_{i \in \NN_n} \hat{s}_i \bm z_i.
\end{eqnarray} \end{lemm} \end{it}
\noindent The proofs of these three lemmas are presented in Appendix \ref{app:proofs}.
Note that NC1 is quadratic, while NC2 and NC3 are linear constraints in the form of \eq{eq:tilde.theta}.
As described in the following theorems, \emph{Ball Test 1 (BT1)} is constructed by using NC1 and NC2, while \emph{Ball Test 2 (BT2)} is constructed by using NC1 and NC3.
\begin{it} \begin{theo}[Ball Test 1 (BT1)]
Let
$(\tilde{\bm w}, \tilde{\xi})$
be any feasible solution
and
$(\bm w^*_{[\check{C}]}, \xi^*_{[\check{C}]})$
be the optimal solution
of
\eq{eq:str.SVM}
for any other regularization parameter
$\check{C}$.
Then,
the optimal SVM solution
$\bm w^*_{[C]}$
is included in the ball
$\Theta^{\rm (BT1)}_{[C]}
\triangleq
\{ \bm w ~\big|~ \| \bm w - \bm m_1 \| \le r_1 \}$,
where
\begin{eqnarray}
\label{eq:BT1.m.r}
\bm m_1
\triangleq
\frac{1}{2}(\tilde{\bm w} + \frac{C}{\check{C}} \bm w^*_{[\check{C}]}),
~
r_1
\triangleq
\sqrt{
\| \bm m_1\|^2
- \frac{C}{\check{C}} \|\bm w^*_{[\check{C}]}\|^2
+ C(\tilde{\xi} - \xi^*_{[\check{C}]})
}.
\end{eqnarray}
By applying the ball
$\Theta^{\rm (BT1)}_{[C]}$
to Lemma \ref{lemm:Ball.test},
we can compute the lower bound
$\ell^{\rm (BT1)}_{[C]}$
and the upper bound
$u^{\rm (BT1)}_{[C]}$. \end{theo} \end{it}
\begin{it} \begin{theo}[Ball Test 2 (BT2)]
Let
$(\tilde{\bm w}, \tilde{\xi})$
be any feasible solution of
\eq{eq:str.SVM}
and
$\hat{\bm s}$
be any $n$-dimensional binary vector
in $\{0, 1\}^n$.
Then,
the optimal SVM solution
$\bm w^*_{[C]}$
is included in the ball
$\Theta^{\rm (BT2)}_{[C]} \triangleq \{ \bm w ~\big|~ \| \bm w - \bm m_2 \| \le r_2 \}$,
where
\begin{eqnarray*}
\bm m_2
\triangleq
\frac{1}{2} (\tilde{\bm w} + C \bm z_{\hat{\bm s}}),
~
r_2
\triangleq
\sqrt{
\| \bm m_2\|^2
+ C( \tilde{\xi} - \hat{\bm s}^\top \ones)
}.
\end{eqnarray*}
By applying the ball
$\Theta^{\rm (BT2)}_{[C]}$
to Lemma \ref{lemm:Ball.test},
we can compute the lower bound
$\ell^{\rm (BT2)}_{[C]}$
and the upper bound
$u^{\rm (BT2)}_{[C]}$. \end{theo} \end{it}
\subsection{Intersection Test} \label{subsec:Intersection.Test}
We introduce a more powerful screening test called \emph{Intersection Test (IT)} based on \begin{eqnarray*} \Theta_{[C]}^{\rm (IT)} \triangleq \Theta_{[C]}^{\rm (BT1)} \cap \Theta_{[C]}^{\rm (BT2)}. \end{eqnarray*}
\begin{it} \begin{theo}[Intersection Test] \label{theo:IT}
The lower and the upper bounds of $y_i f(\bm x_i; \bm w)$ in $\Theta_{[C]}^{\rm (IT)}$ are
\begin{eqnarray}
\label{eq:it.test.low}
\ell^{\rm (IT)}_{[C]i}
\triangleq
\min_{\bm w \in \Theta_{[C]}^{\rm (IT)}} y_i f(\bm x_i ; \bm w) =
\mycase{
\ell^{\rm (BT1)}_{[C]i}
&
\text{ if }
\frac{-\bm z_i^\top \bm \phi}{\| \bm z_i \|~\| \bm \phi \| } < \frac{\zeta - \| \bm \phi\|}{r_1},
\\
\ell^{\rm (BT2)}_{[C]i}
&
\text{ if }
\frac{\zeta}{r_2} < \frac{-\bm z_i^\top \bm \phi}{\| \bm z_i \|~\| \bm \phi \|},
\\
\bm z_i^\top \bm \psi - \kappa \sqrt{\| \bm z_i \|^2 - \frac{(\bm z_i^\top \bm \phi)^2}{\| \bm \phi \|^2}}
&
\text{ if }
\frac{\zeta - \| \bm \phi \|}{r_1}
\le \frac{-\bm z_i^\top \bm \phi}{\| \bm z_i \|~\| \bm \phi \| }
\le \frac{\zeta}{r_2}
} \end{eqnarray} and \begin{eqnarray} \label{eq:it.upp} u^{\rm (IT)}_{[C]i} \triangleq \max_{\bm w \in \Theta_{[C]}^{\rm (IT)}} y_i f(\bm x_i ; \bm w) = \mycase{ u^{\rm (BT1)}_{[C]i} & \text{ if }
\frac{\bm z_i^\top \bm \phi}{\| \bm z_i \|~\| \bm \phi \| } <
\frac{\zeta - \| \bm \phi\|}{r_1}, \\ u^{\rm (BT2)}_{[C]i} & \text{ if } \frac{\zeta}{r_2} <
\frac{\bm z_i^\top \bm \phi}{\| \bm z_i \|~\| \bm \phi \| }, \\
\bm z_i^\top \bm \psi + \kappa \sqrt{\| \bm z_i \|^2 - \frac{(\bm z_i^\top \bm \phi)^2}{\| \bm \phi \|^2}} & \text{ if }
\frac{\zeta - \| \bm \phi \|}{r_1}
\le \frac{\bm z_i^\top \bm \phi}{\| \bm z_i \|~\| \bm \phi \| } \le \frac{\zeta}{r_2}, } \end{eqnarray} where \begin{eqnarray*} \bm \phi \triangleq \bm m_1 - \bm m_2, ~ \zeta \triangleq
\frac{1}{2 \| \bm \phi \| } ( \| \bm \phi \|^2 + r_2^2 - r_1^2 ), ~ \bm \psi
\triangleq \bm m_2 + \zeta \bm \phi / \|\bm \phi\| , ~ \kappa \triangleq \sqrt{ r_2^2 - \zeta^2}. \end{eqnarray*} \end{theo} \end{it}
\noindent The proof is presented in Appendix \ref{app:proofs}.
Note that IT is guaranteed to be more powerful than BT1 and BT2 because $\Theta^{\rm (IT)}_{[C]}$ is the intersection of $\Theta^{\rm (BT1)}_{[C]}$ and $\Theta^{\rm (BT2)}_{[C]}$.
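A direct transcription of Theorem \ref{theo:IT} in the linear-kernel case reads as follows (a sketch with our own naming; we assume the two spheres actually intersect, so that $\kappa$ is real, and that $\bm m_1 \neq \bm m_2$):
\begin{verbatim}
import numpy as np

def intersection_test_bounds(z, m1, r1, m2, r2):
    phi = m1 - m2
    nphi = np.linalg.norm(phi)
    zeta = (nphi**2 + r2**2 - r1**2) / (2.0 * nphi)
    psi = m2 + zeta * phi / nphi
    kappa = np.sqrt(r2**2 - zeta**2)
    nz = np.linalg.norm(z)
    zphi = z @ phi
    cap = np.sqrt(max(nz**2 - zphi**2 / nphi**2, 0.0))

    def bound(sign):  # sign = -1: lower, +1: upper
        c = sign * zphi / (nz * nphi)
        if c < (zeta - nphi) / r1:   # BT2 ball inactive
            return z @ m1 + sign * r1 * nz
        if c > zeta / r2:            # BT1 ball inactive
            return z @ m2 + sign * r2 * nz
        return z @ psi + sign * kappa * cap

    return bound(-1.0), bound(+1.0)
\end{verbatim}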
\begin{figure}
\caption{A geometric interpretation of the two necessary conditions NC1 and NC2 when the reference solution $\bm w^*_{[C_{\rm ref}]}$ is used as described in \S \ref{sec:in.practice}.}
\label{fig:theta}
\end{figure}
\section{Safe Sample Screening in Practice} \label{sec:in.practice}
In order to use the safe sample screening methods in practice, we need two additional pieces of side information: a feasible solution $(\tilde{\bm w}, \tilde{\xi})$ and the optimal solution $(\bm w^*_{[\check{C}]}, \xi^*_{[\check{C}]})$ for a different regularization parameter $\check{C}$.
Hereafter, we focus on the particular situation in which the optimal solution $\bm w^*_{[C_{\rm ref}]}$ for a smaller $C_{\rm ref} < C$ is available, and call such a solution a \emph{reference solution}.
We later see that such a reference solution is easily available in practical model-building processes.
Let $ \xi^*_{ [C_{\rm ref}] } \triangleq \sum_{i \in \NN_n} \max \{ 0, 1 - y_i f( \bm x_i ; \bm w^*_{ [C_{\rm ref}] }) \} $.
By replacing both of $(\tilde{\bm w}, \tilde{\xi})$ and $(\bm w^*_{[\check{C}]}, \xi^*_{[\check{C}]})$ with $(\bm w^*_{[C_{\rm ref}]}, \xi^*_{[C_{\rm ref}]})$, the centers and the radii of $\Theta^{\rm (BT1)}_{[C]}$ and $\Theta^{\rm (BT2)}_{[C]}$ are rewritten as \begin{eqnarray*}
\bm m_1 = \frac{C + C_{\rm ref}}{2 C_{\rm ref}} \bm w^*_{[C_{\rm ref}]},
r_1 = \frac{C - C_{\rm ref}}{2 C_{\rm ref}} \| \bm w^*_{[C_{\rm ref}]}\|, \end{eqnarray*} \begin{eqnarray*}
\bm m_2 = \frac{1}{2}(\bm w^*_{[C_{\rm ref}]} + C \bm z_{\hat{\bm s}}),
r_2 = \sqrt{\| \bm m_2 \|^2 + C(\xi^*_{[C_{\rm ref}]} - \hat{\bm s}^\top \bm 1)}. \end{eqnarray*}
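In the linear-kernel case, these quantities can be computed directly from the reference solution (a sketch with our own naming; \texttt{s\_hat} is the binary vector of NC3, chosen as in \S \ref{subsec:specific.s} below):
\begin{verbatim}
import numpy as np

def bt_regions_from_reference(w_ref, xi_ref,
                              C, C_ref, Z, s_hat):
    m1 = (C + C_ref) / (2.0 * C_ref) * w_ref
    r1 = ((C - C_ref) / (2.0 * C_ref)
          * np.linalg.norm(w_ref))
    z_s = Z[s_hat].sum(axis=0)       # z_{s_hat}
    m2 = 0.5 * (w_ref + C * z_s)
    # r2^2 is nonnegative whenever the region is non-empty.
    r2 = np.sqrt(m2 @ m2 + C * (xi_ref - s_hat.sum()))
    return m1, r1, m2, r2
\end{verbatim}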
A geometric interpretation of the two necessary conditions NC1 and NC2 in this special case is illustrated in \figurename \ref{fig:theta}.
In the rest of this section, we discuss how to obtain reference solutions and other practical issues.
\subsection{How to obtain a reference solution} \label{subsec:how.to.compute.reference.solution}
The following lemma implies that, for a sufficiently small regularization parameter $C$, we can make use of a trivially obtainable reference solution.
\begin{it} \begin{lemm} \label{lemm:C.min} Let
$C_{\rm min} \triangleq 1/\max_{i \in \bN_n} (\bm Q \ones)_i$.
Then,
for
$C \in (0,C_{\rm min}]$,
the optimal solution of the dual SVM formulation
\eq{eq:d.SVM.prob.}
is written as
$\bm \alpha^*_{[C]} = C \ones$.
\end{lemm} \end{it}
The proof is presented in Appendix \ref{app:proofs}.
Without loss of generality, we only consider the case with $C > C_{\rm min}$, where we can use the solution $\bm w^*_{[C_{\rm min}]}$ as the reference solution.
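In code, this trivially obtainable reference solution amounts to one matrix-vector product (our own sketch):
\begin{verbatim}
import numpy as np

def c_min(Q):
    # Lemma: for C <= C_min, alpha*_[C] = C * ones.
    return 1.0 / np.max(Q @ np.ones(Q.shape[0]))
\end{verbatim}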
\subsection{Regularization path computation} \label{subsec:regularization.path}
In the model selection process, a sequence of SVM classifiers with different regularization parameters $C$ is trained.
Such a sequence of the solutions is sometimes referred to as \emph{regularization path} \cite{Hastie04a,Giesen12b}.
Let us write the sequence as $C_1 < \ldots < C_T$.
We note that an SVM is easier to train (the convergence tends to be faster) for a smaller regularization parameter $C$.
Therefore, it is reasonable to compute the regularization path from smaller $C$ to larger $C$ with the help of the \emph{warm-start} approach \cite{decoste00a}, where the previous optimal solution at $C_{t - 1}$ is used as the initial starting point of the next optimization problem for $C_t$.
In such a situation, we can make use of the previous solution at $C_{t - 1}$ as the reference solution.
Note that this is more advantageous than using $C_{\rm min}$ as the reference solution because the rules can be more powerful when the reference solution is closer to $\bm w^*_{[C]}$.
Moreover, the rule evaluation cost can be reduced in the regularization path computation scenario (see \S \ref{subsec:complexity}).
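A schematic path loop then looks as follows (our own sketch; \texttt{solve\_reduced} stands for any exact solver of the reduced dual problem, and its interface is an assumption of ours, as is the screening step left as a comment):
\begin{verbatim}
import numpy as np

def screened_path(Cs, Z, solve_reduced):
    # Cs: increasing C_1 < ... < C_T with C_1 <= C_min.
    n = Z.shape[0]
    alpha = Cs[0] * np.ones(n)   # exact at C_1 (Lemma)
    path = [alpha.copy()]
    for C in Cs[1:]:
        w_ref = Z.T @ alpha      # reference solution
        # ... evaluate BT1/BT2/IT rules from w_ref here,
        # fix the screened alphas, and warm-start the
        # solver on the remaining samples:
        alpha = solve_reduced(C, alpha)
        path.append(alpha.copy())
    return path
\end{verbatim}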
\subsection{How to select $\hat{\bm s}$ for the necessary condition 3} \label{subsec:specific.s}
We discuss how to select $\hat{\bm s} \in \{0, 1\}^n$ for NC3.
Since a smaller region leads to a more powerful rule, it is reasonable to select $\hat{\bm s} \in \{0, 1\}^n$ so that the volume of the intersection region $\Theta_{[C]}^{\rm (IT)} \equiv \Theta_{[C]}^{\rm (BT1)} \cap \Theta_{[C]}^{\rm (BT2)}$ is as small as possible.
We select $\hat{\bm s}$ such that the distance between the two balls $\Theta^{\rm (BT1)}_{[C]}$ and $\Theta^{\rm (BT2)}_{[C]}$ is maximized, while the radius of $\Theta^{\rm (BT2)}_{[C]}$ is minimized,
i.e., \begin{eqnarray} \label{eq:hat.s.selection} \hat{\bm s} = \arg\max_{\bm s \in \{0,1\}^n}
\left( \| \bm m_1 - \bm m_2 \|^2 - r_2^2 \right) = \arg\max_{\bm s \in \{0,1\}^n} \sum_{i \in \bN_n} s_i (1- \frac{C + C_{\rm ref}}{2C_{\rm ref}} y_i f(\bm x_i;\bm w_{[C_{\rm ref}]}^*)). \end{eqnarray}
Note that the solution of \eq{eq:hat.s.selection} can be straightforwardly obtained as \begin{eqnarray*}
\hat{s}_i = I\{1 - \frac{C + C_{\rm ref}}{2C_{\rm ref}} y_i f(\bm x_i;\bm w_{[C_{\rm ref}]}^*) > 0\},
~
i \in \NN_n, \end{eqnarray*} where $I(\cdot)$ is the indicator function.
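The closed-form maximizer translates into a single vectorized comparison (our own sketch; \texttt{yf\_ref} holds $y_i f(\bm x_i; \bm w^*_{[C_{\rm ref}]})$):
\begin{verbatim}
import numpy as np

def select_s_hat(yf_ref, C, C_ref):
    return (1.0 - (C + C_ref) / (2.0 * C_ref)
            * yf_ref) > 0.0
\end{verbatim}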
\subsection{Kernelization} \label{subsec:kernelization}
The proposed safe sample screening rules can be \emph{kernelized}, i.e., all the computations can be carried out without explicitly working in the high-dimensional feature space $\cF$.
Remembering that $Q_{ij} = \bm z_i^\top \bm z_j \equiv y_i \Phi(\bm x_i)^\top \Phi(\bm x_j) y_j$, we can rewrite the rules by using the following relations:
\begin{eqnarray*}
\| \bm z_i \| = \sqrt{Q_{ii}},~
\| \bm w_{[C_{\rm ref}]}^* \| = \sqrt{\bm \alpha_{[C_{\rm ref}]}^{*\top} \bm Q \bm \alpha_{[C_{\rm ref}]}^*},~
\bm z_i^\top \bm m_1 = \frac{C + C_{\rm ref}}{2 C_{\rm ref}} (\bm Q \bm \alpha^*_{[C_{\rm ref}]})_i, \end{eqnarray*} \begin{eqnarray*}
\bm z_i^\top \bm m_2 = \frac{1}{2} (\bm Q \bm \alpha^*_{[C_{\rm ref}]})_i + \frac{C}{2} (\bm Q \hat{\bm s})_i,~
\|\bm m_1\| = \frac{C + C_{\rm ref}}{2 C_{\rm ref}} \sqrt{\bm \alpha^{*\top}_{[C_{\rm ref}]} \bm Q \bm \alpha^{*}_{[C_{\rm ref}]}}, \end{eqnarray*} \begin{eqnarray*}
\|\bm m_2\| = \frac{1}{2} \sqrt{(\bm \alpha^{*}_{[C_{\rm ref}]} + C \hat{\bm s})^\top \bm Q (\bm \alpha^{*}_{[C_{\rm ref}]} + C \hat{\bm s})},~ \bm m_1^\top \bm m_2 = \frac{C + C_{\rm ref}}{4 C_{\rm ref}} (\bm \alpha^{*\top}_{[C_{\rm ref}]} \bm Q \bm \alpha^{*}_{[C_{\rm ref}]} + C \bm \alpha^{*\top}_{[C_{\rm ref}]} \bm Q \hat{\bm s}). \end{eqnarray*}
Exploiting the sparsities of $\bm \alpha_{[C_{\rm ref}]}^*$ and $\hat{\bm s}$, some parts of the rule evaluations can be done efficiently (see \S \ref{subsec:complexity} for details).
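Collecting these relations, the rule ingredients can be computed purely from $\bm Q$ (our own sketch; \texttt{Qa} is the cached $\bm Q \bm \alpha^*_{[C_{\rm ref}]}$ when available):
\begin{verbatim}
import numpy as np

def kernel_rule_quantities(Q, alpha_ref, s_hat, C, C_ref):
    Qa = Q @ alpha_ref                 # often cached
    Qs = Q @ s_hat.astype(float)
    aQa = alpha_ref @ Qa
    z_norm = np.sqrt(np.diag(Q))       # ||z_i||
    c1 = (C + C_ref) / (2.0 * C_ref)
    z_m1 = c1 * Qa                     # z_i' m_1
    z_m2 = 0.5 * Qa + 0.5 * C * Qs     # z_i' m_2
    norm_m1 = c1 * np.sqrt(aQa)
    v = alpha_ref + C * s_hat
    norm_m2 = 0.5 * np.sqrt(v @ Q @ v)
    return z_norm, z_m1, z_m2, norm_m1, norm_m2
\end{verbatim}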
\subsection{Computational Complexity} \label{subsec:complexity}
The computational complexities for evaluating the safe sample screening rules are summarized in Table \ref{tbl:complexity}.
Note that the rule evaluation cost can be reduced in the regularization path computation scenario.
The bottleneck of the rule evaluation is in the computation of $\bm \alpha_{[C_{\rm ref}]}^{*\top} \bm Q \bm \alpha_{[C_{\rm ref}]}^*$.
Since many SVM solvers (including {LIBLINEAR} and {LIBSVM}) use the value $\bm Q \bm \alpha$ in their internal computation and store it in a cache, we can make use of the cached value to circumvent this bottleneck.
Furthermore, BT2 (and henceforth IT) can be efficiently computed in regularization path computation scenario by caching $\bm Q \hat{\bm s}$.
\begin{table}[h]
\label{tbl:complexity}
\caption{The computational complexities of the rule evaluations}
\begin{center}
\begin{tabular}{l|l|l|l}
& linear & kernel & kernel (cache)
\\ \hline
BT1 & $\mathcal{O}(n d_s)$ & $\mathcal{O}(n^2)$ & $\mathcal{O}(n)$ \\
BT2 & $\mathcal{O}(n d_s)$ & $\mathcal{O}(n^2)$ & $\mathcal{O}(n \|\Delta \hat{\bm s}\|_0)$ \\
IT & $\mathcal{O}(n d_s)$ & $\mathcal{O}(n^2)$ & $\mathcal{O}(n \|\Delta \hat{\bm s}\|_0)$ \\
\end{tabular}
\end{center}
For each of
Ball Test 1 (BT1),
Ball Test 2 (BT2),
and
Intersection Test (IT),
the complexities
for evaluating the safe sample screening rules for all
$i \in \NN_n$
of
linear SVM
and
nonlinear kernel SVM
(with and without using the cache values
as discussed in \S \ref{subsec:regularization.path})
are shown.
Here, $d_s$ indicates the average number of non-zero features for each sample
and
$\|\Delta \hat{\bm s}\|_0$
indicates the number of different elements in
$\hat{\bm s}$
between
two consecutive
$C_{t - 1}$
and
$C_t$
in regularization path computation scenario. \end{table}
\subsection{Relation with existing approaches} \label{subsec:relation.with.feature.screening}
This work is highly inspired by the \emph{safe feature screening} introduced by El Ghaoui et al. \cite{ElGhaoui12b}.
After the seminal work by El Ghaoui et al. \cite{ElGhaoui12b}, many efforts have been devoted to improving screening performance \cite{Xiang12a,Xiang12b,Dai12a,Wang12a,Wang13c,Wang13d,Wu13a,Wang13a,Wang13b}.
All the above-listed studies are designed for screening the features in $L_1$-penalized linear models\footnote{ El Ghaoui et al. \cite{ElGhaoui12b} also studied safe feature screening for the $L_1$-penalized SVM.
Note that their work is designed for screening features based on the property of the $L_1$ penalty, and it cannot be used for sample screening.}~\footnote{
Jaggi et al. \cite{Jaggie13a} discussed the connection between the LASSO and the ($L_2$-hinge) SVM, where they commented that the techniques used in safe feature screening for the LASSO might also be useful in the context of the SVM. }.
To the best of our knowledge, the approach presented in our conference paper \cite{Ogawa13a} is the first safe sample screening method that can \emph{safely} eliminate a subset of the samples before actually solving the training optimization problem.
Note that this extension is non-trivial because the feature sparseness in a linear model stems from the $L_1$ penalty, while the sample sparseness in an SVM originates from the large-margin principle.
After our conference paper \cite{Ogawa13a} was published, Wang et al. \cite{Wang13e} recently proposed a method called \emph{DVI test}, and showed that it is more powerful than our previous method in \cite{Ogawa13a}.
In this paper, we further go beyond the DVI test.
We can show that DVI test is equivalent to Ball Test 1 (BT1) in a special case (the equivalence is shown in Appendix \ref{app:equivalence.DVI}).
Since the region $\Theta^{\rm (IT)}_{[C]}$ is included in the region $\Theta^{\rm (BT1)}_{[C]}$, Intersection Test (IT) is theoretically guaranteed to be more powerful than DVI test.
We will also empirically demonstrate that IT consistently outperforms DVI test in terms of screening performances in \S \ref{sec:experiments}.
One of our non-trivial contributions is in \S \ref{subsec:Ball.tests.for.SVM}, where a ball-form region is constructed by first considering a region in the expanded solution space and then projecting it onto the original solution space.
The idea of merging two balls for constructing the intersection region in \S \ref{subsec:Intersection.Test} is also our original contribution.
We conjecture that the basic idea of Intersection Test can be also useful for safe \emph{feature} screening.
\section{Experiments} \label{sec:experiments}
We demonstrate the advantage of the proposed safe sample screening methods through numerical experiments.
We first describe the problem setup of \figurename \ref{fig:toy.example} in \S \ref{subsec:toy.experiment}.
In \S \ref{subsec:screening.rate}, we report the screening rates, i.e., what percentage of the non-SVs can be screened out by safe sample screening.
In \S \ref{subsec:computation.time}, we show that the computational cost of the state-of-the-art SVM solvers ({LIBSVM} \cite{Chang11a} and {LIBLINEAR} \cite{Fan08b}\footnote{ Since the original {LIBSVM} cannot be used for models without a \emph{bias} term,
we slightly modified the code; {LIBLINEAR} was used as-is because it is originally designed for models without a bias term.} ) can be substantially reduced with the use of safe sample screening.
Note that the DVI test proposed in \cite{Wang13e} is identical to BT1 in all the experimental setups considered here (see Appendix \ref{app:equivalence.DVI}).
Table \ref{tbl:data.set} summarizes the benchmark data sets used in our experiments.
\begin{table}[h] \caption{Benchmark data sets used in the experiments} \label{tbl:data.set} \begin{center}
\begin{tabular}{l|r|r} Data Set & \#samples ($n$) & \#features ($d$) \\ \hline \hline D01: B.C.D & 569 & 30 \\
D02: dna & 2,000 & 180 \\
D03: DIGIT1 & 1,500 & 241 \\
D04: satimage & 4,435 & 36 \\ \hline
D05: gisette & 6,000 & 5,000 \\
D06: mushrooms & 8,124 & 112 \\
D07: news20 & 19,996 & 1,355,191 \\
D08: shuttle & 43,500 & 9 \\ \hline
D09: acoustic & 78,832 & 50 \\
D10: url & 2,396,130 & 3,231,961 \\
D11: kdd-a & 8,407,752 & 20,216,830 \\
D12: kdd-b & 19,264,097 & 29,890,095 \\
\end{tabular} \end{center}
\vspace*{2.5mm}
We refer
D01 $\sim$ D04
as \emph{small},
D05 and D08
as \emph{medium},
and D09 $\sim$ D12
as \emph{large} data sets.
We only used linear kernel for large data sets
because the kernel matrix computation for
$n > 50,000$
is computationally prohibitive. \end{table}
\subsection{Artificial toy example in \figurename \ref{fig:toy.example}} \label{subsec:toy.experiment}
The data set $\{(\bm x_i, y_i)\}_{i \in \NN_{1000}}$ in \figurename \ref{fig:toy.example} was generated as \begin{eqnarray*}
&&
\bm x_i
\sim
N([-0.5, -0.5]^\top, 1.5^2\bm{I})
\text{ and }
y_i = -1
~\text{for odd}~
i,
\\
&&
\bm x_i
\sim
N([+0.5, +0.5]^\top, 1.5^2\bm{I})
\text{ and }
y_i = +1
~\text{for even}~
i, \end{eqnarray*} where $\bm I$ is the identity matrix.
We considered the problem of learning a linear classifier at $C = 10$.
Intersection Test was conducted by using the reference solution at $C_{\rm ref} = 5$.
For the purpose of illustration, \figurename \ref{fig:toy.example} only highlights the area in which the samples are screened out as the members of $\cR$ (red and blue shaded regions).
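For reproducibility, the toy data can be generated as follows (our own sketch; the random seed is arbitrary, and the even/odd convention is adapted to 0-based indexing):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 1000
y = np.where(np.arange(n) % 2 == 0, 1, -1)
mu = np.where(y[:, None] > 0, 0.5, -0.5)  # +-[0.5, 0.5]
X = mu + 1.5 * rng.standard_normal((n, 2))
\end{verbatim}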
\subsection{Screening rate}
\label{subsec:screening.rate}
We report the screening rates of BT1, BT2 and IT.
The screening rate is defined as the number of the screened samples over the total number of the non-SVs
(both in $\cR$ and $\cL$).
The rules were constructed by using the optimal solution at
$C_{\rm ref} (< C)$
as the reference solution.
We used linear kernel and RBF kernel
$K(\bm x, \bm x^\prime) = \exp(-\gamma \|\bm x - \bm x^\prime\|^2)$
where
$\gamma \in \{0.1/d, 1/d, 10/d\}$
is a kernel parameter and $d$ is the input dimension.
Due to the space limitation,
we only show the results on four small data sets with $C = 10$ in
\figurename \ref{fig:screening.rate}.
In each plot,
the horizontal axis denotes
$C_{\rm ref}/C \in (0, 1]$.
In most cases,
the screening rates increased as
$C_{\rm ref}/C$
increases from 0 to 1,
i.e.,
the rules are more powerful when the reference solution
$\bm w^*_{[C_{\rm ref}]}$
is closer to
$\bm w^*_{[C]}$.
The screening rates of IT were always higher than those of BT1 and BT2
because
$\Theta^{\rm (IT)}_{[C]}$
is shown to be smaller than
$\Theta^{\rm (BT1)}_{[C]}$
and
$\Theta^{\rm (BT2)}_{[C]}$
by construction.
The three tests behaved similarly in other problem setups.
\begin{figure}
\caption{The screening rates of the three proposed safe screening tests
BT1 (red), BT2 (green) and IT (blue).}
\label{fig:screening.rate}
\end{figure}
\subsection{Computation time} \label{subsec:computation.time}
We investigate how much the computational cost of the entire SVM training process can be reduced by safe sample screening.
As the state-of-the-art SVM solvers, we used {LIBSVM} \cite{Chang11a} and {LIBLINEAR} \cite{Fan08b} for nonlinear and linear kernel cases, respectively\footnote{ In this paper, we only study exact batch SVM solvers, and do not consider online or sampling-based approximate solvers such as \cite{Crammer06a,ShalevShwartz07a,Hazan11a}. }.
Many SVM solvers use \emph{non-safe} sample screening heuristics in their inner loops.
The common basic idea in these heuristic approaches is to \emph{predict} which sample turns out to be SV or non-SV (prediction step), and to solve a smaller optimization problem defined only with the subset of the samples predicted as SVs (optimization step).
These two steps must be repeated until all the optimality conditions in \eq{eq:SVM.opt.} are satisfied because the prediction step in these heuristic approaches is \emph{not safe}.
In {LIBSVM} and {LIBLINEAR}, such a heuristic is called \emph{shrinking}\footnote{ It is interesting to note that shrinking algorithms in {LIBSVM} and {LIBLINEAR} make their decisions based only on the (signed) margin $y_i f(\bm x_i)$, i.e., if it is greater or smaller than a certain threshold, the corresponding sample is predicted as a member of $\cR$ or $\cL$, respectively.
On the other hand, the decisions made by our safe sample screening methods do not solely depend on $y_i f(\bm x_i)$, but also on the other quantities obtained from the reference solution (see \figurename \ref{fig:toy.example} for example). }.
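Schematically, such a (non-safe) shrinking heuristic can be summarized by the following sketch. This is only a conceptual illustration, not the actual {LIBSVM}/{LIBLINEAR} code; the three callbacks are hypothetical placeholders for solver-specific routines.
\begin{verbatim}
def shrinking_loop(samples, predict_sv_set, solve_subproblem, check_optimality):
    # predict_sv_set, solve_subproblem, and check_optimality are hypothetical
    # placeholders; they are NOT part of any solver's public API.
    model = None
    while True:
        working = predict_sv_set(samples, model)   # prediction step (not safe)
        model = solve_subproblem(working)          # optimization step
        if check_optimality(samples, model):       # verify KKT on ALL samples
            return model
        # some predictions were wrong; repeat with the updated model
\end{verbatim}
In contrast, a safe screening rule needs no such re-verification for the samples it removes.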
We compared the total computational costs of the following six approaches: \begin{itemize}
\item Full-sample training ({\bf Full}),
\item Shrinking ({\bf Shrink}),
\item Ball Test 1 ({\bf BT1}),
\item Shrinking + Ball Test 1 ({\bf Shrink+BT1}),
\item Intersection Test ({\bf IT}),
\item Shrinking + Intersection Test ({\bf Shrink+IT}). \end{itemize}
\vspace*{1mm} \noindent In {\bf Full} and {\bf Shrink}, we used {LIBSVM} or {LIBLINEAR} without and with the shrinking option, respectively.
In {\bf BT1} and {\bf Shrink+BT1}, we first screened out a subset of the samples by Ball Test 1, and the rest of the samples were fed into {LIBSVM} or {LIBLINEAR} to solve the smaller optimization problem without and with the shrinking option, respectively.
In {\bf IT} and {\bf Shrink+IT}, we used Intersection Test for safe sample screening.
\subsubsection{Single SVM training} \label{subsubsec:one.model.SVM}
First, we compared the computational costs of training a single linear SVM for the large data sets ($n > 50,000$).
Here, our task was to find the optimal solution at the regularization parameter $C = C_{\rm ref}/0.9$ using the reference solution at $C_{\rm ref} = 500C_{\rm min}$.
Table \ref{tbl:one.model.SVM} shows the average computational costs of 5 runs.
In all the setups, the best performance was obtained when shrinking and IT screening were used simultaneously ({\bf Shrink+IT}).
{\bf Shrink+BT1} also performed well, but it was consistently outperformed by {\bf Shrink+IT}.
\begin{table}[h] \caption{The computation time [sec] for training a single SVM.} \label{tbl:one.model.SVM} \begin{center} \hspace*{-4em}
\begin{tabular}{c||c|c||c|c|c|c||c|c|c|c}
& \multicolumn{2}{c||}{{LIBLINEAR}}
& \multicolumn{6}{c}{Safe Sample Screening} \\ \cline{2-11}
Data set & {\bf Full} & {\bf Shrink} & {\bf BT1} & {\bf Shrink+BT1} & {\bf Rule} & {\bf Rate} & {\bf IT} & {\bf Shrink+IT} & {\bf Rule} & {\bf Rate} \\ \hline \hline D09 & 98.2 & 2.57 & 95.1 & 2.21 & 0.0022 & 0.178 & 47.3 & {\bf 1.21} & 0.0214 & 0.51 \\
D10 & 1881 & 327 & 1690 & 247 & 0.0514 & 0.108 & 1575 & {\bf 228} & 2.24 & 0.125 \\
D11 & 2801 & 115 & 2699 & 97.2 & 0.203 & 0.136 & 2757 & {\bf 88.1} & 2.78 & 0.136 \\
D12 & 16875 & 4558 & 7170 & 4028 & 0.432 & 0.138 & 12002 & {\bf 3293} & 5.39 & 0.139 \end{tabular}
\vspace*{2.5mm}
The computation time of the best approach in each setup is written in boldface. {\bf Rule} and {\bf Rate} indicate the computation time and the screening rate of each rule, respectively.
\end{center} \end{table}
\subsubsection{Regularization path} \label{subsubsec:eps.app.path}
As described in \S \ref{subsec:regularization.path}, safe sample screening is especially useful in the regularization path computation scenario.
When we compute an SVM regularization path for an increasing sequence of the regularization parameters $C_1 < \ldots < C_T$, the previous optimal solution can be used as the reference solution.
We used a recently proposed \emph{$\eps$-approximation path ($\eps$-path)} algorithm \cite{Giesen12a,Giesen12b} for setting a practically meaningful sequence of regularization parameters.
The details of the $\eps$-approximation path procedure are described in Appendix \ref{app:eps-path}.
\begin{table}[p] \caption{The computation time [sec] for computing regularization path.} \label{tab:cost.path}
\begin{center}
\begin{tabular}{c|c||c|c||c|c|c|c}
& & \multicolumn{2}{c||}{{LIBSVM or LIBLINEAR}}
& \multicolumn{4}{c}{Safe Sample Screening} \\ \cline{3-8}
Data set & Kernel & {\bf Full} & {\bf Shrink} & {\bf BT1} & {\bf Shrink+BT1} & {\bf IT} & {\bf Shrink+IT} \\ \hline \hline
& Linear & 389 & 35.2 & 174 & {\bf 34.8} & 177 & 34.8 \\ \cline{2-8} D01 & RBF($0.1/d$) & 43.8 & 4.51 & 9.08 & {\bf 2.8} & 8.48 & 2.87 \\ & RBF($1/d$) & 2.73 & 0.68 & 0.435 & 0.295 & 0.464 & {\bf 0.294} \\ & RBF($10/d$) & 0.73 & 0.4 & 0.312 & 0.221 & 0.266 & {\bf 0.213} \\ \hline
& Linear & 67 & 9.09 & 13.6 & {\bf 8.05} & 13.4 & 8.14 \\ \cline{2-8} D02 & RBF($0.1/d$) & 298 & 106 & 253 & 87.7 & 242 & {\bf 80.7} \\ & RBF($1/d$) & 13.9 & 5.27 & 7.14 & {\bf 2.5} & 7.03 & 2.62 \\ & RBF($10/d$) & 4.98 & 2.68 & 3.18 & 1.96 & 2.71 & {\bf 1.82} \\ \hline
& Linear & 369 & 59.3 & 221 & {\bf 56.7} & 167 & 56.9 \\ \cline{2-8} D03 & RBF($0.1/d$) & 938 & 261 & 928 & 262 & 741 & {\bf 203} \\ & RBF($1/d$) & 94.3 & 27.3 & 70.9 & 19.4 & 60.7 & {\bf 16.8} \\ & RBF($10/d$) & 6.93 & 2.71 & 2.92 & {\bf 0.77} & 2.45 & 0.794 \\ \hline
& Linear & 3435 & 33.7 & 3256 & {\bf 33.2} & 3248 & 33.2 \\ \cline{2-8} D04 & RBF($0.1/d$) & 1365 & 565 & 1325 & 547 & 1178 & {\bf 488} \\ & RBF($1/d$) & 635 & 218 & 392 & 129 & 277 & {\bf 88.7} \\ & RBF($10/d$) & 31 & 20.4 & 3.89 & {\bf 1.5} & 3.87 & 1.68 \\ \hline
& Linear & 1532 & 350 & 894 & {\bf 318} & 899 & 329 \\ \cline{2-8} D05 & RBF($0.1/d$) & 375 & 143 & 365 & 132 & 296 & {\bf 103} \\ & RBF($1/d$) & 63.9 & 30.1 & 33.4 & 13.5 & 25.4 & {\bf 10.2} \\ & RBF($10/d$) & 34.3 & 20.7 & 27.8 & 16.8 & 24.9 & {\bf 15.9} \\ \hline
& Linear & 19.8 & 2.64 & 8.12 & 2.08 & 8.57 & {\bf 2.03} \\ \cline{2-8} D06 & RBF($0.1/d$) & 1938 & 618 & 1838 & 572 & 1395 & {\bf 423} \\ & RBF($1/d$) & 239 & 103 & 164 & 62.3 & 134 & {\bf 50.6} \\ & RBF($10/d$) & 94.3 & 56.3 & 70.5 & 44.2 & 66.2 & {\bf 40.9} \\ \hline
& Linear & 2619 & {\bf 1665} & 2495 & 1697 & 2427 & 1769 \\ \cline{2-8} D07 & RBF($0.1/d$) & 10358 & 5565 & 10239 & {\bf 5493} & 10245 & 5770 \\ & RBF($1/d$) & 33960 & 12797 & 34019 & 12918 & 30373 & {\bf 10152} \\ & RBF($10/d$) & 270984 & 67348 & 270313 & 67062 & 264433 & {\bf 56427} \\ \hline
& Linear & 37135 & 67 & 35945 & {\bf 63.6} & 36386 & 67.8 \\ \cline{2-8} D08 & RBF($0.1/d$) & 278232 & 63192 & 275688 & 63608 & 253219 & {\bf 51932} \\ & RBF($1/d$) & 214165 & 60608 & 203155 & 56161 & 180839 & {\bf 48867} \\ & RBF($10/d$) & 167690 & 54364 & 129490 & 45644 & 125675 & {\bf 44463} \\ \hline
\end{tabular}
\vspace*{2.5mm}
The computation time of the best approach in each setup is written in boldface.
\end{center}
\end{table}
\begin{figure}
\caption{The screening rate in regularization path computation scenario for BT1 (red) and IT (blue).}
\label{fig:eps.app}
\end{figure}
In this scenario, we used the small and the medium data sets ($n \le 50,000$).
The largest regularization parameter was set as $C_T = 10^4$.
We used linear kernel and RBF kernel
$K(\bm x, \bm x^\prime) = \exp(-\gamma \|\bm x - \bm x^\prime\|^2)$
with
$\gamma \in \{0.1/d, 1/d, 10/d\}$.
In all the six approaches, we used the caching and warm-start techniques as described in \S \ref{subsec:regularization.path}.
Table \ref{tab:cost.path} summarizes the total computation time of the six approaches, and \figurename \ref{fig:eps.app} shows how screening rates change with $C$ in each data set (due to the space limitation, we only show the results on four medium data sets in \figurename \ref{fig:eps.app}).
Note first that the shrinking heuristic was very helpful, and safe sample screening alone ({\bf BT1} and {\bf IT}) was not as effective as shrinking.
However, except for one setup (D07, linear), simultaneously using shrinking and safe sample screening worked better than using shrinking alone.
As we discuss in \S \ref{subsec:complexity}, the rule evaluation cost of BT1 is lower than that of IT.
Therefore, if the screening rates of the two tests are the same, the former is slightly faster than the latter.
In Table \ref{tab:cost.path}, we see that {\bf Shrink+BT1} was a little faster than {\bf Shrink+IT} in several setups.
We conjecture that those small differences are due to the differences in the rule evaluation costs.
In the remaining setups, {\bf Shrink+IT} was faster than {\bf Shrink+BT1}.
The differences tend to be small for the linear kernel and for the RBF kernel with relatively small $\gamma$.
On the other hand, significant improvements were sometimes observed, especially when RBF kernels with relatively large $\gamma$ are used.
In \figurename~\ref{fig:eps.app}, we confirmed that the screening rates of IT were never worse than those of BT1.
In summary, the experimental results indicate that safe sample screening is often helpful for reducing the computational cost of the state-of-the-art SVM solvers.
Furthermore, Intersection Test seems to be the best safe sample screening method among those we considered here.
\section{Conclusion} \label{sec:Conclusion}
In this paper, we introduced a safe sample screening approach that can safely identify and screen out a subset of the non-SVs prior to the training phase.
We believe that our contribution would be of great practical importance in the current \emph{big data} era because it enables us to reduce the data size without sacrificing the optimality.
Our approach is quite general in the sense that it can be used together with any SVM solvers as a preprocessing step for reducing the data set size.
The experimental results indicate that safe sample screening does little harm even when it cannot screen out any samples, because the rule evaluation costs are much smaller than those of the SVM solvers.
Since the screening rates highly depend on the choice of the reference solution, an important direction for future work is to find better reference solutions.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments} \else
\section*{Acknowledgment} \fi
We thank Kohei Hatano and Masayuki Karasuyama for their fruitful comments.
We also thank Martin Jaggi for letting us know recent studies on approximate parametric programming.
IT thanks the support from MEXT Kakenhi 23700165 and CREST, JST.
\appendices
\section{Proofs} \label{app:proofs}
\begin{proof}[Proof of Lemma \ref{lemm:Ball.test}]
The lower bound
$\ell_{[C]i}$
is obtained as follows:
\begin{eqnarray*}
&&
\min_{\bm w}
~
y_i f(\bm x_i;\bm w)
~{\rm s.t.}~
\| \bm w - \bm m \|^2 \le r^2
=
\min_{\bm w } \max_{\mu > 0}
~
\bm z_i^\top \bm w + \mu ( \|\bm w - \bm m\|^2 - r^2)
\\
&=& \max_{\mu > 0} ~ (- \mu r^2) + \min_{\bm w} ~
( \mu \| \bm w - \bm m \|^2 + \bm z_i^\top \bm w ) = \max_{\mu > 0} ~
L(\mu) \triangleq -\mu r^2 - \frac{\|\bm z_i\|^2}{4\mu} + \bm z_i^\top \bm m,
\end{eqnarray*}
where
the Lagrange multiplier
$\mu > 0$
because the ball constraint is strictly active when the bound is attained.
By solving
$\partial L(\mu)/\partial \mu = 0$,
the optimal Lagrange multiplier is given as
$\mu = \|\bm z_i\|/2r$.
Substituting this into $L(\mu)$,
we obtain
\begin{eqnarray*}
\max_{\mu \ge 0} L(\mu) = \bm z_i^\top \bm m - r \|\bm z_i\|.
\end{eqnarray*}
The upper bound $u_{[C]i}$ is obtained similarly. \end{proof}
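The closed form can also be checked numerically: the minimizer of $\bm z_i^\top \bm w$ over the ball is $\bm w = \bm m - r \bm z_i / \|\bm z_i\|$. The following NumPy sketch (our own sanity check, not part of the original derivation) verifies the bound against random feasible points:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d = 5
z, m, r = rng.normal(size=d), rng.normal(size=d), 0.7

lower = z @ m - r * np.linalg.norm(z)        # closed-form lower bound
w_star = m - r * z / np.linalg.norm(z)       # analytic minimizer on the sphere
assert np.isclose(z @ w_star, lower)

for _ in range(1000):                        # random points inside the ball
    u = rng.normal(size=d)
    u *= rng.uniform() * r / np.linalg.norm(u)
    assert z @ (m + u) >= lower - 1e-9
\end{verbatim}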
\begin{proof}[Proof of Lemma~\ref{lemm:structural.svm.to.ball}]
By substituting $\xi$ in the second inequality in
\eq{eq:tilde.theta}
into the first inequality,
we immediately have
$\| \bm w - \bm m\| \le r$. \end{proof}
\begin{proof}[Proof of Lemma \ref{lemm:nc1}]
From
Proposition 2.1.2 in
\cite{Bertsekas99a},
the optimal solution
$(\bm w^*_{[C]}, \xi^*_{[C]})$
and
a feasible solution
$(\tilde{\bm w}, \tilde{\xi})$
satisfy the following relationship:
\begin{eqnarray*}
\nabla \cP_{[C]}(\bm w^*_{[C]}, \xi^*_{[C]})^\top
\left(
\mtx{c}{
\tilde{\bm w} \\
\tilde{\xi} \\
}
-
\mtx{c}{
\bm w_{[C]}^* \\
\xi_{[C]}^* \\
}
\right)
=
\mtx{cc}{
\bm w_{[C]}^{*\top} &
C
}
\left(
\mtx{c}{
\tilde{\bm w} \\
\tilde{\xi} \\
}
-
\mtx{c}{
\bm w_{[C]}^* \\
\xi_{[C]}^* \\
}
\right)
\ge 0.
\end{eqnarray*} \end{proof}
\begin{proof}[Proof of Lemma \ref{lemm:nc2}]
From Proposition 2.1.2 in
\cite{Bertsekas99a},
the optimal solution
$(\bm w^*_{[\check{C}]}, \xi^*_{[\check{C}]})$
and
a feasible solution
$(\bm w^*_{[C]}, \xi^*_{[C]})$
satisfy the following relationship:
\begin{eqnarray*}
\nabla \cP_{[\check{C}]}(\bm w^*_{[\check{C}]}, \xi^*_{[\check{C}]})^\top
\left(
\mtx{c}{
\bm w_{[C]}^* \\
\xi_{[C]}^* \\
}
-
\mtx{c}{
\bm w_{[\check{C}]}^* \\
\xi_{[\check{C}]}^* \\
}
\right)
=
\mtx{cc}{
\bm w_{[\check{C}]}^{*\top} &
\check{C}
}
\left(
\mtx{c}{
\bm w_{[C]}^* \\
\xi_{[C]}^* \\
}
-
\mtx{c}{
\bm w_{[\check{C}]}^* \\
\xi_{[\check{C}]}^* \\
}
\right)
\ge 0.
\end{eqnarray*} \end{proof}
\begin{proof}[Proof of Lemma \ref{lemm:nc3}]
\eq{eq:nc3} is necessary for the optimal solution just because it is
one of the $2^n$ constraints in
\eq{eq:str.SVM}. \end{proof}
\begin{proof}[Proof of Theorem \ref{theo:IT}] First, we prove the following lemma.
\vspace*{2.5mm}
{\it
\begin{lemm}
\label{lemm:IT.equivalence}
Let
$\overline{\bm w} \in \cF$
be the optimal solution of
\begin{eqnarray}
\label{eq:IT.opt.prob.1}
\min_{\bm w} ~ \bm z_i^\top \bm w
~
{\rm s.t.} ~ \| \bm w - \bm m_1 \|^2 \le r_1^2, ~ \| \bm w - \bm m_2 \|^2 \le r_2^2,
\end{eqnarray}
and
$(\underline{\bm w}, \underline{\xi}) \in \cF \times \RR$
be the optimal solution of
\begin{eqnarray}
\label{eq:IT.opt.prob.2}
\min_{\bm w, \xi} ~ \bm z_i^\top \bm w
~
{\rm s.t.}
~
\| \bm w \|^2 \le \xi,
~
\xi \le 2 \bm m_1^\top \bm w + r_1^2 - \| \bm m_1 \|^2,
~
\xi \le 2 \bm m_2^\top \bm w + r_2^2 - \| \bm m_2 \|^2.
\end{eqnarray}
Then, the two optimization problems
\eq{eq:IT.opt.prob.1}
and
\eq{eq:IT.opt.prob.2}
are equivalent in the sense that
$\bm z_i^\top \overline{\bm w} = \bm z_i^\top \underline{\bm w}$.
\end{lemm} }
\begin{proof}
Let
$\overline{\xi} \triangleq \| \overline{\bm w} \|^2$.
Then,
$(\overline{\bm w}, \overline{\xi})$
is a feasible solution of
\eq{eq:IT.opt.prob.2}
because
\begin{eqnarray*}
\| \overline{\bm w} - \bm m_1 \|^2 \le r_1^2
~\Rightarrow~
2 \bm m_1^\top \overline{\bm w} + r_1^2 - \| \bm m_1 \|^2 \ge \| \overline{\bm w}\|^2 = \overline{\xi},
\\
\| \overline{\bm w} - \bm m_2 \|^2 \le r_2^2
~\Rightarrow~
2 \bm m_2^\top \overline{\bm w} + r_2^2 - \| \bm m_2 \|^2 \ge \| \overline{\bm w}\|^2 = \overline{\xi}.
\end{eqnarray*}
On the other hand,
$(\underline{\bm w}, \underline{\xi})$
is a feasible solution of
\eq{eq:IT.opt.prob.1}
because
\begin{eqnarray*}
\| \underline{\bm w} \|^2 \le \underline{\xi}
\text{ and }
\underline{\xi} \le 2 \bm m_1^\top \underline{\bm w} + r_1^2 - \| \bm m_1 \|^2
~\Rightarrow~
\| \underline{\bm w} - \bm m_1 \|^2 \le r_1^2,
\\
\| \underline{\bm w} \|^2 \le \underline{\xi}
\text{ and }
\underline{\xi} \le 2 \bm m_2^\top \underline{\bm w} + r_2^2 - \| \bm m_2 \|^2
~\Rightarrow~
\| \underline{\bm w} - \bm m_2 \|^2 \le r_2^2.
\end{eqnarray*}
These facts indicate that
$\bm z_i^\top \overline{\bm w} = \bm z_i^\top \underline{\bm w}$
for arbitrary
$\bm z_i \in \cF$.
\end{proof}
\vspace*{2.5mm}
We first note that at least one of the two balls $\Theta^{\rm (BT1)}_{[C]i}$ and $\Theta^{\rm (BT2)}_{[C]i}$
is strictly active when the lower bound is attained.
This means that it suffices to consider the following three cases:
\begin{itemize}
\item Case 1) $\Theta^{\rm (BT1)}_{[C]i}$ is active and $\Theta^{\rm (BT2)}_{[C]i}$ is inactive,
\item Case 2) $\Theta^{\rm (BT2)}_{[C]i}$ is active and $\Theta^{\rm (BT1)}_{[C]i}$ is inactive, and
\item Case 3) Both $\Theta^{\rm (BT1)}_{[C]i}$ and $\Theta^{\rm (BT2)}_{[C]i}$ are active.
\end{itemize}
From Lemma \ref{lemm:IT.equivalence},
the lower bound
$\ell^{\rm (IT)}_{[C]i}$
is the solution of
\begin{eqnarray}
\label{eq:IT.proof.equivalent.prob}
\min_{\bm w, \xi}
~
\bm z_i^\top \bm w
~
{\rm s.t.}
~
\| \bm w \|^2 \le \xi,
~
\xi \le 2 \bm m_1^\top \bm w + r_1^2 - \| \bm m_1 \|^2,
~
\xi \le 2 \bm m_2^\top \bm w + r_2^2 - \| \bm m_2 \|^2.
\end{eqnarray}
Introducing the Lagrange multipliers
$\mu, \nu_1, \nu_2 \in \RR_{+}$
for the three constraints in
\eq{eq:IT.proof.equivalent.prob},
we write the Lagrangian of the problem
\eq{eq:IT.proof.equivalent.prob}
as
$L(\bm w, \xi, \mu, \nu_1, \nu_2)$.
From the stationary conditions,
we have
\begin{eqnarray}
\label{eq:IT.proof.stationary.cond}
\pd{L}{\bm w} = 0
~\Leftrightarrow~
\bm w = \frac{1}{2 \mu} (2 \nu_1 \bm m_1 + 2 \nu_2 \bm m_2 - \bm z_i),
~
\pd{L}{\xi} = 0
~\Leftrightarrow~
\mu - \nu_1 - \nu_2 = 0.
\end{eqnarray}
where
$\mu > 0$
because at least one of the two balls $\Theta^{\rm (BT1)}_{[C]i}$ and $\Theta^{\rm (BT2)}_{[C]i}$ is strictly active.
{\bf Case 1})
Let us first consider the case where $\Theta^{\rm (BT1)}_{[C]i}$ is active and $\Theta^{\rm (BT2)}_{[C]i}$ is inactive,
i.e.,
$\| \bm w - \bm m_1 \|^2 = r_1^2$
and
$\| \bm w - \bm m_2 \|^2 < r_2^2$.
Noting that
$\nu_2 = 0$,
the latter can be rewritten as
\begin{eqnarray*}
\| \bm w - \bm m_2 \|^2 < r_2^2
~\Leftrightarrow~
\frac{- \bm z_i^\top \bm \phi}{\|\bm z_i\| \|\bm \phi\|} < \frac{\zeta - \|\bm \phi\|}{r_1},
\end{eqnarray*}
where we have used the stationary condition in
\eq{eq:IT.proof.stationary.cond}.
In this case,
it is clear that the lower bound is identical with that of BT1, i.e.,
$\ell^{\rm {(IT)}}_{[C]i} = \ell^{\rm {(BT1)}}_{[C]i}$.
{\bf Case 2})
Next, let us consider the case where $\Theta^{\rm (BT2)}_{[C]i}$ is active and $\Theta^{\rm (BT1)}_{[C]i}$ is inactive,
i.e.,
$\| \bm w - \bm m_2 \|^2 = r_2^2$
and
$\| \bm w - \bm m_1 \|^2 < r_1^2$.
In the same way as Case 1),
the latter condition is rewritten as
\begin{eqnarray*}
\| \bm w - \bm m_1 \|^2 < r_1^2
~\Leftrightarrow~
\frac{\zeta}{r_2} < \frac{- \bm z_i^\top \bm \phi}{\|\bm z_i\| \|\bm \phi\|}.
\end{eqnarray*}
In this case,
the lower bound of IT is identical with that of BT2, i.e.,
$\ell^{\rm {(IT)}}_{[C]i} = \ell^{\rm {(BT2)}}_{[C]i}$.
{\bf Case 3})
Finally, let us consider the remaining case where both of the two balls $\Theta^{\rm (BT1)}_{[C]i}$ and $\Theta^{\rm (BT2)}_{[C]i}$ are strictly active.
From the conditions of Case 1) and Case 2),
the condition of Case 3) is written as \begin{eqnarray}
\label{eq:proof6.case3.cond}
\frac{\zeta - \|\bm \phi\|}{r_1}
\le
\frac{- \bm z_i^\top \bm \phi}{\|\bm z_i\| \|\bm \phi\|}
\le
\frac{\zeta}{r_2}. \end{eqnarray}
After plugging the stationary conditions
\eq{eq:IT.proof.stationary.cond}
into
$L(\bm w, \xi, \mu, \nu_1, \nu_2)$,
the solution of the following linear system of equations \begin{eqnarray*}
\pd{L}{\mu} = 0, ~
\pd{L}{\nu_1} = 0, ~
\pd{L}{\nu_2} = 0, \end{eqnarray*}
is given as \begin{eqnarray}
\label{eq:IT-proof.mu.nu1.nu2}
\mu
=
\frac{1}{2\kappa}\sqrt{\|\bm z_i\|^2 - \frac{(\bm z_i^\top\bm \phi)^2}{\|\bm \phi\|^2}},
~
\nu_1
=
\mu \frac{\zeta}{\|\bm \phi\|} + \frac{\bm z_i^\top \bm \phi}{2\|\bm \phi\|^2},
~
\nu_2
=
\mu - \nu_1. \end{eqnarray}
From \eq{eq:proof6.case3.cond}, $\mu, \nu_1, \nu_2$ in \eq{eq:IT-proof.mu.nu1.nu2} are shown to be non-negative, meaning that \eq{eq:IT-proof.mu.nu1.nu2} are the optimal Lagrange multipliers.
By plugging these
$\mu, \nu_1, \nu_2$
into
$\bm w$ in
\eq{eq:IT.proof.stationary.cond}, the lower bound is obtained as \begin{eqnarray*}
\ell_i^{\rm (IT)}
=
\bm z_i^\top \bm \psi - \kappa \sqrt{\| \bm z_i \|^2 - \frac{(\bm z_i^\top \bm \phi)^2}{\| \bm \phi \|^2}}.
\end{eqnarray*}
By combining the three cases above,
the lower bound
\eq{eq:it.test.low}
follows.
The upper bound
\eq{eq:it.upp}
can be similarly derived. \end{proof}
\begin{proof}[Proof of Lemma \ref{lemm:C.min}]
It suffices to show that
$\bm \alpha = C \ones$
satisfies the optimality condition for any
$C \in (0,C_{\rm min}]$.
Remembering that
$f(\bm x_i) = \sum_{j \in \NN_n} \alpha_j y_j K(\bm x_i, \bm x_j) = C (\bm Q \ones)_i$,
we have \begin{eqnarray*}
\max_{i \in \NN_n} y_i f(\bm x_i) = \max_{i \in \NN_n} C (\bm Q \ones)_i \le C_{\rm min} \max_{i \in \NN_n} (\bm Q \ones)_i = 1. \end{eqnarray*}
Note that positive semi-definiteness of the matrix $\bm Q$ implies $\ones^\top \bm Q \ones \ge 0$, so at least one component of $\bm Q \ones$ must be nonnegative; hence $\max_{i \in \NN_n} (\bm Q \ones)_i \ge 0$ and the above inequality holds.
It implies that all the $n$ samples are in either $\cE$ or $\cL$, where $\alpha_i = C~\forall i \in \NN_n$ clearly satisfies the optimality. \end{proof}
\section{A comparison with the method in \cite{Ogawa13a}} \label{app:v.s.ICML.ver.}
\begin{figure}
\caption{ The comparison between Intersection Test and Dome Test \cite{Ogawa13a}.
The red and blue bars (the left vertical axis) indicate the screening rates, i.e., the number of screened samples in $\cR$ and $\cL$ out of the total size
$|\cR| + |\cL|$.
The red and blue lines (the right vertical axis) show the speedup improvement, where the baseline is naive full-sample training without any screening.}
\label{fig:v.s.ICML.ver.}
\end{figure}
We briefly describe the safe sample screening method proposed in our preliminary conference paper \cite{Ogawa13a}, which we call the \emph{Dome Test (DT)}\footnote{ We call it the \emph{Dome Test} because the shape of the region $\Theta$ looks like a dome (see \cite{Ogawa13a} for details).}.
We discuss the differences between DT and IT, and compare their screening rates and computation times in simple numerical experiments.
DT is summarized in the following theorem:
\begin{it}
\begin{theo}[Dome Test] \label{theo:dome.test}
Consider two positive scalars
$C_a < C_b$.
Then,
for any
$C \in [C_a, C_b]$,
the lower and the upper bounds of
$y_i f(\bm x_i; \bm w^*_{[C]})$
are given by
\begin{align*}
\ell_{[C]i}^{\rm (DT)}
\triangleq
\min_{\bm w \in \Theta} y_i f(\bm x_i; \bm w)
=
\mycase{
-\sqrt{2 \gamma_b} \|\bm z_i\| &
\text{if}~
{\scriptstyle\frac{-\bm z_i^\top\bm w_{[C_a]}^*}{\| \bm z_i \|}} \ge {\scriptstyle\frac{\gamma_a\sqrt{2}}{\sqrt{\gamma_b}}}
\\ \bm z_i^\top\bm w_{[C_a]}^*
- \sqrt{{\scriptstyle\frac{\gamma_b - \gamma_a}{\gamma_a}}(\gamma_a\|\bm z_i\|^2 - (\bm z_i^\top \bm w_{[C_a]}^*)^2)}
&
\text{otherwise}. }
\end{align*}
and
\begin{align*} & u_{[C]i}^{\rm (DT)} \triangleq \max_{\bm w \in \Theta} y_i f(\bm x_i; \bm w) = \mycase{
\sqrt{2 \gamma_b} \|\bm z_i\| & \text{if}~
{\scriptstyle\frac{\bm z_i^\top\bm w_{[C_a]}^*}{\| \bm z_i \|}} \ge {\scriptstyle\frac{\gamma_a\sqrt{2}}{\sqrt{\gamma_b}}} \\ \bm z_i^\top\bm w_{[C_a]}^*
+ \sqrt{{\scriptstyle\frac{\gamma_b - \gamma_a}{\gamma_a}}(\gamma_a\|\bm z_i\|^2 - (\bm z_i^\top \bm w_{[C_a]}^*)^2)} & \text{otherwise}, }
\end{align*} where
$\gamma_a \triangleq \|\bm w_{[C_a]}^*\|^2$ and
$\gamma_b \triangleq \|\bm w_{[C_b]}^*\|^2$.
\end{theo}
\end{it}
\noindent See \cite{Ogawa13a} for the proof.
A limitation of DT is that we need to know a feasible solution with a larger $C_b > C$ as well as the optimal solution with a smaller $C_a < C$ (remember that we only need the latter for BT1, BT2 and IT).
As discussed in \S \ref{subsec:regularization.path}, we usually train an SVM regularization path from smaller $C$ to larger $C$ by using warm-start approach.
Therefore, it is sometimes computationally expensive to obtain a feasible solution with a larger $C_b > C$.
In \cite{Ogawa13a}, we used a somewhat intricate algorithm for obtaining such a feasible solution.
\figurename \ref{fig:v.s.ICML.ver.} shows the results of empirical comparison among DT and IT on the four data sets used in \cite{Ogawa13a} with linear kernel (CVX \cite{cvx12a} is used as the SVM solver in order to simply compare the effects of the screening performances).
Here, we fixed $C_{\rm ref} = C_a = 10^4 C_{\rm min}$ and varied $C$ in the range of $[0.5C_{\rm ref}, 0.95C_{\rm ref}]$.
For DT, we assumed that the optimal solution with $C_b = 1.3C$ can be used as a feasible solution, although this setup is somewhat unfair to IT.
Even under this setup, IT is clearly better on the \emph{B.C.D.} and \emph{IJCNN1} data sets, comparable on \emph{PCMAC}, and only slightly worse on \emph{MAGIC}.
The reason why DT behaved poorly even when $C_{\rm ref}/C$ is close to 1 is that the lower and the upper bounds in DT depend on the value $(\gamma_b - \gamma_a)/\gamma_a$ and do not depend on $C$ itself.
This means that, when the range $[C_a,C_b]$ is somewhat large, the performance of DT deteriorates.
\section{Equivalence between a special case of BT1 and the method in Wang et al. \cite{Wang13e}}
\label{app:equivalence.DVI}
When we use the reference solution $\bm w^*_{[C_{\rm ref}]}$ as both the feasible solution and the (reference) optimal solution, the lower bound of BT1 is written as \begin{eqnarray*} \ell^{\rm (BT1)}_{[C]i}
=
\frac{C + C_{\rm ref}}{2 C_{\rm ref}} \bm z_i^\top \bm w^*_{[C_{\rm ref}]}
-
\frac{C - C_{\rm ref}}{2 C_{\rm ref}} \| \bm w^*_{[C_{\rm ref}]} \| \| \bm z_i \| \end{eqnarray*}
Using the relationships described in \S \ref{subsec:kernelization}, the dual form of the lower bound is written as \begin{eqnarray}
\label{eq:DVI.equiv.2} \ell^{\rm (BT1)}_{[C]i}
=
\frac{C + C_{\rm ref}}{2 C_{\rm ref}} (\bm Q \bm \alpha^*_{[C_{\rm ref}]})_i
-
\frac{C - C_{\rm ref}}{2 C_{\rm ref}}
\sqrt{\bm \alpha^{*\top}_{[C_{\rm ref}]} \bm Q \bm \alpha^*_{[C_{\rm ref}]} Q_{ii}}. \end{eqnarray}
After a change of variables, \eq{eq:DVI.equiv.2} is easily shown to be equivalent to the first equation in Corollary 11 in \cite{Wang13e}.
Note that we derive BT1 in the primal solution space, while Wang et al. \cite{Wang13e} derived the identical test in the dual space.
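For concreteness, the dual-form bound \eq{eq:DVI.equiv.2} can be evaluated with a few lines of linear algebra; the following NumPy sketch (an illustration under our notation, assuming $\bm Q$ and $\bm \alpha^*_{[C_{\rm ref}]}$ are given) computes it for all samples at once. Samples whose lower bound exceeds 1 can safely be assigned to $\cR$.
\begin{verbatim}
import numpy as np

def bt1_lower_bounds(Q, alpha_ref, C, C_ref):
    # Q[i, j] = y_i y_j K(x_i, x_j); alpha_ref is the dual optimum at C_ref.
    Qa = Q @ alpha_ref
    quad = float(alpha_ref @ Qa)             # alpha^T Q alpha >= 0 (Q is PSD)
    return ((C + C_ref) / (2.0 * C_ref)) * Qa \
        - ((C - C_ref) / (2.0 * C_ref)) * np.sqrt(quad * np.diag(Q))
\end{verbatim}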
\section{$\eps$-approximation Path Procedure} \label{app:eps-path} The $\eps$-path algorithm enables us to compute an SVM regularization path such that the relative approximation error between two consecutive solutions is bounded by a small constant $\eps$ (we set $\eps = 10^{-3}$).
Precisely speaking, the sequence of the regularization parameters $\{C_t\}_{t \in \NN_T}$ produced by the $\eps$-path algorithm has the property that, for any $C_{t - 1}$ and $C_{t}$, $t \in \{2, \ldots, T\}$, the former dual optimal solution $\bm \alpha^*_{[C_{t - 1}]}$ satisfies \begin{eqnarray} \label{eq:relative.epsilon.approximation} \frac{
| \cD(\bm \alpha_{[C]}^*)
- \cD( {\scriptstyle\frac{C}{C_{t-1}}} \bm \alpha_{[C_{t - 1}]}^*)|
}{
\cD(\bm \alpha_{[C]}^*)
} \le \eps
~
\forall
~
C \in [C_{t - 1}, C_t], \end{eqnarray} where $\cD$ is the dual objective function defined in \eq{eq:d.SVM.prob.}.
This property roughly implies that the optimal solution $\bm \alpha^*_{[C_{t - 1}]}$ is a reasonably good approximate solution within the range $C \in [C_{t - 1}, C_{t}]$.
Algorithm \ref{alg:eps.app.path} describes the regularization path computation procedure with the safe sample screening and the $\eps$-path algorithms.
Given $\bm w^*_{[C_{t - 1}]}$, the $\eps$-path algorithm finds the largest $C_t$ such that any solutions between $[C_{t - 1}, C_t]$ can be approximated by the current solution in the sense of \eq{eq:relative.epsilon.approximation}.
Then, the safe sample screening rules for $\bm w^*_{[C_t]}$ are constructed by using $\bm w^*_{[C_{t - 1}]}$ as the reference solution.
After screening out a subset of the samples, an SVM solver ({LIBSVM} and {LIBLINEAR} in our experiments) is applied to the reduced set of the samples to obtain $\bm w^*_{[C_t]}$.
\begin{algorithm}[h] \caption{SVM regularization path computation with the safe sample screening and the $\eps$-path algorithms} \label{alg:eps.app.path} \begin{algorithmic}[1] \Require{Training set $\{(\bm x_i,y_i)\}_{i\in\bN_n}$}, the largest regularization parameter $C_T$. \Ensure{Regularization path $\{\bm w_{[C_t]}^*\}_{t \in \NN_T}$}.
\State Compute $C_{\rm min}$. \State $t \leftarrow 1,~C_t \leftarrow C_{\rm min},~\bm \alpha_{[C_t]}^* \leftarrow C_{t} \ones$. \While {$\cL \neq \emptyset$ and $C_t < C_T$}
\State $t \leftarrow t+1$.
\State Compute the next $C_t$ by the $\eps$-path algorithm.
\State Construct the safe rules for $C_t$ by using $\bm w^*_{[C_{t - 1}]}$.
\State Screen out a subset of the samples by those rules.
\State Compute
$\bm w^*_{[C_t]}$
by an SVM solver.
\EndWhile \end{algorithmic} \end{algorithm}
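In code, Algorithm \ref{alg:eps.app.path} amounts to a loop of the following shape (a schematic sketch only; next_C_eps_path, screen_samples, and solve_svm are hypothetical placeholders for the $\eps$-path step, the rule construction/screening step, and the external solver, and the $\cL \neq \emptyset$ stopping test is omitted for brevity):
\begin{verbatim}
def regularization_path(data, C_min, C_T, next_C_eps_path,
                        screen_samples, solve_svm):
    # Warm-started path computation with safe sample screening.
    C = C_min
    w = solve_svm(data, C, warm_start=None)
    path = [(C, w)]
    while C < C_T:
        C_next = min(next_C_eps_path(w, C), C_T)   # largest C covered by eps
        reduced = screen_samples(data, w, C_next)  # safe rules, reference w
        w = solve_svm(reduced, C_next, warm_start=w)
        path.append((C_next, w))
        C = C_next
    return path
\end{verbatim}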
\ifCLASSOPTIONcaptionsoff
\fi
\end{document} | arXiv |
Existence of positive solutions for a semilinear Schrödinger equation in \(\mathbb{R}^{N}\)
Houqing Fang1,2 &
Jun Wang2
Boundary Value Problems volume 2015, Article number: 9 (2015)
In this paper, we study the existence of multi-bump solutions for the semilinear Schrödinger equation \(-\Delta u+(1+\lambda a(x))u=(1-\lambda b(x))|u|^{p-2}u\), \(\forall u\in H^{1}(\mathbb{R}^{N})\), where \(N\geq1\), \(2< p<2N/(N-2)\) if \(N\geq3\), \(p>2\) if \(N=2\) or \(N=1\), \(a(x)\in C(\mathbb{R}^{N})\) and \(a(x)>0\), \(b(x)\in C(\mathbb{R}^{N})\) and \(b(x)>0\). For any \(n\in\mathbb{N}\), we prove that there exists \(\lambda(n)>0\) such that, for \(0<\lambda<\lambda(n)\), the equation has an n-bump positive solution. Moreover, the equation has more and more multi-bump positive solutions as \(\lambda\rightarrow0\).
Introduction and main results
In this paper we study the following time-independent semilinear Schrödinger equation:
$$ (\mathcal{S}_{\lambda})\quad -\Delta u+\bigl(1+\lambda a(x)\bigr)u=\bigl(1- \lambda b(x)\bigr)|u|^{p-2}u,\quad \forall u\in H^{1}\bigl( \mathbb{R}^{N}\bigr), $$
where \(N\geq1\), \(2< p<2^{*}\), 2∗ is the critical Sobolev exponent defined by \(2^{*}=\frac{2N}{N-2}\) if \(N\geq3\) and \(2^{*}=\infty\) if \(N=2\) or \(N=1\), and \(\lambda>0\) is a parameter.
This kind of equation arises in many fields of physics. For the following nonlinear Schrödinger equation:
$$ i\hbar\frac{\partial\varphi}{\partial t}=-\frac{\hbar^{2}}{2m}\Delta\varphi+N(x) \varphi-f\bigl(x,\vert \varphi \vert \bigr)\varphi, $$
where i is the imaginary unit, Δ is the Laplacian operator, and \(\hbar>0\) is the Planck constant. A standing wave solution of (1.1) is a solution of the form
$$ \varphi(x,t)=u(x)e^{-\frac{iEt}{\hbar}}, \quad u(x)\in\mathbb{R}. $$
Thus, \(\varphi(x,t)\) solves (1.1) if and only if \(u(x)\) solves the equation
$$ -\hbar^{2}\Delta u+V(x)u=g(x,u), $$
where \(V(x)=N(x)-E\) and \(g(x,u)=f(x,|u|)u\). The function V is called the potential of (1.2). If \(g(x,u)=(1-\lambda b(x))|u|^{p-2}u\), then (1.2) can be written as
$$ -\hbar^{2}\Delta u+V(x)u=\bigl(1-\lambda b(x) \bigr)|u|^{p-2}u. $$
If \(\hbar=1\) and \(V(x)=1+\lambda a(x)\), then (1.3) is reduced to (\(\mathcal{S}_{\lambda}\)).
The nonlinear Schrödinger equation (\(\mathcal{S}_{\lambda}\)) models some phenomena in physics, for example, in nonlinear optics, in plasma physics, and in condensed matter physics, and the nonlinear term simulates the interaction effect, called the Kerr effect in nonlinear optics, among a large number of particles; see, for example, [1, 2]. The case of \(p=4\) and \(N=3\) is of particular physical interest, and in this case the equation is called the Gross-Pitaevskii equation; see [3].
The limiting equation of (\(\mathcal{S}_{\lambda}\)) is
$$ -\Delta u+u=|u|^{p-2}u, \quad u\in H^{1}\bigl( \mathbb{R}^{N}\bigr), $$
as \(\lambda\rightarrow0\). It is well known that (1.4) has a unique positive radial solution z, which decays exponentially at ∞. This z will serve as a building block to construct multi-bump solutions of (\(\mathcal{S}_{\lambda}\)). For \(n\in\mathbb{N}\), let \(y_{1},\ldots,y_{n}\in\mathbb{R}^{N}\) be the sufficiently separated points. The profile of the function \(\sum_{i=1}^{n}z(x-y_{i})\) resembles n bumps and accordingly a solution of (\(\mathcal{S}_{\lambda}\)) which is close to \(\sum_{i=1}^{n}z(x-y_{i})\) in \(H^{1}(\mathbb{R}^{N})\) is called an n-bump solution.
As we know, multi-bump solutions arise as solutions of (1.2) as \(\hbar\rightarrow0\), under the assumption that V has several critical points; see for example [4–7]. Particularly, in the interesting paper [5], the authors proved that the solutions of (1.2) have several peaks near the point of a maximum of V. These peaks converge to the maximum of V as \(\hbar\rightarrow0\). Actually, there have been enormous studies on the solutions of (1.2) as \(\hbar\rightarrow0\), which exhibit a concentration phenomenon and are called semiclassical states. In the early results, most of the researchers focused on the case \(\inf_{x\in\mathbb{R}^{N}}V(x)>0\) and g is subcritical. Here and in the sequel, we say g is subcritical if \(g(x,u)\leq C|u|^{p-1}\) for \(2\leq p<2^{*}\) with \(2^{*}:=2N/(N-2)\) (\(N\geq3\)), and g is critical or supercritical if \(c_{1}|u|^{2^{*}-1}\leq g(x,u)\leq c_{2}|u|^{2^{*}-1}\) or only \(c_{1}|u|^{2^{*}-1}\leq g(x,u)\) for all large \(|u|\). In the case of \(\inf_{x\in\mathbb{R}^{N}}V(x)>0\), Floer and Weinstein in [8] first considered \(N=1\), \(g(u)=u^{3}\). Using the Lyapunov-Schmidt reduction argument, they proved that the system (1.2) has spike solutions, which concentrate near a nondegenerate critical point of the potential V. This result was extended to the high dimension case with \(N\geq2\) and for \(g(u)=|u|^{p-2}u\) by Oh [7, 9]. If the potential V has a nondegenerate critical point, Rabinowitz [10] obtained the existence result for (1.2) with ħ small, provided that \(0<\inf_{x\in\mathbb{R}^{N}}V(x)<\liminf_{|x|\rightarrow\infty}V(x)\). Using a global variational argument, Del Pino and Felmer [11, 12] established the existence of multi-peak solutions having exactly k maximum points provided that there are k disjoint open bounded sets \(\Omega_{i}\) such that \(\inf_{x\in\partial\Omega_{i}}V(x)>\inf_{x\in\Omega_{i}}V(x)\), each \(\Omega_{i}\) having one peak concentrating at its bottom. For the subcritical case, Refs. [1, 6, 13–15] also proved that the solutions of (1.2) are concentrated at critical points of V. There have also been recent results on the existence of solutions concentrating on manifolds; for instance, see [16–18] and the references therein.
If g is subcritical, Refs. [19, 20] first obtained the semiclassical solutions of (1.2) with critical frequency, i.e., \(\inf_{x\in\mathbb{R}^{N}}V(x)=0\). They exhibited new concentration phenomena for bound states and their results were extended and generalized in [3, 21, 22]. Later, if \(\inf_{x\in\mathbb{R}^{N}}V(x)=0\), Ding and Lin [23] obtained semiclassical states of (1.2) when the nonlinearity g is critical. Recently, if the potentials V change sign, that is, \(\inf_{x\in\mathbb{R}^{N}}V(x)<0\), Refs. [24, 25] proved that the system (1.2) has semiclassical states.
Some researchers had also obtained multi-bump solutions for the equation
$$ -\Delta u+V(x)u=f(x,u), \quad u\in H^{1}\bigl( \mathbb{R}^{N}\bigr), $$
where V and f are \(T_{i}\) periodic in \(x_{i}\). Coti Zelati and Rabinowitz [26] first constructed multi-bump solutions for the Schrödinger equation (1.5). The building blocks are one-bump solutions at the mountain pass level and the existence of such solutions as well as multi-bump solutions is guaranteed by a nondegeneracy assumption of the solutions near the mountain pass level. Later, under the same nondegeneracy assumption, Coti Zelati and Rabinowitz in [27] constructed multi-bump solutions for periodic Hamiltonian systems. Multi-bump solutions have also been obtained for asymptotically periodic Schrödinger equations by Alama and Li [28]. For subsequent studies in this direction, for example, see [29–35] and the references therein. Recently, Refs. [36–38] also proved the existence of multi-bump solutions in other elliptic equations.
In this paper, we are interested in constructing multi-bump solutions of (\(\mathcal{S}_{\lambda}\)) with λ small enough. Similar results have been obtained in [39, 40] for the equations
$$ -\Delta u+\bigl(1+\lambda a(x)\bigr)u=|u|^{p-2}u, \quad u \in H^{1}\bigl(\mathbb{R}^{N}\bigr) $$
$$ -\Delta u+u=\bigl(1-\lambda a(x)\bigr)|u|^{p-2}u,\quad u\in H^{1}\bigl(\mathbb{R}^{N}\bigr). $$
To state the main result for (\(\mathcal{S}_{\lambda}\)), we need the following conditions on the functions a and b:
(\(\mathcal{R}_{1}\)):
\(a(x)>0\) and \(a(x) \in C(\mathbb{R}^{N})\), \(b(x)>0\) and \(b(x) \in C(\mathbb{R}^{N})\), and
$$ \lim_{|x|\rightarrow\infty}a(x)=\lim_{|x|\rightarrow\infty}b(x)=0. $$
(\(\mathcal{R}_{2}\)):
One of the following holds: (i) \(\lim_{|x|\rightarrow\infty}\frac{\ln(a(x))}{|x|}=0\); (ii) \(\lim_{|x|\rightarrow\infty}\frac{\ln(b(x))}{|x|}=0\).
Theorem 1.1
Suppose that the assumptions (\(\mathcal{R}_{1}\)) and (\(\mathcal {R}_{2}\)) hold. Then for any positive integer n there exists \(\lambda(n)>0\) such that, for \(0<\lambda<\lambda(n)\), the system (\(\mathcal{S}_{\lambda}\)) has an n-bump positive solution. As a consequence, for any positive integer n, there exists \(\lambda_{1}(n)>0\) such that, for \(0<\lambda<\lambda_{1}(n)\), the system (\(\mathcal{S}_{\lambda}\)) has at least n positive solutions.
Similar to [39, 40], the solutions in Theorem 1.1 do not concentrate near any point in the space. Instead, the bumps of the solutions we obtain are separated far apart and the distance between any pair of bumps goes to infinity as \(\lambda\rightarrow0\). The size of each bump does not shrink and is fixed as \(\lambda\rightarrow0\). This is in sharp contrast to the concentration phenomenon described above. This phenomenon has been observed by D'Aprile and Wei in [41] for a Maxwell-Schrödinger system.
We shall use the variational reduction method to prove the main results. Our argument is partially inspired by [39–42]. This paper is organized as follows. In Section 2, preliminary results are revisited. We prove Theorem 1.1 in Section 3.
Some preliminary works
Variational framework
In this section, we shall establish a variational framework for the system (\(\mathcal{S}_{\lambda}\)). For convenience of notation, let C and \(C_{i}\) denote various positive constants which may vary even within the same line. In the Hilbert space \(H^{1}(\mathbb{R}^{N})\), we shall use the usual inner product,
$$ \langle u,v\rangle=\int_{\mathbb{R}^{N}}\nabla u\cdot\nabla v+uv, $$
and the induced norm \(\|u\|=\langle u,u\rangle^{\frac{1}{2}}\). Let \(|\cdot|_{q}\) denote the usual \(L^{q}(\mathbb{R}^{N})\)-norm and \((\cdot, \cdot)_{2}\) be the usual \(L^{2}(\mathbb{R}^{N})\)-inner product. Let \(n\in\mathbb{N}\). We shall use \(\sum_{i< j}\) and \(\sum_{i\neq j}\) to represent summation over all subscripts i and j satisfying \(1\leq i< j\leq n\) and \(1\leq i\neq j\leq n\), respectively. Let us first introduce some basic inequalities which will be used later.
The following four lemmas are taken from [39, 40].
Lemma 2.1
For \(q>1\), there exists \(C>0\) such that, for any real numbers a and b,
$$ \bigl\vert |a+b|^{q}-|a|^{q}-|b|^{q}\bigr\vert \leq C|a|^{q-1}|b|+C|b|^{q-1}|a|. $$
Lemma 2.2
For \(q\geq2\), there exists \(C>0\) such that, for any \(a>0\) and \(b\in\mathbb{R}\),
$$ \bigl\vert |a+b|^{q}-a^{q}-qa^{q-1}b\bigr\vert \leq C \bigl(a^{q-2}|b|^{2}+|b|^{q} \bigr). $$
Lemma 2.3
For \(q\geq2\), \(n\in\mathbb{N}\), and \(a_{i}\geq0\), \(i=1,\ldots,n\),
$$ \Biggl(\sum_{i=1}^{n}a_{i} \Biggr)^{q}\geq\sum_{i=1}^{n}a_{i}^{q}+(q-1) \sum_{i\neq j}^{n}a_{i}^{q-1}a_{j} $$
$$ \Biggl(\sum_{i=1}^{n}a_{i} \Biggr)^{q}\geq\sum_{i=1}^{n}a_{i}^{q}+q \sum_{1\leq i< j\leq n}^{n}a_{i}^{q-1}a_{j}. $$
Lemma 2.4
For \(q\geq2\), there exists \(C>0\) such that, for any \(a_{i}\geq0\), \(i = 1,\ldots,n\),
$$ \Biggl[ \Biggl(\sum_{i=1}^{n}a_{i} \Biggr)^{q-1}-\sum_{i=1}^{n}a_{i}^{q-1} \Biggr]^{\frac{q}{q-1}}\leq C\sum_{i\neq j}a_{i}^{q-1}a_{j}. $$
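These elementary inequalities are easy to spot-check numerically; for instance, the following sketch (assuming Python with NumPy, purely as an illustration) verifies the first inequality of Lemma 2.3 on random nonnegative inputs:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
q, n = 3.5, 4
for _ in range(1000):
    a = rng.uniform(0.0, 2.0, size=n)            # a_i >= 0
    cross = sum(a[i] ** (q - 1) * a[j]
                for i in range(n) for j in range(n) if i != j)
    assert a.sum() ** q >= (a ** q).sum() + (q - 1) * cross - 1e-9
\end{verbatim}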
Recall that, for \(2< p<2^{*}\), the unique positive solution of the equation
$$ -\Delta u+u=|u|^{p-2}u, \quad u\in H^{1}\bigl( \mathbb{R}^{N}\bigr) $$
has the following properties; see, for example, [40, 43–45].
Lemma 2.5
If \(2< p<2^{*}\), then every positive solution of (2.1) has the form \(z_{y}:=z(\cdot-y)\) for some \(y\in\mathbb{R}^{N}\), where \(z\in C^{\infty}(\mathbb{R}^{N})\) is the unique positive radial solution of (2.1) which satisfies, for some \(c>0\),
$$ z(r)r^{\frac{N-1}{2}}e^{r}\rightarrow c>0,\qquad z'(r)r^{\frac{N-1}{2}}e^{r} \rightarrow-c>0, \quad \textit{as } r=|x|\rightarrow\infty. $$
Furthermore, if \(\beta_{1}\leq\cdots\leq\beta_{n}\leq\cdot\cdot\cdot\) are the eigenvalues of the problem
$$ -\Delta v+v=\beta z^{p-2}v, \quad v\in H^{1} \bigl(\mathbb{R}^{N}\bigr), $$
then \(\beta_{1}=1\), \(\beta_{2}=p-1\), and the eigenspaces corresponding to \(\beta_{1}\) and \(\beta_{2}\) are spanned by z and \(\{\partial z/\partial x_{\alpha}\mid \alpha=1,\ldots,N\}\), respectively.
We shall use \(z_{y}\) as building blocks to construct multi-bump solutions of (\(\mathcal{S}_{\lambda}\)). For \(y_{i}, y_{j}\in\mathbb{R}^{N}\), the identity
$$ \int_{\mathbb{R}^{N}}z_{y_{i}}^{p-1}z_{y_{j}} = \langle z_{y_{i}},z_{y_{j}}\rangle=\int_{\mathbb{R}^{N}}z_{y_{i}}z_{y_{j}}^{p-1} $$
will be frequently used in the sequel. The following lemma is a consequence of Lemma 2.4 in [46] (see also Lemma II.2 of [47]).
Lemma 2.6
There exists a positive constant \(c>0\) such that, as \(|y_{i}-y_{j}|\rightarrow\infty\),
$$ \int_{\mathbb{R}^{N}}z_{y_{i}}^{p-1}z_{y_{j}}\sim c|y_{i}-y_{j}|^{-\frac{N-1}{2}}e^{-|y_{i}-y_{j}|}. $$
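For \(N=1\) this asymptotic can be observed directly, since the unique positive solution of (2.1) is then given explicitly by \(z(x)=(p/2)^{1/(p-2)}\operatorname{sech}^{2/(p-2)}((p-2)x/2)\); the following sketch (assuming NumPy; the quadrature grid is our own choice) shows that \(e^{|y|}\int z^{p-1}(x)z(x-y)\,dx\) approaches a positive constant, the polynomial factor \(|y|^{-(N-1)/2}\) being absent for \(N=1\):
\begin{verbatim}
import numpy as np

p = 4.0                                        # then z(x) = sqrt(2) * sech(x)
z = lambda x: (p / 2.0) ** (1.0 / (p - 2.0)) \
    * np.cosh((p - 2.0) * x / 2.0) ** (-2.0 / (p - 2.0))

x = np.linspace(-40.0, 40.0, 400001)
dx = x[1] - x[0]
for R in (6.0, 9.0, 12.0, 15.0):
    I = np.sum(z(x) ** (p - 1.0) * z(x - R)) * dx   # Riemann sum quadrature
    print(R, np.exp(R) * I)                    # ratio stabilizes to a constant c
\end{verbatim}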
For \(h>0\), \(n\geq2\), and \(n\in\mathbb{N}\), define
$$ \mathcal {D}_{h}=\bigl\{ (y_{1},\ldots,y_{n})\in \bigl(\mathbb{R}^{N}\bigr)^{n}\mid|y_{i}-y_{j}|>h \text{ for } i\neq j\bigr\} . $$
For convenience, we make the convention
$$ \mathcal{D}_{h}=\mathbb{R}^{N},\quad \text{if } n=1. $$
For \(y=(y_{1},\ldots,y_{n})\in\mathcal{D}_{h}\), denote
$$\begin{aligned}& u_{y}(x)=\sum_{i=1}^{n}z(x-y_{i}), \\& \mathcal{T}_{y}= \biggl\{ \frac{\partial z(\cdot-y_{i})}{\partial x_{\alpha}}\Bigm|\alpha=1,\ldots,N, i=1, \ldots,n \biggr\} \end{aligned}$$
$$ \mathcal{W}_{y}= \bigl\{ v\in H^{1}\bigl( \mathbb{R}^{N}\bigr)\mid\langle v,u\rangle=0, \forall u\in \mathcal{T}_{y} \bigr\} . $$
Then \(H^{1}(\mathbb{R}^{N})=\mathcal{T}_{y}\oplus\mathcal{W}_{y}\). Set \(P_{\lambda}(x)=1-\lambda b(x)\), \(V_{\lambda}(x)=1+\lambda a(x)\), \(\mathcal{N}_{\lambda}=(p-1)(-\Delta+V_{\lambda})^{-1}\), and \(\mathcal{N}_{0}=\mathcal{N}\). For \(y\in\mathcal{D}_{h}\) and \(\varphi\in H^{1}(\mathbb{R}^{N})\), define
$$ \mathcal{K}_{y}\varphi=\varphi-\sum_{i=1}^{n}\mathcal {N}\bigl(z^{p-2}(\cdot-y_{i})\varphi\bigr)+\sum _{i=1}^{n}L_{i}\varphi, $$
$$ \sum_{i=1}^{n}L_{i}\varphi=\sum _{i\neq j}\sum_{\alpha=1}^{N} \biggl\langle \mathcal {N}\bigl(z^{p-2}(\cdot-y_{j})\varphi \bigr),\frac{\partial z(\cdot-y_{i})}{\partial x_{\alpha}} \biggr\rangle \biggl\Vert \frac{\partial z(\cdot-y_{i})}{\partial x_{\alpha}}\biggr\Vert ^{-2}\frac{\partial z(\cdot-y_{i})}{\partial x_{\alpha}}. $$
Note that \(\mathcal{K}_{y}|_{\mathcal{W}_{y}}: \mathcal {W}_{y}\rightarrow\mathcal{W}_{y}\) has the form identity minus compact.
Lemma 2.7
(See Lemma 2.3 of [40])
If \(h\rightarrow\infty\), then
$$ |u_{y}|^{p-2}-\sum_{i=1}^{n}z^{p-2}( \cdot-y_{i})\rightarrow0 $$
in \(L^{p/(p-2)}(\mathbb{R}^{N})\) uniformly in \(y\in\mathcal {D}_{h}\).
Lemma 2.8
Let \(u, v\in H^{1}(\mathbb{R}^{N})\). If \(v\rightarrow0\), then
$$ |u+v|^{p-1}-|u|^{p-2}\rightarrow0 $$
in \(L^{p/(p-2)}(\mathbb{R}^{N})\) uniformly in u in any bounded set.
Lemma 2.9
There exist \(h_{0}>0\) and \(\eta_{0}>0\) such that, for \(h>h_{0}\) and \(y\in\mathcal {D}_{h}\), \(\mathcal{K}_{y}|_{\mathcal{W}_{y}}: \mathcal {W}_{y}\rightarrow\mathcal{W}_{y}\) is invertible and
$$ \bigl\Vert (\mathcal{K}_{y}|_{\mathcal {W}_{y}})^{-1}\bigr\Vert \leq\eta_{0}. $$
Lemma 2.10
Let \(v\in H^{1}(\mathbb{R}^{N})\). If \(\lambda\rightarrow0\), \(v\rightarrow0\), and \(h\rightarrow\infty\), then
$$ \sup_{y\in\mathcal{D}_{h},\varphi\in H^{1}(\mathbb{R}^{N}),\|\varphi\|=1}\bigl\Vert \mathcal {K}_{y}\varphi- \bigl(\varphi-\mathcal {N}_{\lambda}\bigl(P_{\lambda}|u_{y}+v|^{p-2} \varphi\bigr)\bigr)\bigr\Vert \rightarrow0 $$
$$ \sup_{y\in\mathcal{D}_{h},\varphi\in H^{1}(\mathbb{R}^{N}),\|\varphi\|=1}\bigl\Vert \mathcal {K}_{y}\varphi- \bigl(\varphi-\mathcal {N}\bigl(P_{\lambda}|u_{y}+v|^{p-2} \varphi\bigr)\bigr)\bigr\Vert \rightarrow0. $$
By the definition of \(\mathcal{K}_{y}\), one has
$$\begin{aligned} \mathcal{K}_{y}\varphi-\bigl(\varphi-\mathcal {N}_{\lambda}\bigl(P_{\lambda}|u_{y}+v|^{p-2}\varphi \bigr)\bigr) =&\mathcal {N}_{\lambda}\bigl(|u_{y}+v|^{p-2} \varphi\bigr)-\sum_{j=1}^{n}\mathcal {N} \bigl(z^{p-2}(\cdot-y_{i})\varphi\bigr) \\ &{} -\lambda\mathcal {N}_{\lambda}\bigl(b(x)|u_{y}+v|^{p-2} \varphi\bigr)+\sum_{i=1}^{n}L_{i} \varphi. \end{aligned}$$
Obviously, \(\mathcal{N}_{\lambda}\rightarrow\mathcal{N}\) in \(\mathcal{L}(L^{\frac{p}{p-1}}(\mathbb{R}^{N}), H^{1}(\mathbb{R}^{N}))\) as \(\lambda\rightarrow0\). Therefore, if \(\lambda\rightarrow0\), \(v\in H^{1}(\mathbb{R}^{N})\) with \(v\rightarrow0\), and \(h\rightarrow\infty\), then for \(\psi,\varphi \in H^{1}(\mathbb{R}^{N})\), and uniformly in \(y\in\mathcal{D}_{h}\),
$$\begin{aligned}& \Biggl\vert \Biggl\langle \mathcal {N}_{\lambda} \bigl(|u_{y}+v|^{p-2}\varphi\bigr)-\sum _{j=1}^{n}\mathcal {N}\bigl(z^{p-2}( \cdot-y_{i})\varphi\bigr),\psi \Biggr\rangle \Biggr\vert \\& \quad =\bigl\vert \bigl\langle (\mathcal{N}_{\lambda}-\mathcal {N}) \bigl(|u_{y}+v|^{p-2}\varphi\bigr),\psi \bigr\rangle \bigr\vert +\bigl\vert \bigl\langle \mathcal {N}\bigl(\bigl(|u_{y}+v|^{p-2}-|u_{y}|^{p-2} \bigr)\varphi\bigr),\psi \bigr\rangle \bigr\vert \\& \qquad {}+\Biggl\vert \Biggl\langle \mathcal {N}\Biggl(\Biggl(|u_{y}|^{p-2}- \sum_{j=1}^{n}z^{p-2}( \cdot-y_{j})\Biggr)\varphi\Biggr),\psi \Biggr\rangle \Biggr\vert \\& \quad \leq\bigl\Vert (\mathcal{N}_{\lambda}-\mathcal {N}) \bigl(|u_{y}+v|^{p-2}\varphi\bigr)\bigr\Vert \|\psi\|+C\bigl\vert \bigl(|u_{y}+v|^{p-2}-|u_{y}|^{p-2} \bigr)\bigr\vert _{L^{\frac {p}{p-2}}(\mathbb{R}^{N})} \|\varphi\|\|\psi\| \\& \qquad {}+C\Biggl\vert \Biggl(|u_{y}|^{p-2}-\sum _{j=1}^{n}z^{p-2}(\cdot -y_{j}) \Biggr)\Biggr\vert _{L^{\frac{p}{p-2}}(\mathbb{R}^{N})} \|\varphi\|\|\psi\| \\& \quad \rightarrow0, \end{aligned}$$
as a consequence of Lemmas 2.7 and 2.8. Moreover, by Lemma 2.6, for \(|y_{i}-y_{j}|\rightarrow\infty\) (\(i\neq j\)), one sees that
$$ \sup_{y\in\mathcal {D}_{h}}\Biggl\Vert \sum _{i=1}^{n}L_{i}\varphi\Biggr\Vert \rightarrow0. $$
For \(\psi,\varphi\in H^{1}(\mathbb{R}^{N})\),
$$\begin{aligned} \lambda\bigl\vert \bigl\langle \mathcal {N}_{\lambda} \bigl(b(x)|u_{y}+v|^{p-2}\varphi\bigr),\psi \bigr\rangle \bigr\vert =&\lambda\bigl\vert \bigl\langle (\mathcal{N}_{\lambda}-\mathcal {N}) \bigl(b(x)|u_{y}+v|^{p-2}\varphi\bigr),\psi\bigr\rangle \bigr\vert \\ &{} +\bigl\vert \bigl\langle \mathcal{N}\bigl(b(x)|u_{y}+v|^{p-2} \varphi\bigr),\psi\bigr\rangle \bigr\vert \\ \leq& c\lambda\bigl\Vert (\mathcal{N}_{\lambda}-\mathcal {N}) \bigl(|u_{y}+v|^{p-2}\varphi\bigr)\bigr\Vert \|\psi\| \\ &{} +c\lambda\|u_{y}+v\|\|\varphi\|\|\psi\| \\ \rightarrow&0, \end{aligned}$$
as \(\lambda\rightarrow0\). We infer from (2.3)-(2.6) that, if \(\lambda\rightarrow0\), \(v\in H^{1}(\mathbb{R}^{N})\) with \(v\rightarrow0\), and \(h\rightarrow\infty\),
$$ \sup_{y\in\mathcal{D}_{h},\varphi\in H^{1}(\mathbb{R}^{N}),\|\varphi\|=1}\bigl\Vert \mathcal {K}_{y}\varphi- \bigl(\varphi-\mathcal {N}_{\lambda}\bigl(P_{\lambda}|u_{y}+v|^{p-2} \varphi\bigr)\bigr)\bigr\Vert \rightarrow0. $$
Similar to the above arguments, one can easily obtain the second conclusion of this lemma. □
Clearly, the energy functional corresponding to the system (\(\mathcal{S}_{\lambda}\)) is defined by
$$ \Phi_{\lambda}(u)=\frac{1}{2}\int_{\mathbb{R}^{N}}\bigl(| \nabla u|^{2}+V_{\lambda}|u|^{2}\bigr)-\frac{1}{p} \int_{\mathbb {R}^{N}}P_{\lambda}|u|^{p}\quad \text{for } u \in H^{1}\bigl(\mathbb{R}^{N}\bigr), $$
where \(V_{\lambda}=(1+\lambda a(x))\) and \(P_{\lambda}=(1-\lambda b(x))\). It is easy to see that the critical points of \(\Phi_{\lambda}\) are solutions of (\(\mathcal{S}_{\lambda}\)). In the following, we shall use a Lyapunov-Schmidt reduction argument to find critical points of \(\Phi_{\lambda}\). The first step is to convert the problem of finding critical points of \(\Phi_{\lambda}\) to a finite-dimensional problem, which consists of the following two lemmas.
Lemma 2.11
There exist \(\lambda_{0}>0\) and \(H_{0}>0\) such that, for \(0<\lambda<\lambda_{0}\) and \(h>H_{0}\), there exists a \(C^{1}\)-map
$$ v_{h,\lambda}:\mathcal{D}_{h}\rightarrow H^{1}\bigl( \mathbb{R}^{N}\bigr), $$
depending on h and λ, such that
for any \(y\in\mathcal{D}_{h}\), \(v_{h,\lambda}\in \mathcal {W}_{y}\);
for any \(y\in\mathcal{D}_{h}\), \(\mathcal {P}_{y}\nabla\Phi_{\lambda}(u_{y}+v_{h,\lambda})=0\), where \(\mathcal {P}_{y}:H^{1}(\mathbb{R}^{N})\rightarrow\mathcal{W}_{y}\) is the orthogonal projection onto \(\mathcal{W}_{y}\);
\(\lim_{\lambda\rightarrow0,h\rightarrow\infty}\| v_{h,\lambda,y}\|=0\) uniformly in \(y\in\mathcal{D}_{h}\); \(\lim_{|y|\rightarrow\infty}\|v_{h,\lambda,y}\|=0\) if \(n=1\).
Decreasing \(\lambda_{0}\) and increasing \(H_{0}\) if necessary, we have the following result.
Lemma 2.12
For \(0<\lambda<\lambda_{0}\) and \(h>H_{0}\), if \(y^{0}=(y_{1}^{0},\ldots,y_{n}^{0})\) is a critical point of \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\), then \(u_{y^{0}}+v_{h,\lambda,y^{0}}\) is a critical point of \(\Phi_{\lambda}\).
Using Lemmas 2.9 and 2.10 and repeating the arguments of Lemmas 2.6 and 2.7 in [40], one can easily prove Lemmas 2.11 and 2.12.
Estimates on \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\) and \(v_{h,\lambda,y}\)
In order to prove Theorem 1.1 in the next section, we first need to estimate \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\) and \(v_{h,\lambda,y}\). Denote \(c_{0}=\Phi_{0}(z)\), where \(\Phi_{0}\) is the functional \(\Phi_{\lambda}\) with \(\lambda=0\). Then
$$ c_{0}=\Phi_{0}(z)=\frac{1}{2}\int _{\mathbb{R}^{N}}\bigl(|\nabla z|^{2}+|z|^{2}\bigr)- \frac{1}{p}\int_{\mathbb{R}^{N}}|z|^{p}. $$
In the following, we first estimate \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\). Note that
$$\begin{aligned} \Phi_{\lambda}(u_{y}+v_{h,\lambda,y}) =& \frac{1}{2}\int_{\mathbb {R}^{N}}|\nabla u_{y}+\nabla v_{h,\lambda,y}|^{2}+\frac{1}{2}\int_{\mathbb{R}^{N}} \bigl(1+\lambda a(x)\bigr)|u_{y}+v_{h,\lambda,y}|^{2} \\ &{} -\frac{1}{p}\int_{\mathbb{R}^{N}}|u_{y}+v_{h,\lambda,y}|^{p} +\frac{\lambda}{p}\int_{\mathbb{R}^{N}}b(x)|u_{y}+v_{h,\lambda,y}|^{p}. \end{aligned}$$
A direct computation shows that
$$\begin{aligned}& \Phi_{\lambda}(u_{y}+v_{h,\lambda,y}) \\& \quad = \frac{1}{2}\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}\bigl\vert \nabla z(x-y_{i})\bigr\vert ^{2}+\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}\nabla z(x-y_{i})\cdot\nabla(v_{h,\lambda,y})+ \frac{1}{2}\int_{\mathbb {R}^{N}}\bigl\vert \nabla(v_{h,\lambda,y}) \bigr\vert ^{2} \\& \qquad {} +\sum_{i< j}\int_{\mathbb{R}^{N}}\nabla z(x-y_{i})\cdot\nabla z(x-y_{j})+\frac{1}{2}\sum _{i=1}^{n}\int_{\mathbb{R}^{N}}\bigl\vert z(x-y_{i})\bigr\vert ^{2} +\frac{1}{2}\int _{\mathbb{R}^{N}}|v_{h,\lambda,y}|^{2} \\& \qquad {} +\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}z(x-y_{i})\cdot v_{h,\lambda,y}+\sum _{i<j}\int_{\mathbb{R}^{N}}z(x-y_{i})\cdot z(x-y_{j}) \\& \qquad {} +\frac{\lambda}{2}\int_{\mathbb{R}^{N}}a(x)u_{y}^{2}+ \lambda \int_{\mathbb{R}^{N}}a(x)u_{y}v_{h,\lambda,y} + \frac{\lambda}{2}\int_{\mathbb{R}^{N}}a(x) (v_{h,\lambda,y})^{2} \\& \qquad {} -\frac{1}{p}\int_{\mathbb{R}^{N}}|u_{y}+v_{h,\lambda,y}|^{p} +\frac{\lambda}{p}\int_{\mathbb{R}^{N}}b(x)|u_{y}+v_{h,\lambda,y}|^{p}. \end{aligned}$$
By Lemma 2.11, we may assume that \(\|v_{h,\lambda,y}\|\leq1\). Taking \(a=u_{y}\) and \(b=v_{h,\lambda,y}\) in Lemma 2.2, we have
$$ \frac{1}{p}\int_{\mathbb{R}^{N}}|u_{y}+v_{h,\lambda,y}|^{p}= \frac {1}{p}\int_{\mathbb{R}^{N}}|u_{y}|^{p} + \int_{\mathbb{R}^{N}}(u_{y})^{p-1}v_{h,\lambda,y}+O \bigl(\|v_{h,\lambda ,y}\|^{2}\bigr) $$
$$ \frac{1}{p}\int_{\mathbb{R}^{N}}b(x)|u_{y}+v_{h,\lambda ,y}|^{p}= \frac{1}{p}\int_{\mathbb{R}^{N}}b(x)|u_{y}|^{p} +\int_{\mathbb{R}^{N}}b(x) (u_{y})^{p-1}v_{h,\lambda,y}+O \bigl(\| v_{h,\lambda,y}\|^{2}\bigr). $$
Here and in what follows, \(O(\|v_{h,\lambda,y}\|^{2})\) satisfies
$$ \bigl\vert O\bigl(\|v_{h,\lambda,y}\|^{2}\bigr)\bigr\vert \leq C \|v_{h,\lambda,y}\|^{2} $$
for some positive constant C independent of h, λ, y. Therefore, substituting (2.9) and (2.10) into (2.8), it follows that
$$\begin{aligned}& \Phi_{\lambda}(u_{y}+v_{h,\lambda,y}) \\& \quad = \frac{1}{2}\sum _{i=1}^{n}\int_{\mathbb{R}^{N}}\bigl\vert \nabla z(x-y_{i})\bigr\vert ^{2}+\sum _{i=1}^{n}\int_{\mathbb{R}^{N}}\nabla z(x-y_{i})\cdot\nabla(v_{h,\lambda,y})+\frac{1}{2}\int _{\mathbb {R}^{N}}\bigl\vert \nabla(v_{h,\lambda,y})\bigr\vert ^{2} \\& \qquad {} +\sum_{i< j}\int_{\mathbb{R}^{N}}\nabla z(x-y_{i})\cdot\nabla z(x-y_{j})+\frac{1}{2}\sum _{i=1}^{n}\int_{\mathbb{R}^{N}}\bigl\vert z(x-y_{i})\bigr\vert ^{2} +\frac{1}{2}\int _{\mathbb{R}^{N}}|v_{h,\lambda,y}|^{2} \\& \qquad {} +\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}z(x-y_{i})\cdot v_{h,\lambda,y}+\sum _{i<j}\int_{\mathbb{R}^{N}}z(x-y_{i})\cdot z(x-y_{j}) \\& \qquad {} +\frac{\lambda}{2}\int_{\mathbb{R}^{N}}a(x)u_{y}^{2}+ \lambda \int_{\mathbb{R}^{N}}a(x)u_{y}v_{h,\lambda,y} + \frac{\lambda}{2}\int_{\mathbb{R}^{N}}a(x) (v_{h,\lambda,y})^{2} \\& \qquad {} -\frac{1}{p}\int_{\mathbb{R}^{N}}(u_{y})^{p} -\int_{\mathbb{R}^{N}}(u_{y})^{p-1}v_{h,\lambda,y} + \frac{\lambda}{p}\int_{\mathbb{R}^{N}}b(x) (u_{y})^{p} \\& \qquad {} +\int_{\mathbb{R}^{N}}b(x) (u_{y})^{p-1}v_{h,\lambda,y}+O \bigl(\| v_{h,\lambda,y}\|^{2}\bigr). \end{aligned}$$
Denote
$$\begin{aligned} \mathcal{K}_{y} =&-\sum_{i=1}^{n} \int_{\mathbb{R}^{N}}\nabla z(x-y_{i})\cdot \nabla(v_{h,\lambda,y})-\frac{1}{2}\int_{\mathbb {R}^{N}}\bigl\vert \nabla(v_{h,\lambda,y})\bigr\vert ^{2} \\ &{}-\sum _{i< j}\int_{\mathbb {R}^{N}}\nabla z(x-y_{i}) \cdot\nabla z(x-y_{j}) \\ &{} -\frac{1}{2}\int_{\mathbb{R}^{N}}(v_{h,\lambda,y})^{2} -\sum_{i=1}^{n}\int_{\mathbb{R}^{N}}z(x-y_{i}) \cdot v_{h,\lambda,y}-\sum_{i<j}\int _{\mathbb{R}^{N}}z(x-y_{i})\cdot z(x-y_{j}) \\ &{} -\lambda\int_{\mathbb{R}^{N}}a(x)u_{y}v_{h,\lambda,y} - \frac{\lambda}{2}\int_{\mathbb{R}^{N}}a(x) (v_{h,\lambda,y})^{2} +\frac{1}{p}\int_{\mathbb{R}^{N}}(u_{y})^{p} +\int_{\mathbb{R}^{N}}(u_{y})^{p-1}v_{h,\lambda,y} \\ &{} -\lambda\int_{\mathbb{R}^{N}}b(x) (u_{y})^{p-1}v_{h,\lambda,y} -\frac{1}{p}\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}z^{p}(x-y_{i}) +O\bigl( \|v_{h,\lambda,y}\|^{2}\bigr). \end{aligned}$$
$$ \Phi_{\lambda}(u_{y}+v_{h,\lambda,y})=nc_{0}+ \frac{\lambda }{2}\int_{\mathbb{R}^{N}}a(x)u_{y}^{2} +\frac{\lambda}{p}\int_{\mathbb{R}^{N}}b(x)u_{y}^{p}- \mathcal {K}_{y}. $$
Thus, in order to estimate the functional \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\), it suffices to estimate \(\mathcal{K}_{y}\). Since
$$ \int_{\mathbb{R}^{N}}\nabla z(x-y_{i})\cdot\nabla v+\int _{\mathbb{R}^{N}}z(x-y_{i})v=\int_{\mathbb {R}^{N}}z^{p-1}(x-y_{i})v, \quad \forall v\in H^{1}\bigl(\mathbb{R}^{N}\bigr), $$
\(\mathcal{K}_{y}\) can be rewritten as
$$\begin{aligned} \mathcal{K}_{y} =&-\sum_{i=1}^{n} \int_{\mathbb{R}^{N}} z^{p-1}(x-y_{i})v_{h,\lambda,y}- \sum_{i< j}\int_{\mathbb{R}^{N}} z^{p-1}(x-y_{i})z(x-y_{j}) \\ &{} -\lambda\int_{\mathbb{R}^{N}}a(x)u_{y}v_{h,\lambda,y} - \frac{\lambda}{2}\int_{\mathbb{R}^{N}}a(x) (v_{h,\lambda,y})^{2} +\frac{1}{p}\int_{\mathbb{R}^{N}}(u_{y})^{p} +\int_{\mathbb{R}^{N}}(u_{y})^{p-1}v_{h,\lambda,y} \\ &{} -\lambda\int_{\mathbb{R}^{N}}b(x) (u_{y})^{p-1}v_{h,\lambda,y} -\frac{1}{p}\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}z^{p}(x-y_{i}) +O\bigl( \|v_{h,\lambda,y}\|^{2}\bigr)+\lambda O\bigl(\|v_{h,\lambda,y} \|^{2}\bigr). \end{aligned}$$
Moreover, by the Hölder inequality one has
$$\begin{aligned} \lambda\biggl\vert \int_{\mathbb{R}^{N}}a(x)u_{y}v_{h,\lambda,y} \biggr\vert \leq&\lambda C\biggl(\int_{\mathbb{R}^{N}}a(x)u_{y}^{2} \biggr)^{\frac{1}{2}}\|v_{h,\lambda ,y}\| \\ \leq& C\lambda^{2}\int _{\mathbb{R}^{N}}a(x)u_{y}^{2}+C\|v_{h,\lambda,y} \|^{2} \end{aligned}$$
$$\begin{aligned} \lambda\biggl\vert \int_{\mathbb{R}^{N}}b(x) (u_{y})^{p-1}v_{h,\lambda,y} \biggr\vert \leq& C\lambda\biggl(\int_{\mathbb{R}^{N}}b(x)u_{y}^{p} \biggr)^{\frac{p-1}{p}}\| v_{h,\lambda,y}\| \\ \leq& C\lambda^{2}\int _{\mathbb{R}^{N}}b(x)u_{y}^{p}+C\|v_{h,\lambda,y} \|^{2}. \end{aligned}$$
Therefore, we have
$$\begin{aligned} \mathcal {K}_{y} =&\int_{\mathbb{R}^{N}}(u_{y})^{p-1}v_{h,\lambda,y}- \sum_{i=1}^{n}\int_{\mathbb{R}^{N}} z^{p-1}(x-y_{i})v_{h,\lambda,y}-\sum _{i< j}\int_{\mathbb{R}^{N}} z^{p-1}(x-y_{i})z(x-y_{j}) \\ &{} +\frac{1}{p}\int_{\mathbb{R}^{N}}(u_{y})^{p} -\frac{1}{p}\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}z^{p}(x-y_{i}) +O\bigl( \|v_{h,\lambda,y}\|^{2}\bigr)+\lambda O\bigl(\|v_{h,\lambda,y} \|^{2}\bigr) \\ &{} +O \biggl(\lambda^{2} \biggl(\int_{\mathbb{R}^{N}}a(x)u_{y}^{2} +\int_{\mathbb{R}^{N}}b(x)u_{y}^{p} \biggr) \biggr). \end{aligned}$$
Lemma 2.13
There exist \(h_{0}>0\), \(\lambda_{0}>0\), and \(C_{i}>0\) (\(i=1,2,3\)) such that, if \(0<\lambda\leq\lambda_{0}\), \(h\geq h_{0}\), and \(y\in\mathcal{D}_{h}\), then \(\mathcal{K}_{y}\) satisfies
$$\begin{aligned}& \mathcal{K}_{y}\geq C\sum_{i< j}\int _{\mathbb{R}^{N}} z^{p-1}(x-y_{i})z(x-y_{j})-C_{1} \|v_{h,\lambda,y}\|^{2}-\lambda C_{2}\|v_{h,\lambda,y} \|^{2}-C_{3}\lambda^{2}, \\& \mathcal{K}_{y}\leq C \biggl(\sum_{i<j} \int_{\mathbb{R}^{N}} z^{p-1}(x-y_{i})z(x-y_{j})+ \|v_{h,\lambda,y}\|^{2}+\lambda \|v_{h,\lambda,y}\|^{2}- \lambda^{2} \biggr). \end{aligned}$$
From Lemmas 2.4 and 2.6, one sees that
$$\begin{aligned}& \Biggl\vert \int_{\mathbb{R}^{N}}(u_{y})^{p-1}v_{h,\lambda,y}- \sum_{i=1}^{n}\int_{\mathbb{R}^{N}} z^{p-1}(x-y_{i})v_{h,\lambda,y}\Biggr\vert \\& \quad \leq \Biggl(\int_{\mathbb{R}^{N}} \Biggl((u_{y})^{p-1}- \sum_{i=1}^{n}z^{p-1}(x-y_{i}) \Biggr)^{\frac{p}{p-1}} \Biggr)^{\frac{p-1}{p}} \biggl(\int_{\mathbb{R}^{N}}|v_{h,\lambda,y}|^{p} \biggr)^{\frac {1}{p}} \\& \quad \leq C \biggl(\int_{\mathbb{R}^{N}}\sum _{i\neq j}z^{p-1}(x-y_{i})z(x-y_{j}) \biggr)^{\frac{p-1}{p}}\|v_{h,\lambda ,y}\| \\& \quad \leq C \biggl(\int_{\mathbb{R}^{N}}\sum _{i\neq j}z^{p-1}(x-y_{i})z(x-y_{j}) \biggr)^{\frac{2(p-1)}{p}}+C\| v_{h,\lambda,y}\|^{2} \\& \quad \leq C \biggl(\int_{\mathbb{R}^{N}}\sum _{i\neq j}z^{p-1}(x-y_{i})z(x-y_{j}) \biggr)o(1)+C\|v_{h,\lambda,y}\|^{2}. \end{aligned}$$
Moreover, by Lemma 2.3, we have
$$ \int_{\mathbb{R}^{N}}u_{y}^{p}\geq \sum_{i=1}^{n}\int_{\mathbb {R}^{N}}z^{p}(x-y_{i}) +2(p-1)\sum_{1\leq i< j\leq n}\int_{\mathbb{R}^{N}}z^{p-1}(x-y_{i})z(x-y_{j}) $$
and by Lemma 2.1, one has
$$ \int_{\mathbb{R}^{N}}u_{y}^{p}\leq \sum_{i=1}^{n}\int_{\mathbb {R}^{N}}z^{p}(x-y_{i}) +C\sum_{1\leq i< j\leq n}\int_{\mathbb{R}^{N}}z^{p-1}(x-y_{i})z(x-y_{j}). $$
Here the fact
$$ \int_{\mathbb{R}^{N}}z^{p-1}(x-y_{i})z(x-y_{j})= \int_{\mathbb {R}^{N}}z(x-y_{i})z^{p-1}(x-y_{j}) $$
has been used; it follows from the change of variables \(x\mapsto y_{i}+y_{j}-x\) together with the radial symmetry of \(z\). Substituting (2.13)-(2.15) into (2.12), one can easily get the desired conclusion. □
Next, we are in a position to estimate \(\|v_{h,\lambda,y}\|\).
\(\|v_{h,\lambda,y}\|\) satisfies
$$\begin{aligned} \|v_{h,\lambda,y}\| \leq& C\lambda \biggl(\int_{\mathbb{R}^{N}}a(x)u_{y}^{2} \biggr)^{\frac{1}{2}} +C\lambda \biggl(\int_{\mathbb{R}^{N}}b(x)u_{y}^{p} \biggr)^{\frac{p-1}{p}} \\ &{}+C \biggl(\sum_{i< j}\int _{\mathbb {R}^{N}}z^{p-1}(x-y_{i})z(x-y_{j}) \biggr)^{\frac{p-1}{p}}. \end{aligned}$$
By Lemma 2.11, for \(v\in\mathcal{W}_{y}\), one has
$$\begin{aligned} 0 =& \bigl\langle \nabla\Phi_{\lambda}(u_{y}+v_{h,\lambda ,y}),v \bigr\rangle \\ =&\sum_{i=1}^{n}\int_{\mathbb{R}^{N}} \nabla z(x-y_{i})\cdot\nabla v+\int_{\mathbb{R}^{N}} \nabla(v_{h,\lambda,y})\cdot\nabla v \\ &{} +\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}z(x-y_{i})v+\int_{\mathbb{R}^{N}}v_{h,\lambda,y}v +\lambda\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}a(x)z(x-y_{i})v \\ &{} +\lambda\int_{\mathbb{R}^{N}}a(x)v_{h,\lambda,y}v -\int _{\mathbb{R}^{N}}P_{\lambda}|u_{y}+v_{h,\lambda ,y}|^{p-2}(u_{y}+v_{h,\lambda,y})v. \end{aligned}$$
There exists \(\theta\in(0,1)\) such that
$$\begin{aligned}& \int_{\mathbb{R}^{N}}P_{\lambda}|u_{y}+v_{h,\lambda ,y}|^{p-2}(u_{y}+v_{h,\lambda,y})v \\& \quad =(p-1)\int_{\mathbb{R}^{N}}P_{\lambda}|u_{y}+ \theta v_{h,\lambda,y}|^{p-2}v_{h,\lambda,y}v +\int_{\mathbb{R}^{N}}P_{\lambda}u_{y}^{p-1}v. \end{aligned}$$
Substituting (2.17) into (2.16) yields
$$\begin{aligned}& \int_{\mathbb{R}^{N}} \nabla(v_{h,\lambda,y})\cdot\nabla v+\int _{\mathbb{R}^{N}}v_{h,\lambda,y}v-(p-1)\int_{\mathbb {R}^{N}}P_{\lambda}|u_{y}+ \theta v_{h,\lambda,y}|^{p-2}v_{h,\lambda,y}v \\& \quad =-\lambda\int_{\mathbb{R}^{N}}a(x)v_{h,\lambda,y}v-\lambda\sum _{i=1}^{n}\int_{\mathbb{R}^{N}}a(x)z(x-y_{i})v \\& \qquad {}+\int_{\mathbb{R}^{N}}P_{\lambda}u_{y}^{p-1}v- \sum_{i=1}^{n}\int_{\mathbb{R}^{N}}z^{p-1}(x-y_{i})v. \end{aligned}$$
Using the operators \(\mathcal{N}\) and \(\mathcal{P}_{y}\) defined in Section 2.1, we have
$$\begin{aligned}& \bigl\langle v_{h,\lambda,y}-\mathcal{P}_{y}\mathcal {N}\bigl(P_{\lambda}|u_{y}+\theta v_{h,\lambda,y}|^{p-2}v_{h,\lambda,y} \bigr),v\bigr\rangle \\& \quad = -\lambda\int_{\mathbb{R}^{N}}a(x)v_{h,\lambda,y}v- \lambda\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}a(x)z(x-y_{i})v \\& \qquad {} +\int_{\mathbb{R}^{N}}P_{\lambda}u_{y}^{p-1}v- \sum_{i=1}^{n}\int_{\mathbb{R}^{N}}z^{p-1}(x-y_{i})v. \end{aligned}$$
By Lemma 2.4, one has
$$\begin{aligned}& \Biggl\vert \int_{\mathbb{R}^{N}}P_{\lambda}u_{y}^{p-1}v- \sum_{i=1}^{n}\int_{\mathbb{R}^{N}}z^{p-1}(x-y_{i})v \Biggr\vert \\& \quad \leq \Biggl(\int_{\mathbb{R}^{N}}\Biggl\vert u_{y}^{p-1}-\sum_{i=1}^{n}z^{p-1}(x-y_{i}) \Biggr\vert |v| \Biggr)+\lambda\int_{\mathbb {R}^{N}}bu_{y}^{p-1}|v| \\& \quad \leq C \biggl(\int_{\mathbb{R}^{N}}\sum_{i\neq j}z^{p-1}(x-y_{i})z(x-y_{j}) \biggr)^{\frac{p-1}{p}}\|v\| +\lambda C \biggl(\int_{\mathbb{R}^{N}}bu_{y}^{p} \biggr)^{\frac{p-1}{p}}\|v\|. \end{aligned}$$
Therefore, choosing \(v=v_{h,\lambda,y}-\mathcal{P}_{y}\mathcal {N}(P_{\lambda}|u_{y}+\theta v_{h,\lambda,y}|^{p-2}v_{h,\lambda,y})\in\mathcal{W}_{y}\) in (2.18) and using Lemmas 2.9 and 2.10, we obtain, for some \(\eta>0\),
$$\begin{aligned} \begin{aligned} \eta\|v_{h,\lambda,y}\|\|v\|\leq{}&\lambda\int_{\mathbb {R}^{N}}a(x)|v_{h,\lambda,y}v| +\lambda\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}a(x)z(x-y_{i})|v| \\ &{}+C \biggl(\int_{\mathbb{R}^{N}}\sum_{i\neq j}z^{p-1}(x-y_{i})z(x-y_{j}) \biggr)^{\frac{p-1}{p}}\|v\|+\lambda C \biggl(\int_{\mathbb{R}^{N}}bu_{y}^{p} \biggr)^{\frac{p-1}{p}}\|v\|, \end{aligned} \end{aligned}$$
which implies, for \(\lambda>0\) sufficiently small,
$$\begin{aligned} \|v_{h,\lambda,y}\|\|v\| \leq& C\lambda\biggl(\int_{\mathbb{R}^{N}}a(x)u_{y}^{2} \biggr)^{\frac{1}{2}}\|v\| +\lambda C\biggl(\int_{\mathbb{R}^{N}}bu_{y}^{p} \biggr)^{\frac{p-1}{p}}\|v\| \\ &{}+C\biggl(\int_{\mathbb{R}^{N}}\sum_{i\neq j}z^{p-1}(x-y_{i})z(x-y_{j}) \biggr)^{\frac{p-1}{p}}\|v\|. \end{aligned}$$
Thus, we obtain the result. □
Proof of Theorem 1.1
The main purpose of this section is to prove Theorem 1.1. For this, we shall prove that, for \(\lambda>0\) small enough, we can choose \(\mu(\lambda)\) large enough such that the function \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\) defined in Section 2.1 reaches its maximum in \(\mathcal{D}_{\mu}\) at some point \(y^{0}=(y_{1}^{0},\ldots,y_{n}^{0})\). Then \(u_{y^{0}}+v_{h,\lambda,y^{0}}\) is a solution of (\(\mathcal {S}_{\lambda}\)) by Lemma 2.12.
We shall mainly consider the case \(n\geq2\) since the case \(n=1\) is much easier. Define
$$ \gamma= \sup_{y\in(\mathbb{R}^{N})^{n}} \biggl(\int_{\mathbb{R}^{N}}b(x)u_{y}^{p}(x)+ \int_{\mathbb {R}^{N}}a(x)u_{y}^{2}(x) \biggr). $$
By Lemmas 2.1 and 2.2, there exist \(\lambda'_{0}>0\), \(h'_{0}>0\), and \(C'_{i}>0\) (\(i=1,2,3\)) such that, if \(0<\lambda\leq\lambda'_{0}\), \(h\geq h'_{0}\), and \(y\in\mathcal {D}_{h}\), then \(\mathcal{K}_{y}\) satisfies
$$ \mathcal{K}_{y}\geq C'_{1}\sum _{i< j}\int_{\mathbb{R}^{N}} z^{p-1}(x-y_{i})z(x-y_{j})-C'_{2} \lambda^{2}-C'_{3}\lambda^{3}. $$
Here and in the sequel, \(C_{i}\), \(C'_{i}\), and C are various positive constants independent of λ. We choose a number k such that \(k>\max\{1,12\gamma/C'_{1}\}\). Then, for any λ satisfying
$$ 0<\lambda<\lambda'=\min \biggl\{ \frac{\|z\|_{L^{p}}}{k}, \frac {kC'_{1}}{2C'_{2}},\sqrt{\frac{kC'_{1}}{4C'_{3}}},\lambda_{0} \biggr\} , $$
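For later use, note how the constraints in the definition of \(\lambda'\) enter the computation: if \(0<\lambda<\lambda'\), then
$$ C'_{2}\lambda^{2}\leq C'_{2}\cdot\frac{kC'_{1}}{2C'_{2}}\cdot\lambda=\frac{1}{2}C'_{1}k\lambda \quad\text{and}\quad C'_{3}\lambda^{3}\leq C'_{3}\cdot\frac{kC'_{1}}{4C'_{3}}\cdot\lambda=\frac{1}{4}C'_{1}k\lambda, $$
which are precisely the middle steps in the chain of inequalities (3.5) below, while the choice \(k>12\gamma/C'_{1}\) gives \(\frac{1}{4}C'_{1}k\lambda\geq3\gamma\lambda\).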
there exists \(\mu^{*}=\mu^{*}(\lambda)>\mu=\mu(\lambda)>0\) such that, for \(w\in\mathbb{R}^{N}\) with \(|w|\in[\mu,\mu^{*}]\),
$$ k\lambda\leq\int_{\mathbb{R}^{N}}z^{p-1}(x)z(x-w) \leq2k\lambda. $$
Define
$$ \Gamma_{\lambda}=\sup\bigl\{ \Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\mid y \in \mathcal {D}_{\mu}\bigr\} . $$
To obtain an n-bump solution of (\(\mathcal{S}_{\lambda}\)), it suffices to prove that \(\Gamma_{\lambda}\) is achieved in the interior of \(\mathcal{D}_{\mu}\).
Assume \(n\geq2\). Then there exists \(\lambda_{1}\in(0,\lambda')\) such that, for \(0<\lambda<\lambda_{1}\),
$$ \Gamma_{\lambda}>\sup \bigl\{ \Phi_{\lambda}(u_{y}+v_{h,\lambda ,y})\mid y \in\mathcal {D}_{\mu} \textit{ and } |y_{i}-y_{j}| \in\bigl[\mu,\mu^{*}\bigr] \textit{ for some } i\neq j \bigr\} . $$
Note that \(\mu(\lambda)\rightarrow\infty\) as \(\lambda\rightarrow0\). By Lemma 2.2 and (3.3) we see that, if \(y\in\mathcal{D}_{\mu(\lambda)}\), then
$$ \|v_{\mu,\lambda,y}\|\leq C\lambda^{\frac{p-1}{p}}. $$
Suppose that \(y=(y_{1},\ldots,y_{n})\in\mathcal {D}_{\mu(\lambda)}\) and \(|y_{i}-y_{j}|\in[\mu(\lambda),\mu^{*}(\lambda)]\) for some \(i\neq j\). By (3.1)-(3.3), one has
$$ \mathcal{K}_{y}\geq C'_{1}k \lambda-C'_{2}\lambda^{2}-C'_{3} \lambda^{3}\geq\frac {1}{2}C'_{1}k \lambda-C'_{3}\lambda^{3} \geq \frac{1}{4}C'_{1}k\lambda\geq3\gamma\lambda. $$
By (2.11) and (3.5), for \(\lambda>0\) small enough, we obtain
$$\begin{aligned} \Phi_{\lambda}(u_{y}+v_{\mu,\lambda,y}) =&nc_{0}+ \frac{\lambda }{2}\int_{\mathbb{R}^{N}}a(x)u_{y}^{2} +\frac{\lambda}{p}\int_{\mathbb{R}^{N}}b(x)u_{y}^{p}- \mathcal {K}_{y} \\ \leq& nc_{0}+\gamma\lambda-3\gamma\lambda=nc_{0}-2\gamma \lambda \end{aligned}$$
for \(y=(y_{1},\ldots,y_{n})\in\mathcal{D}_{\mu(\lambda)}\) with \(|y_{i}-y_{j}|\in[\mu(\lambda),\mu^{*}(\lambda)]\) for some \(i\neq j\). On the other hand, if \(y=(y_{1},\ldots,y_{n})\in\mathcal{D}_{\mu(\lambda)}\) and \(|y_{i}-y_{j}|\rightarrow\infty\) for all \(i\neq j\), then by (2.11) and Lemmas 2.1 and 2.2, we have
$$\begin{aligned} \Phi_{\lambda}(u_{y}+v_{\mu,\lambda,y}) =&nc_{0}+ \frac{\lambda }{2}\int_{\mathbb{R}^{N}}a(x)u_{y}^{2} +\frac{\lambda}{p}\int_{\mathbb{R}^{N}}b(x)u_{y}^{p}- \mathcal {K}_{y} \\ \geq& nc_{0}+\frac{\lambda}{p} \biggl(\int_{\mathbb {R}^{N}}a(x)u_{y}^{2}+ \int_{\mathbb{R}^{N}}b(x)u_{y}^{p} \biggr) -C \lambda^{2} \\ &{} -C_{4}\lambda\|v_{\mu,\lambda,y}\|^{2}-C_{5} \|v_{\mu,\lambda ,y}\|^{2}+o(1) \\ \geq& nc_{0}+\frac{\lambda}{p} \biggl(\int_{\mathbb {R}^{N}}a(x)u_{y}^{2}+ \int_{\mathbb{R}^{N}}b(x)u_{y}^{p} \biggr) -C \lambda^{2} \\ &{} -C'_{4}\|v_{\mu,\lambda,y}\|^{2}+o(1) \\ \geq& nc_{0}+\frac{\lambda}{p} \biggl(\int_{\mathbb {R}^{N}}a(x)u_{y}^{2}+ \int_{\mathbb{R}^{N}}b(x)u_{y}^{p} \biggr) -C \lambda^{2} \\ &{} -\lambda^{2}C'_{5} \biggl(\int _{\mathbb {R}^{N}}a(x)u_{y}^{2}+\int _{\mathbb{R}^{N}}b(x)u_{y}^{p} \biggr)+o(1), \end{aligned}$$
where \(o(1)\) means some quantities which depend only on y and converge to 0 as \(|y_{i}-y_{j}|\rightarrow\infty\) for all \(i\neq j\). Therefore, for λ small enough,
$$ \liminf_{|y_{i}-y_{j}|\rightarrow\infty,\forall i\neq j}\Phi_{\lambda}(u_{y}+v_{h,\lambda,y}) \geq nc_{0}. $$
Since \(nc_{0}>nc_{0}-2\gamma\lambda\), this inequality together with (3.6) shows that configurations with some \(|y_{i}-y_{j}|\in[\mu,\mu^{*}]\) cannot realize \(\Gamma_{\lambda}\). Thus, we obtain the result. □
We choose \(y^{k}(\lambda)=(y_{1}^{k}(\lambda),\ldots,y_{n}^{k}(\lambda))\in \mathcal {D}_{\mu(\lambda)}\) such that
$$ \lim_{k\rightarrow\infty}\Phi_{\lambda}(u_{y^{k}(\lambda)}+v_{\mu ,\lambda,y^{k}(\lambda)})= \Gamma_{\lambda}. $$
Then Lemma 3.1 implies that
$$ \inf_{k}\min_{i\neq j}\bigl\vert y_{i}^{k}(\lambda)-y_{j}^{k}(\lambda) \bigr\vert \geq\mu^{*}(\lambda). $$
Therefore, for any \(1\leq i\leq n\), passing to a subsequence if necessary, we may assume either \(\lim_{k\rightarrow\infty}y_{i}^{k}(\lambda)=y_{i}^{0}(\lambda)\) with \(|y_{i}^{0}(\lambda)-y_{j}^{0}(\lambda)|\geq\mu^{*}\) for \(i\neq j\) or \(\lim_{k\rightarrow\infty}|y_{i}^{k}(\lambda)|=\infty\). Define
$$ \mathcal{U}(\lambda)= \bigl\{ 1\leq i\leq n\mid \bigl\vert y_{i}^{k}( \lambda)\bigr\vert \rightarrow\infty, \text{as } k\rightarrow\infty \bigr\} . $$
In the following, we shall prove that \(\mathcal {U}(\lambda)=\emptyset\) for \(\lambda>0\) sufficiently small and thus \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\) attains its maximum at \((y_{1}^{0}(\lambda),\ldots,y_{n}^{0}(\lambda))\in\mathcal {D}_{\mu(\lambda)}\).
Assume \(n\geq2\). Then there exists \(\lambda(n)>0\) such that for \(\lambda\in(0,\lambda(n))\), \(\mathcal{U}(\lambda)=\emptyset\).
We adopt an argument borrowed from Lin and Liu [39, 40]. We argue by contradiction and assume that \(\mathcal{U}(\lambda)\neq\emptyset\) along a sequence \(\lambda_{m}\rightarrow0\). Without loss of generality, we may assume \(\mathcal {U}(\lambda_{m})=\{1,\ldots,j_{n}\}\) for all \(m\in\mathbb{N}\) and for some \(1\leq j_{n}< n\). The case in which \(j_{n}=n\) can be handled similarly. For notational convenience, we shall denote \(\lambda_{m}=\lambda\), \(y_{i}^{k}=y_{i}^{k}(\lambda_{m})\), \(y^{k}=(y_{1}^{k},\ldots,y_{n}^{k})\), \(y_{*}^{k}=(y_{j_{n}+1}^{k},\ldots,y_{n}^{k})\), and \(y_{*}^{0}=(y_{j_{n}+1}^{0},\ldots,y_{n}^{0})\) for \(k=1,2,\ldots\) . Then, as \(k\rightarrow\infty\),
$$ \bigl\vert y_{1}^{k}\bigr\vert \rightarrow\infty,\qquad \ldots,\qquad \bigl|y_{j_{n}}^{k}\bigr|\rightarrow\infty $$
and
$$ y_{j_{n}+1}^{k}\rightarrow y_{j_{n}+1}^{0}, \qquad \ldots, \qquad y_{n}^{k}\rightarrow y_{n}^{0}. $$
Set
$$ w_{k}=\sum_{i=1}^{n}z \bigl(x-y_{i}^{k}\bigr),\qquad w_{k,1}=\sum _{i=1}^{j_{n}}z\bigl(x-y_{i}^{k} \bigr) $$
and
$$ w_{k,2}=\sum_{i=j_{n}+1}^{n}z \bigl(x-y_{i}^{k}\bigr),\qquad w_{y_{*}^{0}}=\sum _{i=j_{n}+1}^{n}z\bigl(x-y_{i}^{0} \bigr). $$
Similar to (3.4), we have
$$ \Vert v_{\mu,\lambda,y^{k}}\Vert \leq C\lambda^{\frac{p-1}{p}}, \qquad \Vert v_{\mu,\lambda,y_{*}^{k}}\Vert \leq C\lambda^{\frac{p-1}{p}},\quad k=1,2, \ldots. $$
By (2.11), we obtain
$$\begin{aligned} \begin{aligned}[b] \Phi_{\lambda}(w_{k}+v_{\mu,\lambda,y^{k}})={}&nc_{0}+ \frac{\lambda }{2}\int_{\mathbb{R}^{N}}a(x)w_{k}^{2} +\frac{\lambda}{p}\int_{\mathbb{R}^{N}}b(x)w_{k}^{p}- \mathcal {K}_{y^{k}} \\ ={}&j_{n}c_{0}+\frac{\lambda}{2}\int_{\mathbb {R}^{N}}a(x)w_{k,1}^{2}+ \frac{\lambda}{2}\int_{\mathbb {R}^{N}}a(x)w_{k,2}^{2} +\lambda\int_{\mathbb{R}^{N}}a(x)w_{k,1}w_{k,2} \\ &{} +(n-j_{n})c_{0}+\frac{\lambda}{p}\int _{\mathbb {R}^{N}}b(x)w_{k}^{p}+\mathcal {K}_{y_{*}^{k}}-\mathcal{K}_{y^{k}}-\mathcal{K}_{y_{*}^{k}} \\ ={}&j_{n}c_{0}+\frac{\lambda}{2}\int_{\mathbb {R}^{N}}a(x)w_{k,1}^{2}+ \lambda\int_{\mathbb {R}^{N}}a(x)w_{k,1}w_{k,2}+ \mathcal {K}_{y_{*}^{k}}-\mathcal{K}_{y^{k}} \\ &{} +\frac{\lambda}{p}\int_{\mathbb {R}^{N}}b(x) \bigl(w_{k}^{p}-w_{k,2}^{p} \bigr)+\Phi_{\lambda}(w_{k,2}+v_{\mu ,\lambda,y_{*}^{k}}). \end{aligned} \end{aligned}$$
By Lemma 2.1, one sees
$$\begin{aligned} \int_{\mathbb{R}^{N}}b(x) \bigl(w_{k}^{p}-w_{k,2}^{p} \bigr) \leq& C_{6}\sum_{i=1}^{j_{n}} \int_{\mathbb{R}^{N}}b(x)z^{p-1}\bigl(x-y_{i}^{k} \bigr)w_{k,2} +C_{8}\sum_{i=1}^{j_{n}} \int_{\mathbb{R}^{N}}b(x)z^{p}\bigl(x-y_{i}^{k} \bigr) \\ &{} +C_{7}\sum_{j=j_{n}+1}^{n}\int _{\mathbb {R}^{N}}b(x)z^{p-1}\bigl(x-y_{j}^{k} \bigr)w_{k,1}. \end{aligned}$$
Therefore, since \(|y_{i}^{k}|\rightarrow\infty\), \(i=1,\ldots,j_{n}\), as \(k\rightarrow\infty\), we obtain
$$ \frac{\lambda}{2}\int_{\mathbb{R}^{N}}a(x)w_{k,1}^{2}+ \lambda\int_{\mathbb{R}^{N}}a(x)w_{k,1}w_{k,2} \rightarrow0, \quad \text{as } k\rightarrow\infty. $$
Furthermore, by (3.9) and the condition (\(\mathcal{R}_{1}\)), we have
$$ \frac{\lambda}{p}\int_{\mathbb {R}^{N}}b(x) \bigl(w_{k}^{p}-w_{k,2}^{p}\bigr) \rightarrow0,\quad \text{as } k\rightarrow\infty. $$
From (3.8), (3.10) and (3.11), we arrive at
$$ \Phi_{\lambda}(w_{k}+v_{\mu,\lambda,y^{k}})\leq \Phi_{\lambda }(w_{k,2}+v_{\mu,\lambda,y_{*}^{k}})+j_{n}c_{0} +\mathcal{K}_{y_{*}^{k}}-\mathcal{K}_{y^{k}}+o(1). $$
Using Lemma 2.4, (3.3), and (3.7), we obtain
$$\begin{aligned}& \int_{\mathbb{R}^{N}} \Biggl\vert \sum _{i=1}^{n}z^{p-1}\bigl(x-y_{i}^{k} \bigr)- \Biggl(\sum_{i=1}^{n}z \bigl(x-y_{i}^{k}\bigr) \Biggr)^{p-1}\Biggr\vert |v_{\mu,\lambda ,y^{k}}| \\& \quad \leq C \biggl(\sum_{i< j}\int _{\mathbb {R}^{N}}z^{p-1}\bigl(x-y_{i}^{k} \bigr)z\bigl(x-y_{j}^{k}\bigr) \biggr)^{\frac{p-1}{p}}\| v_{\mu,\lambda,y^{k}}\| \\& \quad \leq C'\lambda^{\frac{2(p-1)}{p}}. \end{aligned}$$
From Lemma 2.2, (2.12), (3.7), and (3.13), one gets
$$\begin{aligned} \mathcal {K}_{y^{k}} =&\frac{1}{p}\int _{\mathbb{R}^{N}} \Biggl(\sum_{i=1}^{n}z \bigl(x-y_{i}^{k}\bigr) \Biggr)^{p} - \frac{1}{p}\sum_{i=1}^{n}\int _{\mathbb{R}^{N}}z^{p}\bigl(x-y_{i}^{k} \bigr) \\ &{} -\sum_{i< j}\int_{\mathbb {R}^{N}}z^{p-1} \bigl(x-y_{i}^{k}\bigr)z\bigl(x-y_{j}^{k} \bigr)+O\bigl(\lambda^{\frac{2(p-1)}{p}}\bigr). \end{aligned}$$
In the same way, we have
$$\begin{aligned} \mathcal {K}_{y_{*}^{k}} =&\frac{1}{p}\int _{\mathbb{R}^{N}} \Biggl(\sum_{i=j_{n}+1}^{n}z \bigl(x-y_{i}^{k}\bigr) \Biggr)^{p} - \frac{1}{p}\sum_{i=j_{n}+1}^{n}\int _{\mathbb {R}^{N}}z^{p}\bigl(x-y_{i}^{k} \bigr) \\ &{} -\sum_{j_{n}< i<j}\int_{\mathbb {R}^{N}}z^{p-1} \bigl(x-y_{i}^{k}\bigr)z\bigl(x-y_{j}^{k} \bigr)+O\bigl(\lambda^{\frac{2(p-1)}{p}}\bigr). \end{aligned}$$
We infer from (3.14) and (3.15) that
$$\begin{aligned} \mathcal{K}_{y_{*}^{k}}-\mathcal {K}_{y^{k}} =& \frac{1}{p}\int_{\mathbb{R}^{N}}(w_{k,2})^{p}- \frac {1}{p}\int_{\mathbb{R}^{N}}(w_{k})^{p} + \frac{1}{p}\sum_{i=1}^{j_{n}}\int _{\mathbb {R}^{N}}z^{p}\bigl(x-y_{i}^{k} \bigr) \\ &{} +\sum_{i< j\leq j_{n}}\int_{\mathbb{R}^{N}}z^{p-1} \bigl(x-y_{i}^{k}\bigr)z\bigl(x-y_{j}^{k} \bigr) +O\bigl(\lambda^{\frac{2(p-1)}{p}}\bigr) \\ &{} +\sum_{i=1}^{j_{n}}\int _{\mathbb{R}^{N}}z^{p-1}\bigl(x-y_{i}^{k} \bigr)w_{k,2}. \end{aligned}$$
By Lemma 2.3, the sum of the terms except \(O(\lambda^{\frac{2(p-1)}{p}})\) on the right side of (3.16) is negative. Thus, one has
$$ \mathcal{K}_{y_{*}^{k}}-\mathcal{K}_{y^{k}}\leq O \bigl(\lambda^{\frac{2(p-1)}{p}}\bigr). $$
Letting \(k\rightarrow\infty\), by (3.12), and using (3.17), we obtain
$$ \Gamma_{\lambda}\leq j_{n}c_{0}+ \Phi_{\lambda}(w_{y_{*}^{0}}+v_{\mu,\lambda ,y_{*}^{0}})+C'_{8} \lambda^{\frac{2(p-1)}{p}}. $$
On the other hand, by Lemma 2.6 and (3.3), there exist \(C_{9}, C_{10}> 0\) such that
$$ C_{9}\lambda\leq\mu^{-\frac{N-2}{2}}e^{-\mu}\leq C_{10}\lambda, $$
which implies for λ small enough
$$ (1-\delta)\ln\frac{1}{\lambda}\leq\mu=\mu(\lambda)\leq (1+\delta) \ln\frac{1}{\lambda}, $$
where \(0<\delta<\frac{1}{p}\); indeed, taking logarithms in the preceding two-sided bound gives \(\mu+\frac{N-2}{2}\ln\mu\in [\ln\frac{1}{C_{10}\lambda},\ln\frac{1}{C_{9}\lambda} ]\), and \(\ln\mu=o(\mu)\) as \(\lambda\rightarrow0\). We choose τ such that \(0<\tau<\frac{p-2}{10np}\). By (\(\mathcal{R}_{2}\)), there exists \(R>0\) such that
$$ a(x)\geq e^{-\tau|x|}, \quad |x|\geq R $$
or
$$ b(x)\geq e^{-\tau|x|},\quad |x|\geq R. $$
For \(\lambda>0\) small enough, define
$$ \hat{y}_{s}^{\lambda}=\biggl(10n\ln \frac{1}{\lambda}-4s\mu(\lambda ),0,\ldots,0\biggr)\in\mathbb{R}^{N}, \quad s=1,2,\ldots,n. $$
The open balls \(B(\hat{y}_{s}^{\lambda},2\mu(\lambda))\), \(s=1,\ldots,n\), are mutually disjoint, so each point \(y_{j}^{0}\), \(j=j_{n}+1,\ldots,n\), can lie within \(2\mu(\lambda)\) of at most one of the centers \(\hat{y}_{s}^{\lambda}\). Thus there are \(j_{n}\) integers from \(\{1,\ldots,n\}\), denoted by \(s_{1}< s_{2}<\cdots< s_{j_{n}}\), such that
$$ \bigl\vert \hat{y}_{s_{i}}^{\lambda}-y_{j}^{0} \bigr\vert \geq2\mu(\lambda),\quad i=1,\ldots,j_{n}, j=j_{n}+1,\ldots,n. $$
Denote \(\hat{y}_{s_{i}}^{\lambda}\) by \(y_{i}^{\lambda}\) for simplicity, \(i=1,\ldots,j_{n}\). By (3.20), (3.23), and (3.24), one has
$$\begin{aligned}& R+1\leq\bigl\vert y_{i}^{\lambda}\bigr\vert \leq10n\ln\frac{1}{\lambda},\quad i=1,\ldots,j_{n}, \end{aligned}$$
$$\begin{aligned}& \bigl\vert y_{i}^{\lambda}-y_{j}^{\lambda} \bigr\vert \geq2\mu(\lambda),\quad 1\leq i< j\leq j_{n}, \end{aligned}$$
$$\begin{aligned}& \bigl\vert y_{i}^{\lambda}-y_{j}^{0} \bigr\vert \geq2\mu(\lambda),\quad i=1,\ldots, j_{n}, j=j_{n}+1,\ldots,n. \end{aligned}$$
Hence
$$ \bigl(y_{1}^{\lambda},\ldots,y_{j_{n}}^{\lambda},y_{j_{n}+1}^{0}, \ldots ,y_{n}^{0}\bigr)\in\mathcal {D}_{\mu(\lambda)}. $$
Denote \(y^{\lambda}=(y_{1}^{\lambda},\ldots,y_{j_{n}}^{\lambda },y_{j_{n}+1}^{0},\ldots,y_{n}^{0})\). Set \(w_{\lambda,1}=\sum_{i=1}^{j_{n}}z(x-y_{i}^{\lambda})\), \(w_{y_{*}^{0}}=\sum_{i=j_{n}+1}^{n}z(x-y_{i}^{0})\), and \(w_{\lambda}=w_{\lambda,1}+w_{y_{*}^{0}}\). Similar to (3.8), one has
$$\begin{aligned} \Phi_{\lambda}(w_{\lambda}+v_{\mu,\lambda,y^{\lambda}}) =&j_{n}c_{0}+\frac{\lambda}{2}\int_{\mathbb{R}^{N}}a(x)w_{\lambda ,1}^{2}+ \lambda\int_{\mathbb{R}^{N}}a(x)w_{\lambda ,1}w_{y_{*}^{0}}+ \mathcal {K}_{y_{*}^{\lambda}}-\mathcal{K}_{y^{\lambda}} \\ &{} +\frac{\lambda}{p}\int_{\mathbb{R}^{N}}b(x) \bigl(w_{\lambda }^{p}-w_{y_{*}^{0}}^{p} \bigr)+\Phi_{\lambda}(w_{y_{*}^{0}}+v_{\mu,\lambda ,y_{*}^{0}}). \end{aligned}$$
As in (3.16), we have
$$\begin{aligned} \mathcal{K}_{y_{*}^{\lambda}}-\mathcal {K}_{y^{\lambda}} =&\frac{1}{p} \int_{\mathbb {R}^{N}}(w_{y_{*}^{0}})^{p}-\frac{1}{p} \int_{\mathbb {R}^{N}}(w_{\lambda})^{p} +\frac{1}{p} \sum_{i=1}^{j_{n}}\int_{\mathbb {R}^{N}}z^{p} \bigl(x-y_{i}^{\lambda}\bigr) \\ &{} +\sum_{i< j\leq j_{n}}\int_{\mathbb{R}^{N}}z^{p-1} \bigl(x-y_{i}^{\lambda}\bigr)z\bigl(x-y_{j}^{\lambda} \bigr) +O\bigl(\lambda^{\frac{2(p-1)}{p}}\bigr) \\ &{} +\sum_{i=1}^{j_{n}}\int _{\mathbb {R}^{N}}z^{p-1}\bigl(x-y_{i}^{\lambda} \bigr)w_{y_{*}^{0}} \\ \geq&\frac{1}{p}\int_{\mathbb{R}^{N}}(w_{y_{*}^{0}})^{p} +\frac{1}{p}\sum_{i=1}^{j_{n}}\int _{\mathbb {R}^{N}}z^{p}\bigl(x-y_{i}^{\lambda} \bigr) \\ &{} -\frac{1}{p}\int_{\mathbb{R}^{N}}(w_{\lambda})^{p}+O \bigl(\lambda^{\frac{2(p-1)}{p}} \bigr). \end{aligned}$$
Together with Lemma 2.1 this implies that
$$\begin{aligned} \mathcal{K}_{y_{*}^{\lambda}}-\mathcal{K}_{y^{\lambda}} \geq& -C \sum_{i=1}^{j_{n}}\int_{\mathbb{R}^{N}}z^{p-1} \bigl(x-y_{i}^{\lambda }\bigr)w_{y_{*}^{0}} -C\sum _{i=1}^{j_{n}}\int_{\mathbb {R}^{N}}(w_{y_{*}^{0}})^{p-1}z \bigl(x-y_{i}^{\lambda}\bigr) \\ &{}-C\sum_{1\leq i< j\leq j_{n}}\int_{\mathbb{R}^{N}}z^{p-1} \bigl(x-y_{i}^{\lambda }\bigr)z\bigl(x-y_{j}^{\lambda} \bigr) +O \bigl(\lambda^{\frac{2(p-1)}{p}} \bigr). \end{aligned}$$
By Lemma 2.6, (3.20), and (3.26), one sees that
$$\begin{aligned} \int_{\mathbb{R}^{N}}z^{p-1}\bigl(x-y_{i}^{\lambda} \bigr)z\bigl(x-y_{j}^{\lambda}\bigr) \leq& C_{11}e^{-2\mu(\lambda)} \leq C_{12}e^{-2(1-\delta)\ln\frac{1}{\lambda}} \\ =&C_{12}\lambda^{2(1-\delta)} \leq C_{13} \lambda^{\frac{2(p-1)}{p}}. \end{aligned}$$
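The last step above uses only the choice of \(\delta\): since \(0<\delta<\frac{1}{p}\), we have
$$ 2(1-\delta)>2\biggl(1-\frac{1}{p}\biggr)=\frac{2(p-1)}{p}, $$
so that \(\lambda^{2(1-\delta)}\leq\lambda^{\frac{2(p-1)}{p}}\) for all \(0<\lambda\leq1\).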
In view of (3.27), a similar argument shows that
$$ \sum_{i=1}^{j_{n}}\int _{\mathbb{R}^{N}}z^{p-1}\bigl(x-y_{i}^{\lambda } \bigr)w_{y_{*}^{0}} +\sum_{i=1}^{j_{n}}\int _{\mathbb {R}^{N}}(w_{y_{*}^{0}})^{p-1}z\bigl(x-y_{i}^{\lambda} \bigr)\leq C_{14}\lambda^{\frac{2(p-1)}{p}}. $$
Combining (3.29)-(3.31), we have
$$ \mathcal{K}_{y_{*}^{\lambda}}-\mathcal {K}_{y^{\lambda}}\geq-C_{15} \lambda^{\frac{2(p-1)}{p}}. $$
Together with (3.28), it follows that
$$\begin{aligned} \Phi_{\lambda}(u_{y^{\lambda}}+v_{\mu,\lambda,y^{\lambda}}) \geq& j_{n}c_{0}+\frac{\lambda}{2}\int_{\mathbb{R}^{N}}a(x)w_{\lambda ,1}^{2}+ \lambda\int_{\mathbb{R}^{N}}a(x)w_{\lambda,1}w_{y_{*}^{0}} -C_{15}\lambda^{\frac{2(p-1)}{p}} \\ &{} +\frac{\lambda}{p}\int_{\mathbb{R}^{N}}b(x) \bigl(w_{y^{\lambda }}^{p}-w_{y_{*}^{0}}^{p} \bigr) +\Phi_{\lambda}(w_{y_{*}^{0}}+v_{\mu,\lambda,y_{*}^{0}}). \end{aligned}$$
We distinguish the following two cases to finish the proof of this lemma.
(i) If (3.21) holds, then by (3.25), we have, for \(i=1,\ldots,j_{n}\),
$$\begin{aligned} \int_{\mathbb{R}^{N}}a(x)w_{\lambda,1}^{2} \geq&\int_{|x-y_{i}^{\lambda}|\leq1}a(x)z^{2}\bigl(x-y_{i}^{\lambda} \bigr) \geq\int_{|x-y_{i}^{\lambda}|\leq1}e^{-\tau |x|}z^{2} \bigl(x-y_{i}^{\lambda}\bigr) \\ \geq& C_{16}e^{-\tau|y_{i}^{\lambda}|}\geq C_{16}e^{-10n\tau\ln\frac{1}{\lambda}}=C_{16} \lambda^{10n\tau}. \end{aligned}$$
Hence
$$ \Phi_{\lambda}(u_{y^{\lambda}}+v_{\mu,\lambda,y^{\lambda}})\geq j_{n}c_{0}+\Phi_{\lambda}(w_{y_{*}^{0}}+v_{\mu,\lambda ,y_{*}^{0}})+C_{16} \lambda^{10n\tau+1} -C_{15}\lambda^{\frac{2(p-1)}{p}}. $$
Since \(\frac{2(p-1)}{p}-1=\frac{p-2}{p}>10n\tau\) by the choice of τ, we have \(10n\tau+1<\frac{2(p-1)}{p}\), and therefore, for λ small enough,
$$ \Phi_{\lambda}(u_{y^{\lambda}}+v_{\mu,\lambda,y^{\lambda}})\geq j_{n}c_{0}+\Phi_{\lambda}(w_{y_{*}^{0}}+v_{\mu,\lambda ,y_{*}^{0}})+C'_{16} \lambda^{10n\tau+1}, $$
which contradicts (3.18).
(ii) Suppose that (3.22) holds. Similar to (3.32), one has
$$\begin{aligned} \int_{\mathbb{R}^{N}}b(x) \bigl(u_{y^{\lambda }}^{p}-u_{y_{*}^{0}}^{p} \bigr) \geq&\int_{|x-y_{1}^{\lambda}|\leq 1}b(x)z^{p} \bigl(x-y_{1}^{\lambda}\bigr) \geq\int_{|x-y_{1}^{\lambda}|\leq1}e^{-\tau |x|}z^{p} \bigl(x-y_{1}^{\lambda}\bigr) \\ \geq& C_{17}e^{-\tau|y_{1}^{\lambda}|}\geq C_{17}e^{-10n\tau\ln\frac{1}{\lambda}}=C_{17} \lambda^{10n\tau}. \end{aligned}$$
Repeating the arguments of (i), we get, for λ small enough,
$$ \Phi_{\lambda}(u_{y^{\lambda}}+v_{\mu,\lambda,y^{\lambda}})\geq j_{n}c_{0}+\Phi_{\lambda}(w_{y_{*}^{0}}+v_{\mu,\lambda ,y_{*}^{0}})+C'_{17} \lambda^{10n\tau+1}. $$
This contradicts (3.18).
From (i) and (ii), we know that there exists \(\lambda(n)>0\) such that, if \(0 <\lambda<\lambda(n)\), then \(\mathcal {U}(\lambda)=\emptyset\) and \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\) reaches its maximum at some point \((y_{1}^{0},\ldots,y_{n}^{0})\in\mathcal {D}_{\mu(\lambda)}\). □
Next, we shall prove Theorem 1.1.
For \(n\geq2\), according to Lemma 3.2, if \(0<\lambda<\lambda(n)\), then \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\) reaches its maximum at some point \(y^{0}=(y_{1}^{0},\ldots,y_{n}^{0})\in\mathcal {D}_{\mu(\lambda)}\). Then \(u_{y^{0}}+v_{h,\lambda,y^{0}}\) is an n-bump solution of (\(\mathcal{S}_{\lambda}\)). For \(n=1\), as a consequence of Lemma 2.11(iii), if \(\lambda\in(0,\lambda_{0}]\), then
$$ \lim_{|y|\rightarrow\infty}\Phi_{\lambda}(u_{y}+v_{h,\lambda ,y})= \Phi_{0}(z)=c_{0}. $$
Since \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\) is defined on all of \(\mathbb{R}^{N}\) and tends to \(c_{0}\) as \(|y|\rightarrow\infty\), \(\Phi_{\lambda}(u_{y}+v_{h,\lambda,y})\) has a critical point \(y^{0}\in\mathbb{R}^{N}\), and \(u_{y^{0}}+v_{h,\lambda,y^{0}}\) is a 1-bump solution of (\(\mathcal {S}_{\lambda}\)). By arguments similar to those in [34, 35], one sees that \(u_{y^{0}}+v_{h,\lambda,y^{0}}\) is a positive solution of (\(\mathcal{S}_{\lambda}\)). Set \(\lambda(1)=\lambda_{0}\) and \(\lambda_{1}(n)=\min\{\lambda(1),\ldots,\lambda(n)\}\). If \(0<\lambda<\lambda_{1}(n)\), then (\(\mathcal{S}_{\lambda}\)) has at least n nontrivial positive solutions. □
Ambrosetti, A, Malchiodi, A, Secchi, S: Multiplicity results for some nonlinear Schrödinger equations with potentials. Arch. Ration. Mech. Anal. 159(3), 253-271 (2001)
Besieris, I-M: Solitons in randomly inhomogeneous media. In: Nonlinear Electromagnetics, pp. 87-116. Academic Press, New York (1980)
Byeon, J, Oshita, Y: Existence of multi-bump standing waves with a critical frequency for nonlinear Schrödinger equations. Commun. Partial Differ. Equ. 29(11-12), 1877-1904 (2004)
Cingolani, S, Nolasco, M: Multi-peak periodic semiclassical states for a class of nonlinear Schrödinger equations. Proc. R. Soc. Edinb., Sect. A 128(6), 1249-1260 (1998)
Kang, X-S, Wei, J-C: On interacting bumps of semi-classical states of nonlinear Schrödinger equations. Adv. Differ. Equ. 5(7-9), 899-928 (2000)
Li, Y-Y: On a singularly perturbed elliptic equation. Adv. Differ. Equ. 2(6), 955-980 (1997)
Oh, Y-G: On positive multi-lump bound states of nonlinear Schrödinger equations under multiple well potential. Commun. Math. Phys. 131(2), 223-253 (1990)
Floer, A, Weinstein, A: Nonspreading wave packets for the cubic Schrödinger equation with a bounded potential. J. Funct. Anal. 69(3), 397-408 (1986)
Oh, Y-G: Existence of semiclassical bound states of nonlinear Schrödinger equations with potentials of the class \((V)_{a}\). Commun. Partial Differ. Equ. 13(12), 1499-1519 (1988)
Rabinowitz, P-H: On a class of nonlinear Schrödinger equations. Z. Angew. Math. Phys. 43(2), 270-291 (1992)
Del Pino, M, Felmer, P-L: Local mountain passes for semilinear elliptic problems in unbounded domains. Calc. Var. Partial Differ. Equ. 4(2), 121-137 (1996)
Del Pino, M, Felmer, P-L: Multi-peak bound states for nonlinear Schrödinger equations. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 15(2), 127-149 (1998)
Ambrosetti, A, Badiale, M, Cingolani, S: Semiclassical states of nonlinear Schrödinger equations. Arch. Ration. Mech. Anal. 140(3), 285-300 (1997)
Del Pino, M, Felmer, P-L: Semi-classical states of nonlinear Schrödinger equations: a variational reduction method. Math. Ann. 324(1), 1-32 (2002)
Del Pino, M, Felmer, P-L: Semi-classical states for nonlinear Schrödinger equations. J. Funct. Anal. 149(1), 245-265 (1997)
Ambrosetti, A, Malchiodi, A: Perturbation Methods and Semilinear Elliptic Problems on \(\mathbb{R}^{n}\). Progress in Mathematics, vol. 240. Birkhäuser, Basel (2006)
Ambrosetti, A, Malchiodi, A, Ni, W-M: Singularly perturbed elliptic equations with symmetry: existence of solutions concentrating on spheres. I. Commun. Math. Phys. 235(3), 427-466 (2003)
Del Pino, M, Kowalczyk, M, Wei, J-C: Concentration on curves for nonlinear Schrödinger equations. Commun. Pure Appl. Math. 60(1), 113-146 (2007)
Byeon, J, Wang, Z-Q: Standing waves with a critical frequency for nonlinear Schrödinger equations. Arch. Ration. Mech. Anal. 165(4), 295-316 (2002)
Byeon, J, Wang, Z-Q: Standing waves with a critical frequency for nonlinear Schrödinger equations. II. Calc. Var. Partial Differ. Equ. 18(2), 207-219 (2003)
Cao, D-M, Noussair, E-S: Multi-bump standing waves with a critical frequency for nonlinear Schrödinger equations. J. Differ. Equ. 203(2), 292-312 (2004)
Cao, D-M, Peng, S-J: Multi-bump bound states of Schrödinger equations with a critical frequency. Math. Ann. 336(4), 925-948 (2006)
Ding, Y-H, Lin, F-H: Solutions of perturbed Schrödinger equations with critical nonlinearity. Calc. Var. Partial Differ. Equ. 30(2), 231-249 (2007)
Ding, Y-H, Szulkin, A: Bound states for semilinear Schrödinger equations with sign-changing potential. Calc. Var. Partial Differ. Equ. 29(3), 397-419 (2007)
Ding, Y-H, Wei, J-C: Semiclassical states for nonlinear Schrödinger equations with sign-changing potentials. J. Funct. Anal. 251(2), 546-572 (2007)
Coti Zelati, V, Rabinowitz, P-H: Homoclinic type solutions for a semilinear elliptic PDE on \(\mathbf{R}^{n}\). Commun. Pure Appl. Math. 45(10), 1217-1269 (1992)
Coti Zelati, V, Rabinowitz, P-H: Homoclinic orbits for second order Hamiltonian systems possessing superquadratic potentials. J. Am. Math. Soc. 4(4), 693-727 (1991)
Alama, S, Li, Y-Y: On 'multibump' bound states for certain semilinear elliptic equations. Indiana Univ. Math. J. 41(4), 983-1026 (1992)
Cerami, G, Devillanova, G, Solimini, S: Infinitely many bound states for some nonlinear scalar field equations. Calc. Var. Partial Differ. Equ. 23(2), 139-168 (2005)
Cerami, G, Passaseo, D, Solimini, S: Infinitely many positive solutions to some scalar field equations with nonsymmetric coefficients. Commun. Pure Appl. Math. 66(3), 372-413 (2013)
Ackermann, N, Weth, T: Multibump solutions of nonlinear periodic Schrödinger equations in a degenerate setting. Commun. Contemp. Math. 7(3), 269-298 (2005)
Ackermann, N: A nonlinear superposition principle and multibump solutions of periodic Schrödinger equations. J. Funct. Anal. 234(2), 277-320 (2006)
Rabinowitz, P-H: A multibump construction in a degenerate setting. Calc. Var. Partial Differ. Equ. 5(2), 159-182 (1997)
Liu, Z-L, Wang, Z-Q: Multi-bump type nodal solutions having a prescribed number of nodal domains. I. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 22(5), 597-608 (2005)
Liu, Z-L, Wang, Z-Q: Multi-bump type nodal solutions having a prescribed number of nodal domains. II. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 22(5), 609-631 (2005)
Lin, L-S, Liu, Z-L: Multi-bubble solutions for equations of Caffarelli-Kohn-Nirenberg type. Commun. Contemp. Math. 13(6), 945-968 (2011)
Wang, J, Xu, J-X, Zhang, F-B, Chen, X-M: Existence of multi-bump solutions for a semilinear Schrödinger-Poisson system. Nonlinearity 26(5), 1377-1399 (2013)
Pi, H-R, Wang, C-H: Multi-bump solutions for nonlinear Schrödinger equations with electromagnetic fields. ESAIM Control Optim. Calc. Var. 19(1), 91-111 (2013)
Lin, L-S, Liu, Z-L: Multi-bump solutions and multi-tower solutions for equations on \(\mathbb{R}^{N}\). J. Funct. Anal. 257(2), 485-505 (2009)
Lin, L-S, Liu, Z-L, Chen, X-W: Multi-bump solutions for a semilinear Schrödinger equation. Indiana Univ. Math. J. 58(4), 1659-1689 (2009)
D'Aprile, T, Wei, J-C: Standing waves in the Maxwell-Schrödinger equation and an optimal configuration problem. Calc. Var. Partial Differ. Equ. 25(1), 105-137 (2006)
Ambrosetti, A, Badiale, M: Homoclinics: Poincaré-Melnikov type results via a variational approach. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 15(2), 233-252 (1998)
Berestycki, H, Lions, P-L: Nonlinear scalar field equations. II. Existence of infinitely many solutions. Arch. Ration. Mech. Anal. 82(4), 347-375 (1983)
Kwong, M-K: Uniqueness of positive solutions of \(\Delta u-u+u^{p}=0\) in \(\mathbf{R}^{n}\). Arch. Ration. Mech. Anal. 105(3), 243-266 (1989)
Ni, W-M, Takagi, I: Locating the peaks of least-energy solutions to a semilinear Neumann problem. Duke Math. J. 70(2), 247-281 (1993)
Cerami, G, Passaseo, D: Existence and multiplicity results for semilinear elliptic Dirichlet problems in exterior domains. Nonlinear Anal. 24(11), 1533-1547 (1995)
Bahri, A, Lions, P-L: On the existence of a positive solution of semilinear elliptic equations in unbounded domains. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 14(3), 365-413 (1997)
This work was supported by Natural Science Foundation of China (11201186, 11071038, 11171135), NSF of Jiangsu Province (BK2012282), Jiangsu University foundation grant (11JDG117), China Postdoctoral Science Foundation funded project (2012M511199, 2013T60499).
School of Economics and Management, Southeast University, Nanjing, 210096, China
Houqing Fang
Faculty of Science, Jiangsu University, Zhenjiang, Jiangsu, 212013, P.R. China
Houqing Fang & Jun Wang
Correspondence to Houqing Fang.
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.
Fang, H., Wang, J. Existence of positive solutions for a semilinear Schrödinger equation in \(\mathbb{R}^{N}\) . Bound Value Probl 2015, 9 (2015). https://doi.org/10.1186/s13661-014-0270-8
Received: 08 September 2014
MSC: 35J61

Keywords: multi-bump solution; semilinear Schrödinger equation; variational methods
\begin{document}
\title[Lie Biderivations on Triangular Algebras] {Lie Biderivations on Triangular Algebras}
\author{Xinfeng Liang, Dandan Ren and Feng Wei}
\address{Liang: School of Mathematics and Statistics, Anhui University of Science \& Technology, 232001, Huainan, P.R. China}
\email{[email protected]}
\address{Ren: School of Mathematics and Statistics, Anhui University of Science \& Technology, 232001, Huainan, P.R. China}
\email{[email protected]}
\address{Wei: School of Mathematics and Statistics, Beijing Institute of Technology, 100081, Beijing, P. R. China}
\email{[email protected]\\ [email protected]}
\begin{abstract} Let $\mathcal{T}$ be a triangular algebra over a commutative ring $\mathcal{R}$ and let $\varphi: \mathcal{T} \times \mathcal{T}\longrightarrow \mathcal{T}$ be an arbitrary Lie biderivation of $\mathcal{T}$. We address the question of describing the form of $\varphi$ in the current work. It is shown that, under certain mild assumptions, $\varphi$ is the sum of an inner biderivation, an extremal biderivation, and a central bilinear mapping. Our result applies immediately to block upper triangular algebras and Hilbert space nest algebras. \end{abstract}
\date{\today}
\subjclass[2000]{16W25, 15A78, 47L35}
\keywords{Lie biderivation, derivation, triangular algebra}
\thanks{The work of the first author is partially supported by Key Projects of Natural Science Research in Anhui Province (Grant No. KJ2019A0107, KJ2018A0082); the second author is partially supported by the National Natural Science Foundation of China (11801008), the Key Program of Scientific Research Fund for Young Teachers of AUST (Grant No. QN2017209), and the Talent Introduction Project of Anhui University of Science \& Technology (Grant No. 11690).}
\maketitle
\section{Introduction}\label{xxsec1}
Let $\mathcal{R}$ be a commutative ring with identity and $\mathcal{A}$ be an associative $\mathcal{R}$-algebra with center $\mathcal{Z(A)}$. An $\mathcal{R}$-linear mapping $d: \mathcal{A}\longrightarrow \mathcal{A}$ is called a \textit{derivation} if $d(xy)=d(x)y+xd(y)$ for all $x,y\in \mathcal{A}$, and is called a \textit{Lie derivation} if $$ d([x,y])=[d(x),y]+[x,d(y)] $$ for all $x,y\in \mathcal{A}$. An $\mathcal{R}$-linear mapping $d: \mathcal{A}\longrightarrow \mathcal{A}$ of the form $a\mapsto am-ma$ for some $m\in \mathcal{A}$ is said to be an \textit{inner derivation}. An $\mathcal{R}$-bilinear mapping $\varphi: \mathcal{A}\times \mathcal{A}\rightarrow \mathcal{A}$ is a \textit{biderivation} if it is a derivation with respect to both components, that is, $$ \varphi(xz, y)=\varphi(x,y)z+x\varphi(z,y)~\text{and}~\varphi(x,yz)=\varphi(x,y)z+y\varphi(x,z) $$ for all $x,y,z\in \mathcal{A}$. If the algebra $\mathcal{A}$ is noncommutative, then the mapping $\varphi(x, y)=\lambda[x, y]$ for all $x,y\in \mathcal{A}$ and some $\lambda\in \mathcal{Z(A)}$ is called an \textit{inner biderivation}. An $\mathcal{R}$-bilinear mapping $\varphi: \mathcal{A}\times \mathcal{A}\rightarrow \mathcal{A}$ is said to be an \textit{extremal biderivation} if it is of the form $\varphi(x,y)=[x,[y, a]]$ for all $x, y\in \mathcal{A}$ and some $a\in \mathcal{A}, a\notin \mathcal{Z(A)}$. An $\mathcal{R}$-bilinear mapping $\varphi: \mathcal{A}\times \mathcal{A}\rightarrow \mathcal{A}$ is a \textit{Lie biderivation} if it is a Lie derivation with respect to both components, that is, $$ \begin{aligned} \varphi([x,z],y)&=[\varphi(x,y),z]+[x,\varphi(z,y)]~\text{and}~\\ \varphi(x,[y,z])&=[\varphi(x,y),z]+[y,\varphi(x,z)] \end{aligned} $$ for all $x,y,z\in \mathcal{A}$.
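To see that these notions are consistent, one can check directly that an inner biderivation is indeed a biderivation: for $\lambda\in \mathcal{Z(A)}$ and all $x, y, z\in \mathcal{A}$, $$ \varphi(xz, y)=\lambda[xz, y]=\lambda[x, y]z+x\lambda[z, y]=\varphi(x, y)z+x\varphi(z, y), $$ and the identity in the second argument follows in the same way from $[x, yz]=[x, y]z+y[x, z]$.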
Suppose that $A$ and $B$ are two unital algebras over $\mathcal{R}$ and $M$ is a nonzero $(A, B)$-bimodule which is faithful as a left $A$-module and as a right $B$-module. Then one can define $$ \left[ \begin{array} [c]{cc} A & M\\ 0 & B\\ \end{array} \right]=\left\{ \hspace{2pt} \left[ \begin{array} [c]{cc} a & m\\ 0 & b\\ \end{array} \right] \hspace{2pt} \vline \hspace{2pt} a\in A, b\in B, m\in M \hspace{2pt} \right\} $$ to be an associative algebra under matrix-like addition and matrix-like multiplication. An algebra $\mathcal{T}$ is called a \textit{triangular algebra} if there exist algebras $A, B$ and a nonzero faithful $(A, B)$-bimodule $M$ such that $\mathcal{T}$ is (algebraically) isomorphic to $$ \left[ \begin{array} [c]{cc} A & M\\ O & B\\ \end{array} \right] $$ under matrix-like addition and matrix-like multiplication. Usually, we denote a triangular algebra by $\mathcal{T}=\left[\smallmatrix A & M\\ O & B \endsmallmatrix \right]$. This kind of algebra was first introduced by Chase in \cite{Chase}. He applied triangular algebras to exhibit the asymmetric behavior of semi-hereditary rings and constructed a classical example of a left semi-hereditary ring which is not right semi-hereditary. Harada referred to triangular algebras as generalized triangular matrix rings in \cite{Harada}, where he used them to study the structure of hereditary semi-primary rings. The definition of a triangular algebra is somewhat formal, and hence such algebras are also called formal triangular matrix algebras in the setting of noncommutative algebras \cite{HaghanyVara}.
The concept of biderivation originates from difference or functional equations and their inequalities instead of associative algebras. It was Maksa \cite{Maksa1980} who initially introduced the concept of biderivations. Vukman \cite{Vukman1989, Vukman1990} investigated biderivations on prime and semiprime rings. Bre\v sar et al. \cite{BresarMartindaleMiers1993} have shown that each biderivation $\varphi$ on a noncommutative prime ring $\mathcal{R}$ is of the form $\varphi(x, y)=\lambda [x, y]$, for some element $\lambda$ in the extended centroid of $\mathcal{R}$. It has turned out that this result can be applied to the problem of describing the form of commuting mappings. We encourage the reader to refer to the survey paper \cite{Bresar2004} where applications of biderivations to some other areas are provided. The study of commuting mappings and biderivations has deep roots in the structure theory of associative algebras, where it has proved to be influential and far-reaching, see \cite{Bresar2004} and references therein.
These mappings have been an active research topic in the theory of additive mappings of associative algebras since Bre\v sar's elegant works \cite{BresarMartindaleMiers1993, Bresar1995}. On the other hand, interest in studying these mappings on Lie algebras has been increasing more recently, see \cite{BresarZhao2018, LiuGuoZhao2018, Tang2018}.
The objective of this paper is to investigate Lie biderivations on triangular algebras. Many authors have made important and essential contributions to the related topics, see \cite{AbdiogluLee2017, Ahmed2016, Benkovic2009, BenkovicEremita2004, Bresar1995, BresarZhao2018, CheraghpourGhosseiri2019, DuWang2013, Eremita2017, Fosner2015, Ghosseiri2013, Ghosseiri2017, LiangRenWei2019, LiuGuoZhao2018, MGRSO2017, Tang2018, Wang2016, ZhangFengLiWu2006, ZhaoWangYao2009}. Cheung in \cite{Cheung2000} initiated the study of linear mappings of abstract triangular algebras and obtained a number of elegant results. He gave detailed descriptions concerning automorphisms, derivations, commuting mappings and Lie derivations of triangular algebras in \cite{Cheung2000, Cheung2001}. Benkovi\v c \cite{Benkovic2005} considered Jordan derivations of triangular matrices over a commutative ring with identity and proved that any Jordan derivation from the algebra of all upper triangular matrices into its arbitrary bimodule is the sum of a derivation and an antiderivation. Zhang and Yu \cite{ZhangYu2006} observed that each Jordan derivation on a 2-torsion free triangular algebra is a derivation. Generalized biderivations on nest algebras were also studied by Zhang et al. \cite{ZhangFengLiWu2006}. They provided a necessary and sufficient condition under which each generalized biderivation on a nest algebra of a complex separable Hilbert space is inner. Zhao et al. \cite{ZhaoWangYao2009} investigated biderivations on upper triangular matrix algebras over a commutative ring $\mathcal{R}$. They proved that each biderivation on the algebra $\mathcal{T}_n(\mathcal{R})$ of all upper triangular $n\times n$ matrices over $\mathcal{R}$ is the sum of an inner biderivation and an extremal biderivation.
Benkovi\v c \cite{Benkovic2009} paid special attention to biderivations on a certain class of triangular algebras. He obtained that a bilinear biderivation $\varphi$ of a triangular algebra $\mathcal{T}$ satisfying certain conditions (see the conditions (1)--(4) in Theorem \ref{xxsec3.2}) is of the form $\varphi(x, y)=\lambda[x, y]+[x, [y, r]]$ for some element $\lambda\in \mathcal{Z(T)}$ and some element $r\in \mathcal{T}$. On the other hand, Ghosseiri \cite{Ghosseiri2013} considered biderivations of an arbitrary triangular ring $\mathcal{T}$ (not assuming that $M$ is a faithful $(A, B)$-bimodule). He proved that each biderivation $\varphi: \mathcal{T}\times \mathcal{T}\longrightarrow \mathcal{T}$ can be decomposed into $\varphi= \tau+\psi+\delta$, where $\tau$ is a biderivation satisfying certain conditions, $\psi$ is an extremal biderivation, and $\delta$ is a special kind of biderivation. Building on Benkovi\v c's and Ghosseiri's works, we \cite{LiangRenWei2019} used the same conditions (see the conditions (1)--(4) in Theorem \ref{xxsec3.4}) to obtain the form of Jordan biderivations of a triangular algebra $\mathcal{T}$. Eremita \cite{Eremita2017} used the notion of the maximal left ring of quotients to describe the form of biderivations of a triangular ring. His distinctive approach permitted him to achieve a double purpose: generalizing Benkovi\v c's result on biderivations \cite[Theorem 4.11]{Benkovic2009} and refining Ghosseiri's result on biderivations \cite[Theorem 2.4]{Ghosseiri2013}. More recently, Ghosseiri and his collaborators \cite{CheraghpourGhosseiri2019, Ghosseiri2017} further characterized the structure of biderivations and superderivations on trivial extensions, which are natural generalizations of the corresponding results on triangular algebras. Wang et al. \cite{WAngYuChen2011} investigated biderivations on parabolic subalgebras of simple Lie algebras $\mathfrak{g}$ of rank $l$ over an algebraically closed field of characteristic zero. For an arbitrary parabolic subalgebra $\mathfrak{p}$ of $\mathfrak{g}$, they proved that a bilinear map $\varphi:\mathfrak{p}\times \mathfrak{p}\rightarrow \mathfrak{p}$ is a biderivation if and only if it is a sum of an inner and an extremal biderivation. Wang and Yu \cite{WangYu2013} observed that each biderivation of the Schr\"{o}dinger-Virasoro Lie algebra $\Omega$ over the complex field $\mathfrak{C}$ is inner. As an application of biderivations, they showed that every linear commuting map $\varphi$ on $\Omega$ has the form $\varphi(x)=\lambda x+\mathfrak{f}(x)M_0$, where $\lambda\in \mathfrak{C}$, $M_0$ is a basis of the one-dimensional center of $\Omega$, and $\mathfrak{f}$ is a linear function from $\Omega$ to $\mathfrak{C}$. Cheng et al. \cite{ChengWangSunZhang2017} proved that each skew-symmetric biderivation of the Lie algebra $\mathfrak{gca}$ over the complex field $\mathfrak{C}$ is inner; as an application, they showed that every linear commuting map $\phi$ on $\mathfrak{gca}$ has the form $\phi(x)=\lambda x$, where $\lambda\in \mathfrak{C}$. Liu et al. \cite{LiuGuoZhao2018} determined the biderivations of the block Lie algebras $\mathcal{B}(q)$ for all $q\in \mathfrak{C}$. More precisely, they proved that the space of biderivations of $\mathcal{B}(q)$ is spanned by inner biderivations and one outer biderivation. Applying this result, they also described all commuting maps on $\mathcal{B}(q)$.
This paper is devoted to the treatment of Lie biderivations of triangular algebras, and it is organized as follows. After the Introduction, the second section states some fundamental facts about triangular algebras and presents several classical examples. The central question of the current work is to describe the decomposition form of Lie biderivations on triangular algebras, which takes place in the main body, Section 3.
\section{Preliminaries}\label{xxsec2}
Let $\mathcal{R}$ be a commutative ring with identity. Let $A$ and $B$ be unital algebras over $\mathcal{R}$. Recall that an $(A, B)$-bimodule $M$ is \textit{faithful} if for any $a\in A$ and $b\in B$, $aM=0$ (resp. $Mb=0$) implies that $a=0$ (resp. $b=0$).
Let $A, B$ be unital associative algebras over $\mathcal{R}$ and $M$ be a unital $(A,B)$-bimodule, which is faithful as a left $A$-module and also as a right $B$-module. We denote the {\em triangular algebra} consisting of $A, B$ and $M$ by $$ \mathcal{T}=\left[ \begin{array} [c]{cc} A & M\\ O & B\\ \end{array} \right] . $$ Then $\mathcal{T}$ is an associative and noncommutative $\mathcal{R}$-algebra. The center $\mathcal{Z(T)}$ of $\mathcal{T}$ is (see \cite[Proposition 3]{Cheung2003}) $$ \mathcal{Z(T)}=\left\{ \left[ \begin{array} [c]{cc} a & 0\\ 0 & b \end{array} \right] \vline \hspace{3pt} am=mb,\ \forall\ m\in M \right\}. $$ Let us define two natural $\mathcal{R}$-linear projections $\pi_A:\mathcal{T}\rightarrow A$ and $\pi_B:\mathcal{T}\rightarrow B$ by $$ \pi_A: \left[ \begin{array} [c]{cc} a & m\\ 0 & b\\ \end{array} \right] \longmapsto a \quad \text{and} \quad \pi_B: \left[ \begin{array} [c]{cc} a & m\\ 0 & b\\ \end{array} \right] \longmapsto b. $$ It is easy to see that $\pi_A \left(\mathcal{Z(T)}\right)$ is a subalgebra of ${\mathcal Z}(A)$ and that $\pi_B(\mathcal{Z(T)})$ is a subalgebra of ${\mathcal Z}(B)$. Furthermore, there exists a unique algebraic isomorphism $\tau\colon \pi_A(\mathcal{Z(T)})\longrightarrow \pi_B(\mathcal{Z(T)})$ such that $am=m\tau(a)$ for all $a\in \pi_A(\mathcal{Z(T)})$ and for all $m\in M$.
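For instance, for the triangular algebra $\mathcal{T}=\left[\smallmatrix \mathcal{R} & \mathcal{R}\\ O & \mathcal{R} \endsmallmatrix \right]$ of upper triangular $2\times 2$ matrices over $\mathcal{R}$, the condition $am=mb$ for all $m\in \mathcal{R}$ forces $a=b$ (take $m=1$), and hence $$ \mathcal{Z(T)}=\left\{ \left[ \begin{array} [c]{cc} a & 0\\ 0 & a \end{array} \right] \vline \hspace{3pt} a\in \mathcal{R} \right\}\cong \mathcal{R}. $$ In this example the isomorphism $\tau$ above is simply the identity mapping of $\mathcal{R}$.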
Let $1$ (resp. $1^\prime$) be the identity of the algebra $A$ (resp. $B$), and let $I$ be the identity of the triangular algebra $\mathcal{T}$. We will use the following notations: $$ e=\left[ \begin{array} [c]{cc} 1 & 0\\ 0 & 0\\ \end{array} \right], \hspace{8pt} f=I-e=\left[ \begin{array} [c]{cc} 0 & 0\\ 0 & 1^\prime\\ \end{array} \right] $$ and $$ \mathcal{T}_{11}=e{\mathcal T}e, \hspace{6pt} \mathcal{T}_{12}=e{\mathcal T}f, \hspace{6pt} \mathcal{T}_{22}=f{\mathcal T}f. $$ Thus the triangular algebra $\mathcal{T}$ can be written as $$ \mathcal{T}=e{\mathcal T}e+e{\mathcal T}f+f{\mathcal T}f =\mathcal{T}_{11}+\mathcal{T}_{12}+\mathcal{T}_{22}. $$ Here, $\mathcal{T}_{11}$ and $\mathcal{T}_{22}$ are subalgebras of $\mathcal{T}$ which are isomorphic to $A$ and $B$, respectively. $\mathcal{T}_{12}$ is a $(\mathcal{T}_{11}, \mathcal{T}_{22})$-bimodule which is isomorphic to the $(A, B)$-bimodule $M$. It should be remarked that $\pi_A(\mathcal{Z(T)})$ and $\pi_B(\mathcal{Z(T)})$ are isomorphic to $e\mathcal{Z(T)}e$ and $f\mathcal{Z(T)}f$, respectively. Then there is an algebra isomorphism $\tau\colon e\mathcal{Z(T)}e\longrightarrow f\mathcal{Z(T)}f$ such that $am=m\tau(a)$ for all $a\in e\mathcal{Z(T)}e$ and all $m\in e\mathcal{T}f$.
Let us see several classical triangular algebras which will be frequently invoked in the sequel.
\subsection{Upper triangular matrix algebras} \label{xxsec2.1}
Let $\mathcal{R}$ be a commutative ring with identity. We denote the set of all $p\times q$ matrices over $\mathcal{R}$ by $M_{p\times q}(\mathcal{R})$ and denote the set of all $n\times n$ upper triangular matrices over $\mathcal{R}$ by $T_n(\mathcal{R})$. For $n\geq 2$ and each $1\leq k \leq n-1$, the \textit{upper triangular matrix algebra} $T_n(\mathcal{R})$ can be written as $$ T_n(\mathcal{R})=\left[ \begin{array} [c]{cc} T_k(\mathcal{R}) & M_{k\times (n-k)}(\mathcal{R})\\ O & T_{n-k}(\mathcal{R}) \end{array} \right] . $$
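For instance, when $n=3$ and $k=1$ this decomposition reads $$ T_3(\mathcal{R})=\left[ \begin{array} [c]{cc} \mathcal{R} & M_{1\times 2}(\mathcal{R})\\ O & T_{2}(\mathcal{R}) \end{array} \right], $$ where $M_{1\times 2}(\mathcal{R})$ is faithful as a left $\mathcal{R}$-module and as a right $T_{2}(\mathcal{R})$-module, so that $T_3(\mathcal{R})$ is indeed a triangular algebra in the above sense.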
\subsection{Block upper triangular matrix algebras} \label{xxsec2.2}
Let $\mathcal{R}$ be a commutative ring with identity. For each positive integer $n$ and each positive integer $m$ with $m\leq n$, we denote by $\bar{d}=(d_1, \cdots, d_i, \cdots, d_m)\in \mathbb{N}^m$ an ordered $m$-vector of positive integers such that $n=d_1+\cdots +d_i+\cdots+d_m$. The \textit{block upper triangular matrix algebra} $B^{\bar{d}}_n(\mathcal{R})$ is a subalgebra of $M_n(\mathcal{R})$ of the form $$ B^{\bar{d}}_n(\mathcal{R})=\left[ \begin{array} [c]{ccccc} M_{d_1}(\mathcal{R}) & \cdots & M_{d_1\times d_i}(\mathcal{R}) & \cdots & M_{d_1\times d_m}(\mathcal{R})\\ & \ddots & \vdots & & \vdots \\
& & M_{d_i}(\mathcal{R}) & \cdots & M_{d_i\times d_m}(\mathcal{R}) \\
& O & & \ddots & \vdots \\
& & & & M_{d_m}(\mathcal{R}) \\ \end{array} \right] = $$ $$ \left[\smallmatrix \boxed{\smallmatrix r_{1,1} & \cdots & r_{1,d_1}\\ \vdots & \ddots & \vdots\\ r_{d_1,1}& \cdots & r_{d_1,d_1}\endsmallmatrix} & \cdots & \boxed{\smallmatrix r_{1, x+1} & \cdots & r_{1, x+d_i}\\ \vdots & \ddots & \vdots\\ r_{d_1,x+1}& \cdots & r_{d_1, x+d_i} \endsmallmatrix} & \cdots & \boxed{\smallmatrix r_{1, y+1} & \cdots & r_{1, y+d_m}\\ \vdots & \ddots & \vdots\\ r_{d_1, y+1}& \cdots & r_{d_1, y+d_m} \endsmallmatrix}\\
& \ddots & \vdots \hspace{48pt}\vdots & & \vdots \hspace{48pt}\vdots \\
& &
\boxed{\smallmatrix r_{x+1,x+1} & \cdots & r_{x+1, x+d_i}\\ \vdots & \ddots & \vdots\\ r_{x+d_i, x+1}& \cdots & r_{x+d_i,x+d_i} \endsmallmatrix} & \cdots & \boxed{\smallmatrix r_{x+1,y+1} & \cdots & r_{x+1,y+d_m}\\ \vdots & \ddots & \vdots\\ r_{x+d_i,y+1}& \cdots & r_{x+d_i, y+d_m} \endsmallmatrix} \\
& & & \ddots & \vdots \hspace{48pt}\vdots\\
& O & & &
\boxed{\smallmatrix r_{y+1, y+1} & \cdots & r_{y+1,y+d_m}\\ \vdots & \ddots & \vdots\\ r_{y+d_m,y+1}& \cdots & r_{y+d_m,y+d_m} \endsmallmatrix} \endsmallmatrix \right] . $$ Note that the full matrix algebra $M_n(\mathcal{R})$ of all $n\times n$ matrices over $\mathcal{R}$ and the upper triangular matrix algebra $T_n(\mathcal{R})$ of all $n\times n$ upper triangular matrices over $\mathcal{R}$ are two special cases of block upper triangular matrix algebras. If $n\geq 2$ and $B^{\bar{d}}_n(\mathcal{R})\neq M_n(\mathcal{R})$, then $B^{\bar{d}}_n(\mathcal{R})$ is a triangular algebra and can be represented as $$ B^{\bar{d}}_n(\mathcal{R})=\left[ \begin{array} [c]{cc} B^{\bar{d}_1}_j(\mathcal{R}) & M_{j\times (n-j)}(\mathcal{R})\\ O_{(n-j)\times j} & B^{\bar{d}_2}_{n-j}(\mathcal{R})\\ \end{array} \right], $$ where $1\leq j < m$ and $\bar{d}_1\in \mathbb{N}^j, \bar{d}_2\in \mathbb{N}^{m-j}$.
\subsection{Nest algebras} \label{xxsec2.3}
Let $\mathbf{H}$ be a complex Hilbert space and $\mathcal{B}(\mathbf{H})$ be the algebra of all bounded linear operators on $\mathbf{H}$. Let $I$ be an index set. A \textit{nest} is a set $\mathcal{N}$ of closed subspaces of $\mathbf{H}$ satisfying the following conditions: \begin{enumerate} \item[(1)] $0, \mathbf{H}\in \mathcal{N}$; \item[(2)] If $N_1, N_2\in \mathcal{N}$, then either $N_1\subseteq N_2$ or $N_2\subseteq N_1$; \item[(3)] If $\{N_i\}_{i\in I}\subseteq \mathcal{N}$, then $\bigcap_{i\in I}N_i\in \mathcal{N}$; \item[(4)] If $\{N_i\}_{i\in I}\subseteq \mathcal{N}$, then the norm closure of the linear span of $\bigcup_{i\in I} N_i$ also lies in $\mathcal{N}$. \end{enumerate} If $\mathcal{N}=\{0, \mathbf{H}\}$, then $\mathcal{N}$ is called a trivial nest, otherwise it is called a non-trivial nest.
The \textit{nest algebra} associated with $\mathcal{N}$ is the set $$ \mathcal{T}(\mathcal{N})=\{\hspace{3pt} T\in
\mathcal{B}(\mathbf{H})\hspace{3pt}| \hspace{3pt} T(N)\subseteq N \hspace{3pt} {\rm for} \hspace{3pt} {\rm all} \hspace{3pt} N\in \mathcal{N}\} . $$ A nontrivial nest algebra is a triangular algebra. Indeed, if $N\in \mathcal{N}\backslash \{0, {\mathbf H}\}$ and $E$ is the orthogonal projection onto $N$, then $\mathcal{N}_1=E(\mathcal{N})$ and $\mathcal{N}_2=(1-E)(\mathcal{N})$ are nests of $N$ and $N^{\perp}$, respectively. Moreover, $\mathcal{T}(\mathcal{N}_1)=E\mathcal{T}(\mathcal{N})E, \mathcal{T}(\mathcal{N}_2)=(1-E)\mathcal{T}(\mathcal{N})(1-E)$ are nest algebras and $$ \mathcal{T}(\mathcal{N})=\left[ \begin{array} [c]{cc} \mathcal{T}(\mathcal{N}_1) & E\mathcal{T}(\mathcal{N})(1-E)\\ O & \mathcal{T}(\mathcal{N}_2)\\ \end{array} \right]. $$ Note that any finite dimensional nest algebra is isomorphic to a complex block upper triangular matrix algebra. We refer the reader to \cite{Davidson1988} for the theory of nest algebras.
\subsection{Matrix Incidence algebras} \label{xxsec2.4}
Let $\mathbb{K}$ be a field and $A$ be a unital algebra over $\mathbb{K}$. Let $X$ be a partially ordered set with the partial order $\leq$. We define the \textit{incidence algebra} of $X$ over $A$ as $$
I(X, A)=\{f: X\times X \longrightarrow A \hspace{2pt} | \hspace{2pt} f(x,y)=0 \hspace{4pt} {\rm if} \hspace{4pt} x\nleq y\} $$ with algebraic operations given by \begin{align*} (f+g)(x, y) & =f(x, y)+g(x, y),\\ (f * g)(x, y) & =\sum_{x \leq z\leq y}f(x, z)g(z, y), \\ (k \cdot f)(x, y)& =k \cdot f(x, y) \end{align*} for all $f, g\in I(X, A), k\in \mathbb{K}$ and $x, y, z\in X$. Obviously, $f$ is an $A$-valued function on $\{(x,y)\in X\times X \vert x\leq y\}$. The product $*$ is usually called \textit{convolution} in function theory. In particular, if $X$ is a finite partially ordered set with $n$ elements, then $I(X, A)$ is isomorphic to a subalgebra of the algebra $M_n(A)$ of square matrices over $A$ with elements $[a_{ij}]_{n\times n}\in M_n(A)$ satisfying $a_{ij}=0$ if $i\nleq j$, for some partial order $\leq$ defined on the partially ordered set (poset) $\{1, \cdots, n\}$ \cite[Proposition 1.2.4]{SpiegelDonnell1997}. More precisely, $I(X, A)$ is isomorphic to an upper triangular matrix algebra with entries $A$ or 0. We will call such incidence algebras \textit{matrix incidence algebras}. In fact, any incidence algebra arising from a finite partially ordered set is isomorphic to some matrix incidence algebra $I(X, A)$, where $\leq$ is consistent with the natural order. Nevertheless, we cannot say that each matrix incidence algebra is in general a triangular algebra in which $M$ is a faithful $(A, B)$-bimodule; not all incidence algebras meet this condition. If $X$ is a finite partially ordered set which is connected, then each matrix incidence algebra $I(X, A)$ can be considered as a triangular algebra.
To illustrate this conclusion, let us see an illustrative example. Let $X=\{1, 2, 3, 4, 5, 6, 7\}$ be a partially ordered set whose relations are generated by $$ \{1\leq 3, 2\leq 3, 3\leq 4, 4\leq 5, 5\leq 6, 5\leq 7\}. $$ We represent this partially ordered set $X$ by the following diagram $$ X=\left[ \begin{array}{c} \xymatrix{ 1 \ar[dr] & & & & 6\\ & 3 \ar[r] & 4 \ar[r] & 5 \ar[ur] \ar[dr] & \\ 2 \ar[ur] & & & & 7 } \end{array} \right]. $$ Then we have $$ I(X, A)\cong \left[ \begin{array}{ccccccc} A & 0 & A & A & A & A & A\\ 0 & A & A & A & A & A & A\\ 0 & 0 & A & A & A & A & A\\ 0 & 0 & 0 & A & A & A & A\\ 0 & 0 & 0 & 0 & A & A & A\\ 0 & 0 & 0 & 0 & 0 & A & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & A \end{array} \right] . $$ The incidence algebra of a partially ordered set (poset) $X$ is the algebra of functions from the segments of $X$ into a $\mathbb{K}$-algebra $A$, which extends the various convolutions in algebras of arithmetic functions. Incidence algebras, in fact, were first considered by Ward \cite{Ward1937} as generalized algebras of arithmetic functions. Rota and Stanley developed incidence algebras as the fundamental structures of enumerative combinatorial theory and allied areas of arithmetic function theory. The theory of M\"{o}bius functions, including the classical M\"{o}bius function of number theory and the combinatorial inclusion-exclusion formula, is established in the context of incidence algebras. We refer the reader to \cite{Stanley1997} for all these. On the other hand, the algebraic properties of incidence algebras are quite striking as well, including the fact that the lattice of ideals (in the finite-dimensional case) is distributive, and that the partial order can be recovered from the algebra. The latter has led to a complete description of the automorphisms and derivations of the algebra \cite{Stanley1970}.
In the theory of operator algebras, incidence algebras are referred to as ``digraph algebras" or ``finite dimensional CSL algebras". Let $\mathbf{H}$ be a finite dimensional Hilbert space and $\mathcal{B}(\mathbf{H})$ the algebra of all bounded linear operators on $\mathbf{H}$. A digraph algebra is a subalgebra $A$ of $\mathcal{B}(\mathbf{H})$ which contains a maximal abelian self-adjoint subalgebra $\mathcal{D}$ of $\mathcal{B}(\mathbf{H})$. Since $\mathcal{D}$ is maximal abelian, the invariant projections for $A$, ${\rm Lat}A$, are elements of $\mathcal{D}$ and so are mutually commuting. Thus $A$ is a CSL-algebra (the abbreviation CSL denotes `commutative subspace lattice'). Obviously, $A$ is finite dimensional; on the other hand, every finite dimensional CSL-algebra acts on a finite dimensional Hilbert space and contains a masa. The term digraph algebra refers to the fact that associated with $A$ there is a directed graph on the set of vertices $\{1, 2, \cdots, n\}$. This graph contains all the self loops. Then $A$ contains the matrix unit $e_{ij}$ if and only if there is a (directed) edge from $j$ to $i$ in the digraph.
Let $I(X, A)$ be an incidence algebra of $X$ over $A$. The identity element $\epsilon$ of $I(X, A)$ is given by $\epsilon(x, y)=\delta_{xy}$ for all $x\leq y$, where $\delta_{xy}\in \{0, 1\}$ is the Kronecker sign. For each pair $x, y\in X$ with $x\leq y$ we define $\epsilon_{xy}(u, v)=\delta_{xu}\delta_{yv}$ for all $u\leq v$. Then $\epsilon_{xy}*\epsilon_{zu}=\delta_{yz}\epsilon_{xu}$ and $\epsilon_{xy}a=a\epsilon_{xy}$ for all $a\in A$. Let $1\leq n \leq \infty$, let $X=\{1, 2, \cdots, n\}$ if $n<\infty$, or $X=\{1, 2, \cdots\}$ if $n=\infty$, and endow $X$ with the usual linear ordering. Then $I(X, A)$ can be identified with the upper triangular matrix algebra $T_n(A)$ by identifying $\epsilon_{xy}$ with the matrix $[\delta_{xi}\delta_{yj}]_{i, j=1}^n$. Note that in the case of $T_{\infty}(A)$ the matrices are infinite. As another extreme case, let $X=\{1, 2, \cdots, n\}$ with $1\leq n < \infty$. If $X$ has the pre-order $\leq^\prime$, where $i\leq^\prime j$ for each pair $(i, j)\in X\times X$, then $I(X, A)\cong M_n(A)$, the full matrix algebra of $n\times n$ matrices over $A$. We now deduce some results for the upper triangular matrix algebras $T_n(A)$ with $1\leq n \leq \infty$ and for the full matrix algebras $M_n(A)$ with $1\leq n < \infty$ from the results for incidence algebras.
\section{Lie Biderivations} \label{xxsec3}
This section is the main part of our work; it is devoted to the study of Lie biderivations of triangular algebras. In order to carry out this study, we first prove the following important formula.
\begin{lemma}\label{xxsec3.1} Let $\mathcal{A}$ be an associative algebra over a commutative ring $\mathcal{R}$
and let $\phi: \mathcal{A}\times \mathcal{A}\rightarrow \mathcal{A}$ be a Lie biderivation on $\mathcal{A}$. Then $\phi$ has the following property: $$ [\phi(x,a),[b,y]]+[\phi(x,b),[y,a]]=[\phi(y,a),[x,b]]+[\phi(y,b),[x,a]] $$ for all $a,b,x,y\in \mathcal{A}$. \end{lemma}
\begin{proof} Let $\phi: \mathcal{A}\times \mathcal{A}\rightarrow \mathcal{A}$ be a Lie biderivation of $\mathcal{A}$. For arbitrary $x,y,a,b\in \mathcal{A}$, let us compute $\phi([x,y],[a,b])$. Since $\phi$ is a Lie derivation with respect to the first component, we have $$ \begin{aligned} \phi([x,y],[a,b])&=[\phi(x,[a,b]),y]+[x,\phi(y,[a,b])]\\ &=[[a,\phi(x,b)]+[\phi(x,a),b],y]+[x,[\phi(y,a),b]+[a,\phi(y,b)]]\\ &=[a\phi(x,b)-\phi(x,b)a+\phi(x,a)b-b\phi(x,a),y]\\ &+[x,\phi(y,a)b-b\phi(y,a)+a\phi(y,b)-\phi(y,b)a]\\ &=(a\phi(x,b)-\phi(x,b)a+\phi(x,a)b-b\phi(x,a))y\\ &-y(a\phi(x,b)-\phi(x,b)a+\phi(x,a)b-b\phi(x,a))\\ &+x(\phi(y,a)b-b\phi(y,a)+a\phi(y,b)-\phi(y,b)a)\\ &-(\phi(y,a)b-b\phi(y,a)+a\phi(y,b)-\phi(y,b)a)x; \end{aligned} \eqno{(3.1)} $$ Likewise, since the mapping $\phi$ is a Lie derivation with respect to the second component as well, we have $$ \begin{aligned} \phi([x,y],[a,b])&=[\phi([x,y],a),b]+[a,\phi([x,y],b)]\\ &=[[\phi(x,a),y]+[x,\phi(y,a)],b]+[a,[\phi(x,b),y]+[x,\phi(y,b)]]\\ &=[\phi(x,a)y-y\phi(x,a)+x\phi(y,a)-\phi(y,a)x,b]\\ &+[a,\phi(x,b)y-y\phi(x,b)+x\phi(y,b)-\phi(y,b)x]\\ &=(\phi(x,a)y-y\phi(x,a)+x\phi(y,a)-\phi(y,a)x)b\\ &-b(\phi(x,a)y-y\phi(x,a)+x\phi(y,a)-\phi(y,a)x)\\ &+a(\phi(x,b)y-y\phi(x,b)+x\phi(y,b)-\phi(y,b)x)\\ &-(\phi(x,b)y-y\phi(x,b)+x\phi(y,b)-\phi(y,b)x)a. \end{aligned} \eqno{(3.2)} $$
Comparing $(3.1)$ and $(3.2)$, one can obtain $$ [\phi(x,a),[b,y]]+[\phi(y,b),[a,x]]=[\phi(x,b),[a,y]]+[\phi(y,a),[x,b]] $$ for arbitrary $x,y,a,b\in \mathcal{A}$, which is a rearrangement of the asserted identity. \end{proof}
Let $M$ be a unital $(A,B)$-bimodule. A mapping $f:M\rightarrow M$ satisfying $f(am)=af(m)$ and $f(mb)=f(m)b$ for all $a\in A, m\in M, b\in B$ is called a \textit{bimodule homomorphism}. A bimodule homomorphism $f:M\rightarrow M$ is of the \textit{standard form} if there exist $a_0\in\mathcal{Z}(A)$ and $b_0\in \mathcal{Z}(B)$ such that $$ f(m)=a_0m+mb_0 \eqno{(3.3)} $$ for all $m\in M$.
Below we give the main theorem of this paper.
\begin{theorem}\label{xxsec3.2} Let $\mathcal{T}= \left[\smallmatrix A & M\\ O & B \endsmallmatrix \right]$ be a triangular algebra over a commutative ring $\mathcal{R}$ and let $\phi: \mathcal{T}\times \mathcal{T}\longrightarrow \mathcal{T}$ be a Lie biderivation. If the following conditions hold: \begin{itemize}
\item [(i)] $\pi_{A}(\mathcal{Z}(\mathcal{T}))=\mathcal{Z}(A)$ and $\pi_{B}(\mathcal{Z}(\mathcal{T}))=\mathcal{Z}(B)$;
\item [(ii)]at least one of the algebras $A$ and $B$ is noncommutative;
\item [(iii)]each bimodule homomorphism $\mathfrak{f}:M\rightarrow M$ is of the standard form;
\item [(iv)]if $\alpha a=0$ with $\alpha\in \mathcal{Z}(A)$ and $0\neq a\in A$, then $\alpha=0$. \end{itemize} Then every Lie biderivation $\phi: \mathcal{T}\times \mathcal{T}\rightarrow \mathcal{T}$ is of the form $$ \phi(x,y)=\lambda_0[x,y]+[x,[y,\phi(e,e)]]+\mu(x,y) $$ where $\lambda_0\in \mathcal{Z}(\mathcal{T})$ and $\mu:\mathcal{T}\times \mathcal{T}\rightarrow \mathcal{Z}(\mathcal{T})$ is a central mapping,
for arbitrary $x,y\in \mathcal{T}$. \end{theorem}
In order to prove the main theorem, we need the following series of lemmas.
\begin{lemma}\label{xxsec3.3} Let $\mathcal{T}= \left[\smallmatrix A & M\\ O & B \endsmallmatrix \right]$ be a triangular algebra over a commutative ring $\mathcal{R}$ and let $\phi: \mathcal{T}\times \mathcal{T}\longrightarrow \mathcal{T}$ be a Lie biderivation. \begin{enumerate}
\item[(1)] $\phi(0,x)=\phi(x,0)=0$;
\item[(2)] $\phi(1,x)=e\phi(1,x)e \oplus f\phi(1,x)f\in \mathcal{Z}(\mathcal{T})$ ~\text{and}~
$\phi(x,1)=e\phi(x,1)e \oplus f\phi(x,1)f\in \mathcal{Z}(\mathcal{T})$ ;
\item[(3)] $e\phi(e,e)f=-e\phi(f,e)f=-e\phi(e,f)f=e\phi(f,f)f$ \end{enumerate} for all $x\in \mathcal{T}$. \end{lemma}
\begin{proof}
\item[(1)] Since $\phi$ is a Lie derivation with respect to the first component, we have $$ \begin{aligned} \phi(0,x)&=\phi([0,0],x)\\ &=[0,\phi(0,x)]+[\phi(0,x),0]=0 \end{aligned} \eqno{(3.4)} $$ for all $x\in \mathcal{T}$.
\item[(2)] Since $\phi$ is a Lie derivation with respect to the first component and also using the relation $(3.4)$, we have $$ \begin{aligned} 0=\phi(0,x)&=\phi([1,y],x)\\ &=[\phi(1,x),y]+[1,\phi(y,x)]=[\phi(1,x),y], \end{aligned} $$ for arbitrary $x, y\in \mathcal{T}$. Because of the arbitrariness of the element $y\in \mathcal{T}$, one can obtain $\phi(1,x)\in \mathcal{Z}(\mathcal{T})$. By an analogous argument, we have $\phi(x,1)\in \mathcal{Z}(\mathcal{T})$ for all $x\in \mathcal{T}$.
\item[(3)] In view of $(2)$, one can obtain $e\phi(1,x)f=0$ for all $x\in \mathcal{T}$. Further using the relation $e+f=1$ and taking $x=e$ and $x=f$ respectively, we get $$ e\phi(e,e)f=-e\phi(f,e)f \ \ \text{and} \ \ e\phi(e,f)f=-e\phi(f,f)f . \eqno{(3.5)} $$ By an analogous manner, according to the relation $(2)$, we obtain $e\phi(x,1)f=0$ for all $x\in \mathcal{T}$. Furthermore, we have $$ e\phi(e,e)f=-e\phi(e,f)f \ \ \text{and} \ \ e\phi(f,e)f=-e\phi(f,f)f. \eqno{(3.6)} $$ Comparing $(3.5)$ with $(3.6)$, we have $$ e\phi(e,e)f=-e\phi(f,e)f=-e\phi(e,f)f=e\phi(f,f)f. $$ \end{proof}
\begin{lemma}\label{xxsec3.4} With notations as above, we have \begin{enumerate}
\item [(1)]$\phi(a,m)=\alpha_0am=-\phi(m,a)$;
\item [(2)]$\phi(m,b)=\alpha_0mb=-\phi(b,m)$ \end{enumerate} for all $a\in A, b\in B, m\in M$. \end{lemma}
\begin{proof} Since $\phi$ is a Lie derivation with respect to the second component, we have $$ \begin{aligned} \phi(a,m)=\phi(a,[e,m])&=[\phi(a,e),m]+[e,\phi(a,m)]\\ &=\phi(a,e)m-m\phi(a,e)+e\phi(a,m)-\phi(a,m)e \end{aligned} $$ for all $a\in A, m\in M$. Multiplying the above equation by $e$ on the left side and by $e$ on the right side, we have $e\phi(a,m)e=0$. Similarly, one can obtain $f\phi(a,m)f=0$ and $$ e\phi(a,e)m=m\phi(a,e)f, \eqno(3.7) $$ for all $a\in A, m\in M$. Based on the above three formulas, we can get $$ \phi(a,m)=e\phi(a,m)f \eqno(3.8) $$ for all $a\in A, m\in M$.
Since $\phi$ is a Lie derivation with respect to the second component, we have $$ \begin{aligned} 0=\phi(a,[b,e])&=[\phi(a,b),e]+[b,\phi(a,e)]\\ &=\phi(a,b)e-e\phi(a,b)+b\phi(a,e)-\phi(a,e)b \end{aligned} $$ for all $a\in A,b\in B$. Multiplying the above equation by $f$ on the left side and $f$ on the right side, we have $b\phi(a,e)f=f\phi(a,e)b$, and then $$ f\phi(a,e)f\in \mathcal{Z}(B) \eqno(3.9) $$ for all $a\in A, b\in B$. Combining $(3.7)$ and $(3.9)$ together with the faithfulness of $A$-left module $M$, one can obtain $$ e\phi(a,e)e\oplus f\phi(a,e)f\in \mathcal{Z}(\mathcal{T}) \eqno(3.10) $$ for all $a\in A, m\in M$.
Similarly, we have $$ e\phi(a,f)e\oplus f\phi(a,f)f\in \mathcal{Z}(\mathcal{T}) \eqno(3.11) $$ for all $a\in A, m\in M$.
Since $\phi$ is a Lie derivation with respect to the first component, we have $$ \begin{aligned} 0=\phi(0,m)=\phi([a,e],m)&=[\phi(a,m),e]+[a,\phi(e,m)]\\ &=\phi(a,m)e-e\phi(a,m)+a\phi(e,m)-\phi(e,m)a \end{aligned} \eqno(3.12) $$ for all $a\in A, m\in M$. Multiplying in $(3.12)$ by $e$ on the left side and by $f$ on the right side, we have $$ e\phi(a,m)f=a\phi(e,m)f \eqno(3.13) $$ for all $a\in A, m\in M$. Considering $(3.8)$ and $(3.13)$, we conclude that $$ \phi(a,m)=a\phi(e,m)f\in M $$ for all $a\in A, m\in M$. In analogous manner, one can check that $$ \begin{aligned} \phi(m,a)&=a\phi(m,e)f~\text{and}~ e\phi(e,a)e \oplus f\phi(e,a)f\in \mathcal{Z}(\mathcal{T})~\text{and}~ e\phi(f,a)e\oplus f\phi(f,a)f\in \mathcal{Z}(\mathcal{T});\\ \phi(b,m)&=e\phi(f,m)b~\text{and}~ e\phi(b,e)e\oplus f\phi(b,e)f\in \mathcal{Z}(\mathcal{T})~\text{and}~e\phi(b,f)e\oplus f\phi(b,f)f\in \mathcal{Z}(\mathcal{T});\\ \phi(m,b)&=e\phi(m,f)b~\text{and}~ e\phi(e,b)e\oplus f\phi(e,b)f\in \mathcal{Z}(\mathcal{T})~\text{and}~e\phi(f,b)e\oplus f\phi(f,b)f\in \mathcal{Z}(\mathcal{T}) \end{aligned} \eqno(3.14) $$ for all $a\in A, b\in B, m\in M$.
Consider the mapping $\mathfrak{h}:M\rightarrow M$ defined by $\mathfrak{h}(m)=e\phi(e,m)f$ for all $m\in M$; then $\mathfrak{h}$ is a bimodule homomorphism, that is, a homomorphism of $M$ both as a left $A$-module and as a right $B$-module.
Namely, for arbitrary $a\in A, b\in B, m\in M$, since $\phi$ is a Lie derivation with respect to the second argument, in light of the relations $(3.10)$ and $(3.14)$ we have $$ \begin{aligned} \mathfrak{h}(am)&=e\phi(e,am)f\\ &=e\phi(e,[a,m])f\\ &=e([\phi(e,a),m]+[a,\phi(e,m)])f\\ &=e(\phi(e,a)m-m\phi(e,a)+a\phi(e,m)-\phi(e,m)a)f\\ &=m(\eta(e\phi(e,a)e)-f\phi(e,a)f)+a\phi(e,m)f\\ &=a\phi(e,m)f\\ &=a\mathfrak{h}(m) \end{aligned} $$ and $$ \begin{aligned} \mathfrak{h}(mb)&=e\phi(e,mb)f\\ &=e\phi(e,[m,b])f\\ &=e([\phi(e,m),b]+[m,\phi(e,b)])f\\ &=e(\phi(e,m)b-b\phi(e,m)+m\phi(e,b)-\phi(e,b)m)f\\ &=e\phi(e,m)fb+m(f\phi(e,b)f-\eta(e\phi(e,b)e))\\ &=e\phi(e,m)fb\\ &=\mathfrak{h}(m)b. \end{aligned} $$
The assumption $\text{(iii)}$ implies that the bimodule homomorphism $\mathfrak{h}$ is of the \textbf{standard form} $$ \mathfrak{h}(m)=a_0m+mb_0 =e\phi(e,m)f $$ for some $a_0\in \mathcal{Z}(A)$ and $b_0\in \mathcal{Z}(B)$ and all $m\in M$. Now we use the assumption $\text{(i)}$ to see that $a_0\in \pi_{A}(\mathcal{Z}(\mathcal{T}))$ and $b_0\in \pi_{B}(\mathcal{Z}(\mathcal{T}))$. We may write $$ \mathfrak{h}(m)=\phi(e,m)=e\phi(e,m)f=(a_0+\eta^{-1}(b_0))m=\alpha_0m $$ for all $m\in M$, where $\alpha_0=a_0+\eta^{-1}(b_0)\in \pi_{A}(\mathcal{Z}(\mathcal{T}))$.
Likewise, one can define a mapping $\mathfrak{g}:M\rightarrow M$ by $\mathfrak{g}(m)=e\phi(m,e)f$ for all $m\in M$, which is again a bimodule homomorphism. So there exists $\beta_0\in \pi_{A}(\mathcal{Z}(\mathcal{T}))$ such that $\phi(m,e)=\beta_0 m$ for all $m\in M$.
Let us next show that $$ \mathfrak{h}(m)=\alpha_0m=-\mathfrak{g}(m), ~\text{i.e.,} ~ e\phi(e,m)f=\alpha_0m=-e\phi(m,e)f $$ for all $m\in M$. That is, we need to prove that $\alpha_0+\beta_0=0$.
According to the assumption $\text{(ii)}$, we may assume that $A$ is a noncommutative algebra. Choose $a,a^\prime\in A$ such that $[a,a^\prime]\neq 0$. Since $\phi(e,m)=\alpha_0m$ and $\phi(m,e)=\beta_0m$, by Lemma \ref{xxsec3.1} we get $$ [\varphi(a,a^\prime),[e,m]]+[\varphi(a,e),[a^\prime,m]]=[\varphi(m,e),[a,a^\prime]]+[\varphi(m,a^\prime),[a,e]] $$ for all $a,a^\prime\in A, m\in M$. In view of the relation $(3.10)$, we have $$ [\varphi(a,a^\prime),m]=-[a,a^\prime]\varphi(m,e)f =-[a,a^\prime]\beta_0 m, \eqno{(3.15)} $$ for all $a,a^\prime\in A, m\in M$.
Adopting similar methods and using Lemma \ref{xxsec3.1}, we have $$ [\varphi(a,a^\prime),[m,e]]+[\varphi(a,m),[a^\prime,e]]=[\varphi(e,m),[a,a^\prime]]+[\varphi(e,a^\prime),[a,m]] $$ for all $a,a^\prime\in A, m\in M$. In view of the relation $(3.10)$, we obtain $$ [\varphi(a,a^\prime),m]=[a,a^\prime]\varphi(e,m)f=[a,a^\prime]\alpha_0 m \eqno{(3.16)} $$ for all $a,a^\prime\in A, m\in M$.
Comparing the equalities $(3.15)$ and $(3.16)$ yields $(\alpha_0+\beta_0)[a,a^\prime]m=0$ for all $m\in M$. The faithfulness of the left $A$-module $M$ now implies $(\alpha_0+\beta_0)[a,a^\prime]=0$. Since $[a,a^\prime]\neq 0$ we conclude using the condition \text{(iv)} that $\alpha_0+\beta_0=0$. Considering $\alpha_0+\beta_0=0$ and $\varphi(f,m)+\varphi(e,m)=0$ together with $\varphi(m,e)+\varphi(m,f)=0$, we see that $$ \varphi(m,f)=\alpha_0 m=-\varphi(f,m) $$ for all $m\in M$.
For arbitrary elements $a\in A$ and $m\in M$, we obtain $$ \varphi(a,m)=a\varphi(e,m)f=\alpha_0 am. $$ This proves the first equality. The remaining equalities can be proven in an analogous manner. \end{proof}
\begin{lemma}\label{xxsec3.5} With notations as above, we have \begin{enumerate}
\item [(1)] $\phi(a,b)=e\phi(a,b)e-a\phi(e,e)b+f\phi(a,b)f$, where $e\phi(a,b)e\oplus f\phi(a,b)f \in \mathcal{Z}(\mathcal{T})$;
\item [(2)]
$\phi(b,a)=e\phi(b,a)e-a\phi(f,f)b+f\phi(b,a)f$, where $e\phi(b,a)e\oplus f\phi(b,a)f \in \mathcal{Z}(\mathcal{T})$ \end{enumerate} for all $a\in A, b\in B$. \end{lemma}
\begin{proof}
\item [(1)]Since $\phi$ is a Lie derivation with respect to the first component, we have $$ \begin{aligned} 0&=\phi([e,a],b)\\ &=[e,\phi(a,b)]+[\phi(e,b),a]\\ &=e\phi(a,b)-\phi(a,b)e+\phi(e,b)a-a\phi(e,b) \end{aligned} $$ for all $a\in A, b\in B$. Multiplying the above equation by $e$ on the left side and by $f$ on the right side, we can obtain $$ e\phi(a,b)f=a\phi(e,b)f \eqno{(3.17)} $$ for all $a\in A, b\in B$.
Since $\phi$ is a Lie derivation with respect to the second component, one can have $$ \begin{aligned} 0&=\phi([b_1,a],b_2)\\ &=[b_1,\phi(a,b_2)]+[\phi(b_1,b_2),a]\\ &=b_1\phi(a,b_2)-\phi(a,b_2)b_1+\phi(b_1,b_2)a-a\phi(b_1,b_2) \end{aligned} $$ for all $b_1, b_2\in B, a\in A$. Multiplying the above equation by $e$ on the left side and by $e$ on the right side, we can obtain $e\phi(b_1,b_2)a=a\phi(b_1,b_2)e$ for all $b_1, b_2\in B$. And then $$ e\phi(b_1,b_2)e \in \mathcal{Z}(A) \eqno{(3.18)} $$ for all $b_1, b_2\in B$. Multiplying the above equation by $e$ on the left side and by $f$ on the right side, we can obtain $$ e\phi(a,b_2)b_1=-a\phi(b_1,b_2)f; \eqno{(3.19)} $$ for all $b_1, b_2\in B, a\in A$. Multiplying the above equation by $f$ on the left side and by $f$ on the right side, we can obtain the relation $b_1\phi(a,b_2)f=f\phi(a,b_2)b_1$ for all $a\in A, b_1, b_2\in B$. And then $$ f\phi(a,b_2)f\in \mathcal{Z}(B) \eqno{(3.20)} $$ for all $a\in A, b_1, b_2\in B$.
Similarly, since $\phi$ is a Lie derivation with respect to the second component, we have $$ \begin{aligned} 0&=\phi(a_1,[b,a_2])\\ &=[\phi(a_1,b),a_2]+[b,\phi(a_1,a_2)]\\ &=\phi(a_1,b)a_2-a_2\phi(a_1,b)+b\phi(a_1,a_2)-\phi(a_1,a_2)b \end{aligned} $$ for all $a_1, a_2\in A, b\in B$. Multiplying the above equation by $e$ on the left side and by $e$ on the right side, one can check $e\phi(a_1,b)a_2=a_2\phi(a_1,b)e$, i.e., $$ e\phi(a_1,b)e \in \mathcal{Z}(A) \eqno{(3.21)} $$ for all $a_1, a_2\in A, b\in B$. Multiplying the above equation by $f$ on the left side and by $f$ on the right side, we can obtain $b\phi(a_1,a_2)f=f\phi(a_1,a_2)b$, and then $$ f\phi(a_1,a_2)f\in \mathcal{Z}(B) \eqno{(3.22)} $$ for all $a_1, a_2\in A, b\in B$. Multiplying the above equation by $e$ on the left side and $f$ on the right side, we can obtain $$ a_2\phi(a_1,b)f=-e\phi(a_1,a_2)b \eqno{(3.23)} $$ for all $a_1, a_2\in A, b\in B$. Combining $(3.19)$ and $(3.23)$ with the conclusion $(2)$ coming from Lemma \ref{xxsec3.3}, we have $$ e\phi(a,b)f=-a\phi(e,e)b $$ for all $a\in A, b\in B$. We therefore have $$ \phi(a,b)=e\phi(a,b)e-a\phi(e,e)b+f\phi(a,b)f $$ for all $a\in A, b\in B$.
Now, we prove the following relation $$ e\phi(a,b)e\oplus f\phi(a,b)f \in \mathcal{Z}(\mathcal{T}) $$ for all $a\in A, b\in B$.
Namely, according to Lemma \ref{xxsec3.1}, we obtain the relation $$ [\phi(a,b),[m,e]]+[\phi(a,m),[e,b]]=[\phi(e,b),[m,a]]+[\phi(e,m),[a,b]] $$ for all $a\in A, b\in B, m\in M$. In view of the relation $e\phi(e,b)e\oplus f\phi(e,b)f \in \mathcal{Z}(\mathcal{T})$, we can obtain $$ \begin{aligned} e\phi(a,b)m-m\phi(a,b)f&=-e\phi(e,b)am+am\phi(e,b)f\\ &=(\eta^{-1}(f\phi(e,b)f)-e\phi(e,b)e)am=0 \end{aligned} $$ for all $a\in A, b\in B, m\in M$. Using the relations $(3.20), (3.21)$ and the faithfulness of the left $A$-module $M$, we have $$ e\phi(a,b)e\oplus f\phi(a,b)f\in \mathcal{Z}(\mathcal{T}) $$ for all $a\in A, b\in B$.
\item [(2)] In a manner analogous to $(1)$, we have $$ \phi(b,a)=e\phi(b,a)e-a\phi(f,f)b+f\phi(b,a)f, $$ where $e\phi(b,a)e\oplus f\phi(b,a)f \in \mathcal{Z}(\mathcal{T})$ for all $a\in A, b\in B$.
\end{proof}
\begin{lemma}\label{xxsec3.6} With notations as above, we have $$\phi(m,n)=0$$ for all $m,n\in M$. \end{lemma}
\begin{proof} For all $m,n\in M$, we have $$ \begin{aligned} \varphi(m,n)&=\varphi([e,m],n)\\ &=[\varphi(e,n),m]+[e,\varphi(m,n)]\\ &=\varphi(e,n)m-m\varphi(e,n)+e\varphi(m,n)-\varphi(m,n)e. \end{aligned} $$ Multiplying the above equation by $e$ on the left and by $e$ on the right side, we have $e\varphi(m,n)e=0$ for all $m, n\in M$. Similarly, one can obtain $f\varphi(m,n)f=0$ for all $m, n\in M$. By invoking the above relations, we immediately see that $$ \varphi(m,n)=e\varphi(m,n)f \in M \eqno{(3.24)} $$ for all $m, n\in M$.
Fix an element $m\in M$; then the mapping $\mathfrak{k}:M\rightarrow M$ defined by $\mathfrak{k}(n)=\varphi(m,n)=e\varphi(m,n)f$ for all $n\in M$ is a bimodule homomorphism.
In fact, using Lemma \ref{xxsec3.4} and $(3.24)$, we have $$ \begin{aligned} \mathfrak{k}(an)&=e\varphi(m,[a,n])f\\ &=e([\varphi(m,a),n]+[a,\varphi(m,n)])f\\ &=e(\varphi(m,a)n-n\varphi(m,a)+a\varphi(m,n)-\varphi(m,n)a)f\\ &=a\varphi(m,n)f\\ &=a\mathfrak{k}(n) \end{aligned} $$ and $$ \begin{aligned} \mathfrak{k}(nb)&=e\varphi(m,nb)f\\ &=e\varphi(m,[n,b])f\\ &=e([\varphi(m,n),b]+[n,\varphi(m,b)])f\\ &=e(\varphi(m,n)b-b\varphi(m,n)+n\varphi(m,b)-\varphi(m,b)n)f\\ &=e\varphi(m,n)b\\ &=\mathfrak{k}(n)b \end{aligned} $$ for all $a\in A, b\in B, n\in M$. For fixed $m\in M$, it follows from the assumption \text{(iii)} that there exists $\alpha_m\in \mathcal{Z}(A)$ such that $$ \varphi(m,n)=\mathfrak{k}(n)=\alpha_m n ~\text{for all} ~n\in M. \eqno{(3.25)} $$
Without loss of generality, we may assume that $A$ is a noncommutative algebra, and let $a,a^\prime\in A$ be fixed elements such that $[a,a^\prime]\neq 0$. Using Lemma \ref{xxsec3.1}, we have $$ [\varphi(a,a^\prime), [n,m]]+[\varphi(a,n), [a^\prime,m]]=[\varphi(m,n),[a,a^\prime]]+[\varphi(m,a^\prime), [a,n]], $$ for all $m, n\in M$. Using $(3.24)$ and $(3.25)$, we may write $$ 0=[\varphi(m,n),[a,a^\prime]]=-[a,a^\prime]\varphi(m,n)=-[a,a^\prime]\alpha_m n $$ for all $m,n\in M$. The faithfulness of the left $A$-module implies $[a,a^\prime]\alpha_m=0$ for every $m\in M$. In view of the assumption $\text{(iv)}$, we obtain that $\alpha_m=0$ for all $m\in M$. We therefore conclude that $\varphi(m,n)=0$ for all $m,n\in M$. \end{proof}
\begin{lemma}\label{xxsec3.7} With notations as above, we have \begin{enumerate}
\item [(1)] For arbitrary $a_1,a_2\in A$, we have $$ \begin{aligned} \phi(a_1,a_2)&=e\phi(a_1,a_2)e+a_1a_2\phi(e,e)f+f\phi(a_1,a_2)f\\ &=e\phi(a_1,a_2)e+a_2a_1\phi(e,e)f+f\phi(a_1,a_2)f, \end{aligned} $$ where $f\phi(a_1,a_2)f \in \mathcal{Z}(B)$ and $ e\phi(a_1,a_2)e=\eta^{-1}(f\phi(a_1,a_2)f)+\alpha_0[a_1,a_2] $;
\item [(2)]For arbitrary $b_1,b_2\in B$, we have $$ \begin{aligned} \phi(b_1,b_2)&=e\phi(b_1,b_2)e+e\phi(e,e)b_1b_2+f\phi(b_1,b_2)f\\ &=e\phi(b_1,b_2)e+e\phi(e,e)b_2b_1+f\phi(b_1,b_2)f, \end{aligned} $$ where $e\phi(b_1,b_2)e \in \mathcal{Z}(A)$ and $ f\phi(b_1,b_2)f=\eta(e\phi(b_1,b_2)e)+\eta(\alpha_0)[b_1,b_2] $ for all $b_1,b_2\in B$. \end{enumerate} \end{lemma}
\begin{proof} Conclusions $(1)$ and $(2)$ can be obtained in similar ways. For the sake of conciseness, we only prove conclusion $(1)$ in detail.
\textbf{(1).} Combining $(3.19)$ and $(3.23)$ together with the relation $e\phi(1,x)f=0$ from Lemma \ref{xxsec3.3}, we obtain $$ e\phi(a_1,a_2)f=-a_2a_1\phi(e,f)f=a_2a_1\phi(e,e)f \eqno{(3.26)} $$ for all $a_1,a_2\in A$. By similar methods, we have $$ e\phi(a_1,a_2)f=a_1a_2\phi(e,e)f \eqno{(3.27)} $$ for all $a_1,a_2\in A$. In view of the relations $(3.26)$ and $(3.27)$, we arrive at $$ e\phi(a_1,a_2)f=a_2a_1\phi(e,e)f=a_1a_2\phi(e,e)f \eqno{(3.28)} $$ for all $a_1,a_2\in A$.
According to Lemma \ref{xxsec3.1}, we have $$ [\phi(a_1,a_2),[e,m]]+[\phi(a_1,e),[m,a_2]]=[\phi(m,a_2),[e,a_1]]+[\phi(m,e),[a_1,a_2]], $$ for all $a_1, a_2\in A, m\in M$. In view of the relation $(3.10)$, we arrive at $$ [\phi(a_1,a_2),m]-[\phi(a_1,e),a_2m]=-[a_1,a_2]\phi(m,e) \eqno{(3.29)} $$ for all $a_1,a_2\in A, m\in M$. Combining $(3.29)$ and $(3.10)$ together with Lemma \ref{xxsec3.4}, we can achieve $$ (e\phi(a_1,a_2)e-\eta^{-1}(f\phi(a_1,a_2)f)-\alpha_0[a_1,a_2])m=0 $$ for all $a_1, a_2\in A, m\in M$. The faithfulness of the left $A$-module $M$ now implies $$ e\phi(a_1,a_2)e=\eta^{-1}(f\phi(a_1,a_2)f)+\alpha_0[a_1,a_2] $$ for all $a_1,a_2\in A$.
Based on the above process, one can check that $$ \phi(a_1,a_2)=\eta^{-1}(f\phi(a_1,a_2)f)+f\phi(a_1,a_2)f+a_1a_2\phi(e,e)f+\alpha_0[a_1,a_2] $$ for all $a_1,a_2\in A$.
\textbf{(2).} In a manner analogous to $(1)$, we have $$ \phi(b_1,b_2)=e\phi(b_1,b_2)e+e\phi(e,e)b_1b_2+f\phi(b_1,b_2)f, $$ where $$ f\phi(b_1,b_2)f=\eta(e\phi(b_1,b_2)e)+\eta(\alpha_0)[b_1,b_2] $$ for all $ b_1,b_2\in B$. \end{proof}
Let us now give the proof of our main theorem.
{\noindent}{\bf Proof of Theorem \ref{xxsec3.2}.}
To complete the proof of the main theorem, we first establish the following equation: $$ aa^\prime\phi(e,e)f-a\phi(e,e)b^\prime-a^\prime\phi(e,e)b+e\phi(e,e)b^\prime b=[x,[y,\phi(e,e)]] $$ for all $x=a+m+b$ and $y=a^\prime+m^\prime+b^\prime$ with $a,a^\prime\in A, b,b^\prime\in B, m,m^\prime\in M$.
In fact, let $x=\left[ \smallmatrix a & m\\
& b\\ \endsmallmatrix \right]\in \mathcal{T}$ and $y=\left[ \smallmatrix a^\prime & m^\prime\\
& b^\prime\\ \endsmallmatrix \right]\in \mathcal{T}$, Taking into account the relation $e\phi(e,e)e\oplus f\phi(e,e)f\in \mathcal{Z}(\mathcal{T})$, we arrive at $$ \begin{aligned} &[x,[y,\phi(e,e)]]\\ &=[\left[ \smallmatrix a & m\\
& b\\ \endsmallmatrix \right],[\left[ \smallmatrix a^\prime & m^\prime\\
& b^\prime\\ \endsmallmatrix \right],\left[ \smallmatrix e\phi(e,e)e & e\phi(e,e)f\\
& f\phi(e,e)f\\ \endsmallmatrix \right]]] \\ &=[\left[ \smallmatrix a & m\\
& b\\ \endsmallmatrix \right],\left[ \smallmatrix 0 & a^\prime\phi(e,e)f-e\phi(e,e)b^\prime\\
& 0\\ \endsmallmatrix \right]]\\ &=\left[ \smallmatrix 0 & a(a^\prime\phi(e,e)f-e\phi(e,e)b^\prime)-(a^\prime\phi(e,e)f-e\phi(e,e)b^\prime)b\\
& 0\\ \endsmallmatrix \right]\\ &=\left[ \smallmatrix 0 & aa^\prime\phi(e,e)f-a\phi(e,e)b^\prime-a^\prime\phi(e,e)b+e\phi(e,e)b^\prime b\\
& 0\\ \endsmallmatrix \right], \end{aligned} $$ for all $a, a^\prime\in A, b, b^\prime\in B, m, m^\prime\in M$.
Let $x=a+m+b$ and $y=a^\prime+m^\prime+b^\prime$. According to the bilinearity of the mapping $\phi$, we get the following decomposition $$ \begin{aligned} \phi(x,y)=&\phi(a,a^\prime)+\phi(a,m^\prime)+\phi(a,b^\prime)\\ &+\phi(m,a^\prime)+\phi(m,m^\prime)+\phi(m,b^\prime)\\ &+\phi(b,a^\prime)+\phi(b,m^\prime)+\phi(b,b^\prime)\\ &=\alpha_0[a,a^\prime]+\alpha_0am^\prime-\alpha_0a^\prime m+\alpha_0 mb^\prime-\alpha_0 m^\prime b+\eta(\alpha_0)[b,b^\prime]\\ &+\eta^{-1}(f\phi(a,a^\prime)f)+f\phi(a,a^\prime)f+e\phi(b,b^\prime)e+\eta(e\phi(b,b^\prime)e)\\ &+e\phi(a,b^\prime)e+f\phi(a,b^\prime)f+e\phi(b,a^\prime)e+f\phi(b,a^\prime)f\\ &+aa^\prime\phi(e,e)f-a\phi(e,e)b^\prime-a^\prime\phi(e,e)b+e\phi(e,e)b^\prime b\\ &=\lambda_0[x,y]+[x,[y,\phi(e,e)]]+\mu(x,y) \end{aligned} $$ where $\lambda_0=\alpha_0\oplus\eta(\alpha_0)\in \mathcal{Z}(\mathcal{T})$ and $\mu:\mathcal{T}\times \mathcal{T}\rightarrow \mathcal{Z}(\mathcal{T})$ is a central mapping such that $$ \begin{aligned} \mu(x,y)&=\eta^{-1}(f\phi(a,a^\prime)f)+f\phi(a,a^\prime)f+e\phi(b,b^\prime)e+\eta(e\phi(b,b^\prime)e)\\ &+e\phi(a,b^\prime)e+f\phi(a,b^\prime)f+e\phi(b,a^\prime)e+f\phi(b,a^\prime)f\\ &=\left[ \smallmatrix \eta^{-1}(f\phi(a,a^\prime)f)+e\phi(b,b^\prime)e+e\phi(a,b^\prime)e+e\phi(b,a^\prime)e & 0\\
& f\phi(a,a^\prime)f+\eta(e\phi(b,b^\prime)e)+f\phi(a,b^\prime)f+f\phi(b,a^\prime)f\\ \endsmallmatrix \right]\\ &\in \mathcal{Z}(\mathcal{T}) \end{aligned} $$ for all $a,a^\prime\in A, b,b^\prime\in B, m,m^\prime\in M$. This completes the proof.
As direct corollaries of Theorem \ref{xxsec3.2}, we obtain descriptions of the Lie biderivations of (block) upper triangular matrix algebras and of nest algebras.
\begin{corollary}\label{xxsec3.4} Let $C$ be a commutative domain with identity. If $n\geq 3$, then each Lie biderivation of the (block) upper triangular matrix algebra $B^{\bar{k}}_{n}(C)$ is the sum of an extremal biderivation, an inner biderivation, and a central mapping. In particular, every Lie biderivation of the upper triangular matrix algebra $T_n(C)$ is of this form.
\end{corollary}
\begin{corollary}\label{xxsec3.5} Let $\mathcal{N}$ be a nest of a Hilbert space $H$, where dim$H\geq 3$. Then each Lie biderivation $\varphi$ of the nest algebra $\mathcal{T}(\mathcal{N})$ is the sum of an extremal biderivation, an inner biderivation, and a central mapping. \end{corollary}
\addtocontents{toc}{
\protect\settowidth{\protect\@tocsectionnumwidth}{}
\protect\addtolength{\protect\@tocsectionnumwidth}{0em}}
\end{document} | arXiv |
\begin{definition}[Definition:Cosine/Real Function]
The real function $\cos: \R \to \R$ is defined as:
{{begin-eqn}}
{{eqn | l = \cos x
| r = \sum_{n \mathop = 0}^\infty \paren {-1}^n \frac {x^{2 n} } {\paren {2 n}!}
| c =
}}
{{eqn | r = 1 - \frac {x^2} {2!} + \frac {x^4} {4!} - \frac {x^6} {6!} + \cdots + \paren {-1}^n \frac {x^{2 n} } {\paren {2 n}!} + \cdots
| c =
}}
{{end-eqn}}
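A quick numerical sanity check of this series (an informal Python sketch, our own addition, not part of the formal definition): the partial sums converge rapidly to the library cosine.

import math

def cos_series(x, terms=20):
    # Partial sum of sum_(n >= 0) (-1)^n x^(2n) / (2n)!
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

for x in (0.0, 1.0, math.pi / 3, 2.5):
    assert abs(cos_series(x) - math.cos(x)) < 1e-12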
\end{definition} | ProofWiki |
1 Definition of terminology
2 Why Does the Term Structure of Interest Rates Matter?
3 How Does the Term Structure of Interest Rates Work?
4 The U.S. Treasury Yield Curve
5 Description
6 Error Term Use in a Formula
7 Noun[change]
8 The Outlook for the Overall Credit Market
10 What Is an Error Term?
10.1 Key Takeaways
11 Source
12 Linear Regression, Error Term, and Stock Analysis
13 Understanding Term Structure Of Interest Rates
14 What Do Error Terms Tell Us?
Definition of terminology
§ 149. The term terminology is used with several different meanings in modern linguistics. In accordance with the structure of this term (it is a combination of the word term and the Greek logos — the word, the doctrine), it denotes the doctrine of terms: the section of linguistics (lexicology) dealing with the study of terms, or the corresponding scientific (and applied) discipline. However, the term is rarely used in this sense.
In recent times, some linguists have used the word «terminology» to describe this field of study.
In linguistics, terminology most often refers to the set of terms used in a particular language or in a certain sphere of human activity.
In the latter meaning, i.e. to denote the set of terms of a particular area of knowledge or field of activity, the composite term «terminological system», or the more complex formation «terminology-based system» built on it, is often used.
Why Does the Term Structure of Interest Rates Matter?
In general, when the term structure of interest rates curve is positive, this indicates that investors desire a higher rate of return for taking the increased risk of lending their money for a longer time period.
Many economists also believe that a steep positive curve means that investors expect strong future economic growth with higher future inflation (and thus higher interest rates), and that a sharply inverted curve means that investors expect sluggish economic growth with lower future inflation (and thus lower interest rates). A flat curve generally indicates that investors are unsure about future economic growth and inflation.
There are three central theories that attempt to explain why yield curves are shaped the way they are.
1. The «expectations theory» says that expectations of increasing short-term interest rates are what create a normal curve (and vice versa).
2. The «liquidity preference hypothesis» says that investors always prefer the higher liquidity of short-term debt and therefore any deviance from a normal curve will only prove to be a temporary phenomenon.
3. The «segmented market hypothesis» says that different investors adhere to specific maturity segments. This means that the term structure of interest rates is a reflection of prevailing investment policies.
Because the term structure of interest rates is generally indicative of future interest rates, which are indicative of an economy's expansion or contraction, yield curves and changes in these curves can provide a great deal of information. In the 1990s, Duke University professor Campbell Harvey found that inverted yield curves have preceded the last five U.S. recessions.
Changes in the shape of the term structure of interest rates can also have an impact on portfolio returns by making some bonds relatively more or less valuable compared to other bonds. These concepts are part of what motivate analysts and investors to study the term structure of interest rates carefully.
How Does the Term Structure of Interest Rates Work?
The term structure of interest rates shows the various yields that are currently being offered on bonds of different maturities. It enables investors to quickly compare the yields offered on short-term, medium-term and long-term bonds.
Note that the chart does not plot coupon rates against a range of maturities — that graph is called the spot curve.
The term structure of interest rates takes three primary shapes. If short-term yields are lower than long-term yields, the curve slopes upwards and the curve is called a positive (or «normal») yield curve. Below is an example of a normal yield curve:
If short-term yields are higher than long-term yields, the curve slopes downwards and the curve is called a negative (or «inverted») yield curve. Below is example of an inverted yield curve:
Finally, a flat term structure of interest rates exists when there is little or no variation between short and long-term yield rates. Below is an example of a flat yield curve:
It is important that only bonds of similar risk are plotted on the same yield curve. The most common type of yield curve plots Treasury securities because they are considered risk-free and are thus a benchmark for determining the yield on other types of debt.
The shape of the curve changes over time. Investors who are able to predict how term structure of interest rates will change can invest accordingly and take advantage of the corresponding changes in bond prices.
The term structure of interest rates is calculated and published by The Wall Street Journal, the Federal Reserve, and a variety of other financial institutions.
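As a rough illustration of the three shapes described above, here is a small Python sketch (our own toy example with made-up numbers, not real market data) that classifies a curve by comparing its short and long ends:

def classify_curve(points, flat_tolerance=0.10):
    # points: list of (maturity_in_years, yield_in_percent), any order
    points = sorted(points)
    short_end, long_end = points[0][1], points[-1][1]
    if abs(long_end - short_end) <= flat_tolerance:
        return "flat"
    return "normal" if long_end > short_end else "inverted"

normal = [(0.25, 1.6), (2, 2.1), (5, 2.4), (10, 2.8), (30, 3.1)]
inverted = [(0.25, 3.0), (2, 2.6), (5, 2.3), (10, 2.2), (30, 2.1)]
print(classify_curve(normal))    # normal
print(classify_curve(inverted))  # inverted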
The U.S. Treasury Yield Curve
This yield curve is considered the benchmark for the credit market, as it reports the yields of risk-free fixed income investments across a range of maturities. In the credit market, banks and lenders use this benchmark as a gauge for determining lending and savings rates. Yields along the U.S. Treasury yield curve are primarily influenced by the Federal Reserve's federal funds rate. Other yield curves can also be developed based upon a comparison of credit investments with similar risk characteristics.
Most often, the Treasury yield curve is upward-sloping. One basic explanation for this phenomenon is that investors demand higher interest rates for longer-term investments as compensation for investing their money in longer-duration investments. Occasionally, long-term yields may fall below short-term yields, creating an inverted yield curve that is generally regarded as a harbinger of recession.
Description
The get_term function applies filters to a term object. It is possible to get a term object from the database before applying the filters.
The $term ID must belong to $taxonomy for the term to be fetched from the database. A failure may be captured by the hooks; on failure, the return value is the same value that $wpdb returns from its get_row method.
There are two hooks. One is applied to every term and is named 'get_term'; the second is specific to the taxonomy and is named 'get_$taxonomy'. Both hooks receive the term object and the taxonomy name as parameters, and both are expected to return a Term object.
'get_term' hook – Takes two parameters: the term object and the taxonomy name. Must return a term object. Used in get_term() as a catch-all filter for every $term.
'get_$taxonomy' hook – Takes two parameters: the term object and the taxonomy name. Must return a term object. $taxonomy will be the taxonomy name, so for example, if 'category', the filter name would be 'get_category'. Useful for custom taxonomies or for plugging into default taxonomies.
Error Term Use in a Formula
An error term essentially means that the model is not completely accurate and results in differing results during real-world applications. For example, assume there is a multiple linear regression function that takes the following form:
Y = αX + βρ + ϵ

where:
α, β = Constant parameters
X, ρ = Independent variables
ϵ = Error term
When the actual Y differs from the expected or predicted Y in the model during an empirical test, then the error term does not equal 0, which means there are other factors that influence Y.
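To make the error term tangible, here is a short Python sketch (synthetic data of our own, purely illustrative) that generates data from the model above, refits it by least squares, and recovers the residuals:

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=100)
rho = rng.uniform(0, 5, size=100)
eps = rng.normal(0, 0.5, size=100)      # the true error term
Y = 2.0 * X + 3.0 * rho + eps

A = np.column_stack([X, rho])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)
residuals = Y - A @ coef                # estimated error terms

print(coef)                             # close to (2.0, 3.0)
print(abs(residuals.mean()))            # close to 0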
Noun[change]
Singular: term; plural: terms.
(countable) A term is a word or group of words, usually one with a special meaning in a particular field or area.
They didn't know the meaning of the technical terms the doctor was using.
What is the definition of this term?
At age 10, she only knew that it was a term of abuse, a swear word.
She spoke in glowing terms about the progress they had made.
(plural) You use «in terms of» to introduce a topic or subject.
Simply spending more money is not useful in terms of getting people back to work.
In terms of both numbers of people and size, California is one of the largest States in America.
(plural) If you talk about something in general/economic/human, etc. terms, you talk about it from that point of view.
The changes are great in dollar terms, but the human cost is too high.
(countable) A term is a period of time.
The President of Mexico is limited to serving a single term of six years.
He faces a maximum prison term of 15 years and $8.5 million in fines.
The price may be up and down, but over the long term it will certainly rise.
The period of employment comes to term in three years.
She carried the baby to term.
(countable) A term is one of the times of year when school classes are held, usually September to December and January to April or June, also June to August.
(countable) (mathematics) A term is a number or symbol in a formula.
The equation only worked when we included an extra constant term.
(plural) If you compare money or numbers in real terms, you compare the value on an equal basis.
Today we are paying less for gasoline in real terms than we have at any time since 1918.
Although the absolute number of trucking accidents is up slightly, they are down 10 percent in the decade in real terms .
(plural) If you come to terms with something that is not nice, you accept it.
He hasn't come to terms with the fact that she's not coming back.
(usually plural) (law) The terms of a contract or purchase are its rules, conditions and limits.
The company has announced generally agreement on a new contract though the exact terms still have to be worked out.
Under the terms of the purchase, we get the house on July 14th.
(plural) If two or more people or groups are on good/equal/bad etc. terms, they have a relationship that is good/equal/bad etc.
(plural) If two or more people or groups are on speaking terms, they are not close but do speak to each other.
The Outlook for the Overall Credit Market
The term structure of interest rates and the direction of the yield curve can be used to judge the overall credit market environment. A flattening of the yield curve means longer-term rates are falling in comparison to short-term rates, which could have implications for a recession. When short-term rates begin to exceed long-term rates, the yield curve is inverted, and a recession is likely occurring or approaching.
When longer-term rates fall below shorter-term rates, the outlook for credit over the long term is weak. This is often consistent with a weak or recessionary economy, which is defined by two consecutive periods of negative growth in the gross domestic product (GDP). While other factors, including foreign demand for U.S. Treasuries, can also result in an inverted yield curve, historically, an inverted yield curve has been an indicator of an impending recession in the United States.
Tom is on good terms with John.
Apnoea is a medical term which comes from Greek; it literally means «without breath».
The president's term of office is four years.
The second term came to an end.
Global warming, the term is so outdated; it's called climate change now.
It is impossible for me to finish my term paper by tomorrow.
I want to come to terms with him.
I'm on good terms with him.
Which is closer to Arabic in terms of sound, Spanish or Portuguese?
What are the terms of the contract?
According to the terms of the contract, your payment was due on May 31st.
You see everything in terms of money.
They settled on the terms of the contract.
He thinks of everything in terms of profit.
I am on good terms with him.
What Is an Error Term?
An error term is a residual variable produced by a statistical or mathematical model, which is created when the model does not fully represent the actual relationship between the independent variables and the dependent variables. As a result of this incomplete relationship, the error term is the amount at which the equation may differ during empirical analysis.
The error term is also known as the residual, disturbance, or remainder term, and is variously represented in models by the letters e, ε, or u.
An error term appears in a statistical model, like a regression model, to indicate the uncertainty in the model.
The error term is a residual variable that accounts for a lack of perfect goodness of fit.
Heteroskedastic refers to a condition in which the variance of the residual term, or error term, in a regression model varies widely.
Source
File: wp-includes/taxonomy.php
function get_term( $term, $taxonomy = '', $output = OBJECT, $filter = 'raw' ) {
	if ( empty( $term ) ) {
		return new WP_Error( 'invalid_term', __( 'Empty Term.' ) );
	}

	if ( $taxonomy && ! taxonomy_exists( $taxonomy ) ) {
		return new WP_Error( 'invalid_taxonomy', __( 'Invalid taxonomy.' ) );
	}

	if ( $term instanceof WP_Term ) {
		$_term = $term;
	} elseif ( is_object( $term ) ) {
		if ( empty( $term->filter ) || 'raw' === $term->filter ) {
			$_term = sanitize_term( $term, $taxonomy, 'raw' );
			$_term = new WP_Term( $_term );
		} else {
			$_term = WP_Term::get_instance( $term->term_id );
		}
	} else {
		$_term = WP_Term::get_instance( $term, $taxonomy );
	}

	if ( is_wp_error( $_term ) ) {
		return $_term;
	} elseif ( ! $_term ) {
		return null;
	}

	// Ensure for filters that this is not empty.
	$taxonomy = $_term->taxonomy;

	/**
	 * Filters a taxonomy term object.
	 *
	 * @since 4.4.0 `$_term` is now a `WP_Term` object.
	 *
	 * @param WP_Term $_term    Term object.
	 * @param string  $taxonomy The taxonomy slug.
	 */
	$_term = apply_filters( 'get_term', $_term, $taxonomy );

	/**
	 * Filters a taxonomy term object.
	 *
	 * The dynamic portion of the filter name, `$taxonomy`, refers
	 * to the slug of the term's taxonomy.
	 */
	$_term = apply_filters( "get_{$taxonomy}", $_term, $taxonomy );

	// Bail if a filter callback has changed the type of the `$_term` object.
	if ( ! ( $_term instanceof WP_Term ) ) {
		return null;
	}

	// Sanitize term, according to the specified filter.
	$_term->filter( $filter );

	if ( ARRAY_A === $output ) {
		return $_term->to_array();
	} elseif ( ARRAY_N === $output ) {
		return array_values( $_term->to_array() );
	}

	return $_term;
}
Linear Regression, Error Term, and Stock Analysis
Linear regression is a form of analysis that relates to current trends experienced by a particular security or index by establishing a relationship between dependent and independent variables, such as the price of a security and the passage of time, resulting in a trend line that can be used as a predictive model.
A linear regression exhibits less delay than that experienced with a moving average, as the line is fit to the data points instead of based on the averages within the data. This allows the line to change more quickly and dramatically than a line based on numerical averaging of the available data points.
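The lag difference described above can be seen in a few lines of Python (a toy sketch on a synthetic uptrending price series, not real data):

import numpy as np

t = np.arange(60, dtype=float)
price = 100 + 0.8 * t + np.random.default_rng(1).normal(0, 1.5, size=60)

window = 10
moving_avg = np.convolve(price, np.ones(window) / window, mode="valid")
slope, intercept = np.polyfit(t, price, 1)

print(price[-1])                 # last observed price
print(slope * t[-1] + intercept) # the fitted trend line tracks the level
print(moving_avg[-1])            # the moving average lags behind it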
Understanding Term Structure Of Interest Rates
Essentially, term structure of interest rates is the relationship between interest rates or bond yields and different terms or maturities. When graphed, the term structure of interest rates is known as a yield curve, and it plays a crucial role in identifying the current state of an economy. The term structure of interest rates reflects expectations of market participants about future changes in interest rates and their assessment of monetary policy conditions.
In general terms, yields increase in line with maturity, giving rise to an upward-sloping, or normal, yield curve. The yield curve is primarily used to illustrate the term structure of interest rates for standard U.S. government-issued securities. This is important as it is a gauge of the debt market's feeling about risk. The most frequently reported yield curve compares the three-month, two-year, five-year, 10-year, and 30-year U.S. Treasury debt. (Yield curve rates are usually available at the Treasury's interest rate web sites by 6:00 p.m. ET each trading day.)
The term structure of interest rates has three primary shapes.
Upward sloping—long term yields are higher than short term yields. This is considered to be the «normal» slope of the yield curve and signals that the economy is in an expansionary mode.
Downward sloping—short term yields are higher than long term yields. Dubbed as an «inverted» yield curve and signifies that the economy is in, or about to enter, a recessive period.
Flat—very little variation between short and long term yields. Signals that the market is unsure about the future direction of the economy.
Term structure of interest rates, commonly known as the yield curve, depicts the interest rates of similar quality bonds at different maturities.
The term structure of interest rates reflects expectations of market participants about future changes in interest rates and their assessment of monetary policy conditions.
The most frequently reported yield curve compares the three-month, two-year, five-year, 10-year and 30-year U.S. Treasury debt.
What Do Error Terms Tell Us?
Within a linear regression model tracking a stock's price over time, the error term is the difference between the expected price at a particular time and the price that was actually observed. In instances where the price is exactly what was anticipated at a particular time, the price will fall on the trend line and the error term will be zero.
Points that do not fall directly on the trend line exhibit the fact that the dependent variable, in this case, the price, is influenced by more than just the independent variable, representing the passage of time. The error term stands for any influence being exerted on the price variable, such as changes in market sentiment.
In a best-fit line, the points lying farthest above and farthest below the trend line roughly balance each other; these points represent the largest error terms.
If a model is heteroskedastic, a common problem in interpreting statistical models correctly, it refers to a condition in which the variance of the error term in a regression model varies widely.
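A crude way to spot heteroskedasticity in practice is to compare the spread of residuals across the range of the independent variable; the following Python sketch (synthetic, illustrative data) builds a series whose error variance grows with x and shows the effect:

import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(1, 10, size=500)
y = 3.0 * x + rng.normal(0, 0.3 * x)    # error spread grows with x

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

low = residuals[x < np.median(x)]
high = residuals[x >= np.median(x)]
print(low.std(), high.std())            # the second is noticeably larger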
| CommonCrawl |
\begin{definition}[Definition:Side of Rational plus Medial Area]
Let $a, b \in \R_{>0}$ be in the forms:
:$a = \dfrac \rho {\sqrt {2 \left({1 + k^2}\right)} } \sqrt{\sqrt {1 + k^2} + k}$
:$b = \dfrac \rho {\sqrt {2 \left({1 + k^2}\right)} } \sqrt{\sqrt {1 + k^2} - k}$
where:
: $\rho$ is a rational number
: $k$ is a rational number whose square root is irrational
Then $a + b$ is the '''side of a rational plus a medial area'''.
{{:Euclid:Proposition/X/40}}
\end{definition} | ProofWiki |
\begin{definition}[Definition:Euclid's Definitions - Book V/17 - Ratio Ex Aequali]
{{EuclidSaid}}
:''A '''ratio ex aequali''' arises when, there being several magnitudes and another set equal to them in multitude which taken two and two are in the same proportion, as the first is to the last among the first magnitudes, so is the first to the last among the second magnitudes;<br/>Or, in other words, it means taking the extreme terms by virtue of the removal of the intermediate terms.''
{{EuclidDefRef|V|17|Ratio Ex Aequali}}
\end{definition} | ProofWiki |
Spherical Segment Formula
If we make two parallel cuts through a basketball, the solid defined by the cutting is a spherical segment. You can think of it as a spherical cap that has been truncated, so it can also be called a spherical frustum. The lateral surface of the spherical segment, excluding the bases, is called the spherical zone.
The formulas we use here are for the surface area of the zone and the volume of the segment, where $R$ is the radius of the sphere, $r_{1}$ and $r_{2}$ are the radii of the two bases, and $h$ is the height:
\[\large A=2\pi Rh\]
\[\large V=\frac{\pi h}{6}(3r_{1}^{2}+3r_{2}^{2}+h^{2})\]
Solved Example
Question: What will be the volume of a segment of a sphere, if the radii of its two bases are 9.2 cm and 11 cm, and its height is 7 cm?
Using the formula: $V=\frac{\pi h}{6}(3r_{1}^{2}+3r_{2}^{2}+h^{2})$
$V=\frac{\pi \times 7}{6}(3\times 9.2^{2}+3\times 11^{2}+7^{2})$
$\approx 3.67 \times 665.92$ (taking $\pi \approx 22/7$)
$\approx 2441.70$ cubic cm
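A quick check of this computation in Python (assuming, as above, that 9.2 cm and 11 cm are the radii of the two bases):

import math

def spherical_segment_volume(r1, r2, h):
    return math.pi * h / 6 * (3 * r1 ** 2 + 3 * r2 ** 2 + h ** 2)

print(round(spherical_segment_volume(9.2, 11, 7), 2))
# 2440.72 with exact pi; the worked example rounds pi to 22/7, giving about 2441.70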
| CommonCrawl |
\begin{document}
\setcounter{footnote}{2} \title{Characterizing consensus in the Heard-Of model}
\begin{abstract}
The Heard-Of model is a simple and relatively expressive model of distributed
computation.
Because of this, it has gained a considerable attention of the verification community.
We give a characterization of all algorithms solving consensus in a fragment
of this model.
The fragment is big enough to cover many prominent consensus algorithms.
The characterization is purely syntactic: it is expressed in terms of some
conditions on the text of the algorithm.
One of the recent methods of verification of distributed algorithms is to
abstract an algorithm to the Heard-Of model and then to verify the abstract algorithm
using semi-automatic procedures.
Our results allow, in some cases, to avoid the second step in this methodology.
\end{abstract}
\section{Introduction}
Most distributed algorithms solving problems like consensus, leader election, set agreement, or renaming are essentially one iterated loop. Yet, their behavior is difficult to understand due to unbounded number of processes,
asynchrony, failures, and other aspects of the execution model. The general context of this work is to be able to say what happens when we change some of the parameters: modify an algorithm or the execution model. Ideally we would like to characterize the space of all algorithms solving a particular problem.
To approach these kinds of questions, one needs to restrict to a well defined space of all distributed algorithms and execution contexts. In general this is an impossible requirement. Yet the distributed algorithms community has come up with some settings that are expressive enough to represent interesting cases and limited enough to start quantifying over ``all possible'' distributed algorithms~\cite{charron-heard-distributed09,WidSch:09,AguDelFau:12}.
In this work we consider the consensus problem in the Heard-Of model~\cite{charron-heard-distributed09}. The \emph{consensus problem} is a central problem in the field of distributed algorithms; it requires that all correct processes eventually decide on one of the initial values. \emph{The Heard-Of model} is a round- and message-passing-based model. It can represent many intricacies of various execution models and yet is simple enough to attempt to analyze it algorithmically~\cite{ChaDebMer:11,DebMer:12,DraHenVei:14,MarSprBas:17,Mar:17}. Initially, our goal was to continue the quest from~\cite{MarSprBas:17} of examining what is algorithmically possible to verify in the Heard-Of model. While working on this problem we have realized that a much more ambitious goal can be achieved: to give a simple, and in particular decidable, characterization of all consensus algorithms in well-defined fragments of the Heard-Of model.
The Heard-Of model is an open ended model: it does not specify what operations processes can perform and what kinds of communication predicates are allowed. Communication predicates in the Heard-Of model capture in an elegant way both synchrony degree and failure model. In this work we fix the set of atomic communication predicates and atomic operations. We opted for a set sufficient to express most prominent consensus algorithms (cf.\ Section~\ref{sec:examples}), but we do not cover all operations found in the literature on the Heard-Of model.
Our characterization of algorithms that solve consensus is expressed in terms of syntactic conditions both on the text of the algorithm and on the constraints given by the communication predicate. It exhibits an interesting pattern to which all consensus algorithms must conform. One could imagine a consensus algorithm that makes processes gradually converge to a consensus, with more and more processes adopting the same value. This is not the case. A consensus algorithm, in the models we study here, must have a fixed number of crucial rounds where precise things are guaranteed to happen. Special rounds have been identified for existing algorithms~\cite{RutMilSch:10}, but not their distribution over different phases. Moreover, here we show that all algorithms must have this structure.
As an application of our characterization we can think of using it as an intermediate step in analysis of more complicated settings than the Heard-Of model. An algorithm in a given setting can be abstracted to an algorithm in the Heard-Of model, and then our characterization can be applied. Instead of proving the original algorithm correct it is enough to show that the abstraction is sound. For example, an approach reducing asynchronous semantics to round based semantics under some conditions is developed in~\cite{ChaChaMer:09}. A recent paper~\cite{DamDraMil:19} gives a reduction methodology in a much larger context, and shows its applicability. The goal language of the reduction is an extension of the Heard-Of model that is not covered by our characterization. As another application, our characterization can be used to quickly see if an algorithm can be improved by taking a less constrained communication predicate, by adapting threshold constants, or by removing parts of code (c.f.\ Section~\ref{sec:examples}).
\subsubsection*{Related work} The celebrated FLP result~\cite{FisLynPat:85} states that consensus is impossible to achieve in an asynchronous system, even in the presence of a single failure. There is a considerable literature investigating the models in which the consensus problem is solvable. Even closer in spirit to the present paper are results on weakest failure detectors required to solve the problem~\cite{ChaHadTou:96,FreGueKuz:11}. Another step closer are works providing generic consensus algorithms that can be instantiated to give several known concrete algorithms~\cite{MosRay:99,HurMosRay:02,GueRay:07,BieWidCha:07,SonRenSch:08,RutMilSch:10}. The present paper considers a relatively simple model, but gives a characterization result of all possible consensus algorithms.
The cornerstone idea of the Heard-Of model is that both asynchrony and failures can be modeled by the constraints on the message loss captured by a notion of communication predicates. This greatly simplifies the model that is essential for a kind of characterization we present here. Unavoidably, not all aspects of partial synchrony~\cite{DwoLynSto:88,CriFet:99} or failures~\cite{ChaTou:96} are covered by the model. For example, after a crash it may be difficult for a process to get into initial state, or in terms of the Heard-of model, do the same round as other processes~\cite{RenSchSch:15,ChaChaMer:09}. These observations just underline that there is no universal model for distributed algorithms. There exists several other proposals of relatively simple and expressible models~\cite{Gaf:98,WidSch:09,AguDelFau:12,RaySta:13}. The Heard-Of model, while not perfect, is in our opinion representative enough to study in more detail.
On the verification side there are at least three approaches to analysis of the Heard-Of or similar models. One is to use automatic theorem provers, like Isabelle~\cite{ChaMer:09,ChaDebMer:11,DebMer:12}. Another is deductive verification methods applied to annotated programs~\cite{DraHenZuf:16,DraHenVei:14}. The closest to this work is a model-checking approach~\cite{TsuSch:11,MarSprBas:17,Mar:17,AmiRubSto:18}. Particularly relevant here is the work of Maric et al.~\cite{MarSprBas:17}, who show cut-off results for a fragment of the Heard-Of model and then perform verification on a resulting finite state system. Our fragment of the Heard-Of model is incomparable with the one from that work, and arguably it has fewer restrictions coming from purely technical issues in proofs. While trying to extend the scope of automatic methods we have realized that we could actually bypass them completely and get a stronger characterization result.
Of course there are also other models of distributed systems that are considered in the context of verification. For example there has been big progress on verification of threshold automata~\cite{KukKonWid:18,KonLazVei:17,KonVeiWid:17,StoKonWid:19,BerKonLaz:19}. There are also other methods, as automatically generating invariants for distributed algorithms~\cite{KonVeiWid:15,GleBjoRyb:16,TauLosMcM:18}, or verification in Coq proof assistant~\cite{WilWooPan:15,WooWilAnt:16}.
\subsubsection*{Organization of the paper} In the next section we introduce the Heard-Of model and formulate the consensus problem. In the four subsequent sections we present the characterizations for the core model as well as for the extensions with timestamps, with coordinators, and with both timestamps and coordinators at the same time. We then give examples of algorithms that are covered by this model, and discuss their optimality given our characterization. The next four sections contain the proofs for the four characterizations.
\section{Heard-Of model and the consensus problem}
In a Heard-Of model a certain number of processes execute the same code synchronously. An algorithm consists of a sequence of \emph{rounds}; every process executes the same round at the same time. The sequence of rounds, called a \emph{phase}, is repeated forever. In a round every process sends the value of one of its variables to a communication medium, receives a multiset of values, and uses it to adopt a new value (cf.\ Figure~\ref{fig:schema}). \begin{figure}
\caption{A schema of an execution of a round and of a phase. In every round $i$
every process sends the value of its variable $x_{i-1}$, and sets its variable $x_i$
depending on the received multiset of values: $\mathsf{H}^j_i$.
At the beginning of the phase the value of $\mathit{inp}$
is sent, at some round $\mathit{inp}$ may be updated; we use $\mathbf{ir}$ for the index of
this round.
In the last round $\mathit{dec}$ may be set. Neither $\mathit{inp}$ nor $\mathit{dec}$ is updated if the value is $?$, standing for undefined.}
\label{fig:schema}
\end{figure}
At the beginning every process has its initial value in variable $\mathit{inp}$. Every process is expected to eventually set its decision variable $\mathit{dec}$. Every round is communication closed, meaning that a value sent in a round can only be received in the same round; if it is not received, it is lost. A \emph{communication predicate} is used to express a constraint on acceptable message losses. Algorithm~\ref{alg:one-third} is a simple concrete example of a $2$-round algorithm.
We proceed with a description of the syntax and semantics of Heard-Of algorithms. Next we define the consensus problem. In later sections we will extend the core language with timestamps and coordinators.
\begin{algorithm}[H]\label{alg:one-third}
\Send{$(\mathit{inp})$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > \mathit{thr}_1 \cdot |\Pi|$}{$x_1:=\mathit{inp}:=\mathrm{smor}(\mathsf{H})$}
\lIf{$\mathtt{mult}(\mathsf{H}) \land |\mathsf{H}| > \mathit{thr}_1 \cdot |\Pi|$}{$x_1:=\mathit{inp}:=\mathrm{smor}(\mathsf{H})$}
}
\Send{$x_1$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > \mathit{thr}_2 \cdot |\Pi|$}{$\mathit{dec}:=\mathrm{smor}(\mathsf{H})$}
}
\BlankLine
\Cp{$\lF(\p^1 \land \lF\p^2)$}
\Where{$\p^1 := (\f_{=}\land\f_{\mathit{thr}_1},\true)$\quad and\quad $\p^2 := (\f_{\mathit{thr}_1},\f_{\mathit{thr}_2})$}
\caption{Parametrized OneThird algorithm~\cite{charron-heard-distributed09}, $\mathit{thr}_1, \mathit{thr}_2$ are constants from $(0,1)$} \end{algorithm}
\subsubsection*{Syntax} An algorithm has one \emph{phase} that consists of two or more rounds. In the first round each process sends the value of the $\mathit{inp}$ variable, in the last round it can set the value of the $\mathit{dec}$ variable. A phase is repeated forever; all processes execute the same round at the same time. A round $i$ is a send statement followed by a sequence of conditionals:
\RestyleAlgo{plain} \begin{algorithm}[H]
\Send{$x_{i-1}$}{
\lIf{$\mathit{cond}_i^1(\mathsf{H})$}{$x_i:=\mathtt{op}_i^1(\mathsf{H})$}
\vdots
\lIf{$\mathit{cond}_i^l(\mathsf{H})$}{$x_i:=\mathtt{op}_i^l(\mathsf{H})$}
} \end{algorithm} \noindent The variables are used in a sequence: first $x_0$, which is $\mathit{inp}$, is sent and $x_1$ is set, then $x_1$ is sent and $x_2$ is set, etc.\ (cf. Figure~\ref{fig:schema}). There should be exactly one round (before the last round) where $\mathit{inp}$ is updated; the conditional lines in this round are: \begin{align*}
\mathtt{if}\ \mathit{cond}^j_{\mathbf{ir}}(\mathsf{H})\ \mathtt{then}\ x_{\mathbf{ir}}:=\mathit{inp}:=\mathtt{op}^j_{\mathbf{ir}}(\mathsf{H}) \end{align*} Since this is a special round, we use the index $\mathbf{ir}$ to designate this round number. In the last round, only instructions setting variable $\mathit{dec}$ can be present: \begin{align*}
&\mathtt{if}\ \mathit{cond}^j_r(\mathsf{H})\ \mathtt{then}\ \mathit{dec}:=\mathtt{op}^j_r(\mathsf{H}) \end{align*} This is why a phase needs to have at least two rounds. Of course one can also have a syntax and a characterization for one-round algorithms, but unifying the two hinders readability. Our fragment roughly corresponds to the fragment from~\cite{MarSprBas:17}, without extra restrictions but with less liberty at the fork point.
As an example, consider Algorithm~\ref{alg:one-third}. It has two rounds, each beginning with a $\mathtt{send}$ statement. In the first round both $x_1$ and $\mathit{inp}$ are set, in the second round $\mathit{dec}$ is set. The conditions talk about properties of the received $\mathsf{H}$ multiset, which we describe below.
As the above syntax suggests, in round $i$ every process first sends the value of variable $x_{i-1}$, and then receives a multiset of values $\mathsf{H}$ that it uses to set the value of the variable $x_i$. The possible tests on the received multiset $\mathsf{H}$ are $\mathtt{uni}$, $\mathtt{mult}$, and $|\mathsf{H}|>\mathit{thr}\cdot |\Pi|$, saying respectively that: the multiset has only one value; has more than one value; and that it is of size $>\mathit{thr}\cdot n$, where $n$ is the number of processes and $0\le\mathit{thr}<1$. The possible operations are $\min(\mathsf{H})$, resulting in the minimal value in $\mathsf{H}$, and $\mathrm{smor}(\mathsf{H})$, resulting in the minimal most frequent value in $\mathsf{H}$. For example, the first conditional line in Algorithm~\ref{alg:one-third} tests if there is only one value in $\mathsf{H}$, and if this value has multiplicity greater than $\mathit{thr}_1\cdot n$ in $\mathsf{H}$; if so, $\mathit{inp}$ and $x_1$ are set to this value; it does not matter whether the $\min$ or $\mathrm{smor}$ operation is used in this case.
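For a concrete instance, take $n=4$ and $\mathsf{H}=\set{a,a,b}$: then $\mathtt{uni}(\mathsf{H})$ fails, $\mathtt{mult}(\mathsf{H})$ holds, $|\mathsf{H}|>1/2\cdot|\Pi|$ holds, $\min(\mathsf{H})=a$, and $\mathrm{smor}(\mathsf{H})=a$, as $a$ is the most frequent value. For $\mathsf{H}=\set{a,a,b,b}$ the frequencies tie, and $\mathrm{smor}(\mathsf{H})=a$ because ties are broken in favor of the smaller value.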
In addition to the description of rounds, an algorithm also has a \emph{communication predicate} putting constraints on the behavior of the communication medium. A \emph{communication predicate for a phase} with $r$ rounds is a tuple $\p=(\p_1,\dots,\p_r)$, where each $\p_l$ is a conjunction of atomic communication predicates that we specify later. A \emph{communication predicate for an algorithm} is \begin{equation*}
(\lG\overline{\p})\land(\lF(\p^1\land\lF(\p^2\land\dots (\lF\p^k)\dots))) \end{equation*} where $\overline{\p}$ and $\p^i$ are communication predicates for a phase. Predicate $\overline{\p}$ is the \emph{global predicate}, and $\p^1,\dots,\p^k$ are \emph{sporadic predicates}. So the global predicate specifies constraints on every phase of the execution, while sporadic predicates specify a \emph{sequence} of special phases that should happen: first $\p^1$, followed later by $\p^2$, etc. We have two types of atomic communication predicates: $\f_=$ says that every process receives the same multiset; $\f_\mathit{thr}$ says that every process receives a multiset of size bigger than $\mathit{thr}\cdot n$, where $n$ is the number of processes. In Algorithm~\ref{alg:one-third} the global predicate is trivial, and we require two special phases. In the first of them, in its first round every process should receive exactly the same $\mathsf{H}$ multiset, and this multiset should contain values from more than a $\mathit{thr}_1$ fraction of all processes.
\subsubsection*{Semantics} The values of variables come from a fixed linearly ordered set $D$. Additionally, we take a special value $? \notin D$ standing for undefined. We write $D_{?}$ for $D\cup\set{?}$.
We describe the semantics of an algorithm for $n$ processes. A \emph{state of an algorithm} is a pair of $n$-tuples of values, denoted $(f,d)$. Intuitively, $f$ specifies the value of the $\mathit{inp}$ variable for each process, and $d$ specifies the value of the $\mathit{dec}$ variable. The value of $\mathit{inp}$ can never be $?$, while initially the value of $\mathit{dec}$ is $?$ for every process. We denote by $\mathit{mset}(f)$ the multiset of values appearing in the tuple $f$, and by $\mathit{set}(f)$ the set of values in $f$. Only the values of $\mathit{inp}$ and $\mathit{dec}$ survive between phases. All the other variables are reset to $?$ at the beginning of each phase.
There are two kinds of transitions: \begin{equation*}
(f,d)\act{\p} (f',d') \quad\text{and}\quad f\lact{\f}_i f' \ . \end{equation*} The first is a phase transition, while the second is a transition for round $i$. So in a transition of the second type $f$ describes the values of $x_{i-1}$, and $f'$ the values of $x_i$. A phase transition is labeled with a phase communication predicate, while a round transition has a round number and a conjunction of atomic predicates as labels.
Before defining these transitions we need to describe the semantics of communication predicates. At every round processes send values of their variable to a communication medium, and then receive a multiset of values from the medium (cf.\ Figure~\ref{fig:schema}). The communication medium is not assumed to be perfect: it can send a different multiset of values to every process, provided each is a sub-multiset of the received values. An atomic communication predicate puts constraints on the multisets that processes receive. So a predicate specifies constraints on a tuple of multisets $\vec \mathsf{H}=(\mathsf{H}_1,\dots,\mathsf{H}_n)$. Predicate $\f_=$ is satisfied if all the multisets are the same. Predicate $\f_\mathit{thr}$ requires that every multiset has size bigger than $\mathit{thr}\cdot n$, for some number $0\le\mathit{thr} <1$. Predicate $\true$ is always satisfied. We write $\vec \mathsf{H}\sat\f$ when the tuple of multisets $\vec \mathsf{H}$ satisfies the conjunction of atomic predicates $\f$.
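As an illustration, take $n=3$ and a round where the values sent form $\mathit{mset}(f)=\set{a,a,b}$. The medium may deliver $\mathsf{H}_1=\set{a,b}$, $\mathsf{H}_2=\set{a,a,b}$, and $\mathsf{H}_3=\set{a,a}$: every $\mathsf{H}_p$ is a sub-multiset of $\mathit{mset}(f)$, and the tuple satisfies $\f_{1/3}$, since every multiset has size bigger than $1/3\cdot 3=1$, but it does not satisfy $\f_=$, since the multisets differ.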
Once a process $p$ receives a multiset $\mathsf{H}_p$, it uses it to do an update of one of its variables. For this it finds the first conditional that $\mathsf{H}_p$ satisfies and performs the operation from the corresponding assignment.
A \emph{condition} is a conjunction of atomic conditions: $\mathtt{uni}$, $\mathtt{mult}$,
$|\mathsf{H}|>\mathit{thr}\cdot |\Pi|$.
A multiset $\mathsf{H}$ satisfies $\mathtt{uni}$ when it contains just one value; it satisfies $\mathtt{mult}$ if it contains more than one value. A multiset $\mathsf{H}$ satisfies $|\mathsf{H}|>\mathit{thr}\cdot |\Pi|$ when the size of $\mathsf{H}$ is bigger than $\mathit{thr}\cdot n$, where $n$ is the number of processes. Observe that only predicates of the last type take into account possible repetitions of the same value.
We can now define the \emph{update value} $\mathtt{update}_i(\mathsf{H})$, describing to which value the process sets its variable in round $i$ upon receiving the multiset $\mathsf{H}$. For this the process finds the first conditional statement in the sequence of instructions for round $i$ whose condition is satisfied by $\mathsf{H}-\set{?}$ and looks at the operation in the statement: \begin{itemize}
\item if it is $x:=\min(\mathsf{H})$ then $\mathtt{update}_i(\mathsf{H})$ is the minimal value in $\mathsf{H}-\set{?}$;
\item if it is $x:=\mathrm{smor}(\mathsf{H})$ then $\mathtt{update}_i(\mathsf{H})$ is the smallest most
frequent value in $\mathsf{H}-\set{?}$;
\item if no condition is satisfied then $\mathtt{update}_i(\mathsf{H})=?$. \end{itemize}
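To fix intuitions, here is a small Python sketch of the tests, the operations, and the $\mathtt{update}_i$ function. The encoding is ours and purely illustrative: a round is a list of condition--operation pairs tried in order, and $?$ is modeled as \texttt{None}.
\begin{verbatim}
from collections import Counter

def uni(H):  return len(set(H)) == 1        # H contains exactly one value
def mult(H): return len(set(H)) > 1         # H contains more than one value

def smor(H):                                # smallest most frequent value
    counts = Counter(H)
    top = max(counts.values())
    return min(v for v, c in counts.items() if c == top)
# the min operation of the model is Python's built-in min

def update(instructions, H, n):
    Hd = [v for v in H if v is not None]    # update ignores ?: work on H - {?}
    for cond, op in instructions:           # first satisfied conditional wins
        if cond(Hd, n):
            return op(Hd)
    return None                             # no condition satisfied: result is ?

# First round of Algorithm 1, with thr_1 = 2/3:
THR1 = 2/3
round1 = [(lambda H, n: uni(H)  and len(H) > THR1 * n, smor),
          (lambda H, n: mult(H) and len(H) > THR1 * n, smor)]
print(update(round1, ['a', 'a', 'b'], 4))   # prints 'a'
\end{verbatim}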
A transition~\label{page:round-transition} $f\lact{\f}_i f'$ is possible when there exists a tuple of multisets $(\mathsf{H}_1,\dots,\mathsf{H}_n)\sat\f$ such that for all $p=1,\dots,n$: $\mathsf{H}_p\incl \mathit{mset}(f)$, and $f'(p)=\mathtt{update}_i(\mathsf{H}_p)$. Observe that the $?$ value in $\mathsf{H}_p$ is ignored by the $\mathtt{update}$ function, but not by the communication predicate.
Finally, a transition $(f,d)\act{\p} (f',d')$, for $\p=(\f_1,\dots,\f_n)$, is possible when there is a sequence \begin{equation*}
f_0\lact{\f_1}_1f_1\lact{\f_2}_2\cdots\lact{\f_{r-1}}_{r-1}f_{r-1} \lact{\f_r}_r f_r \end{equation*} with: \begin{itemize}
\item $f_0=f$;
\item $f'(p)=f_{\mathbf{ir}}(p)$ if $f_{\mathbf{ir}}(p)\not=?$, and $f'(p)=f(p)$ otherwise;
\item $d'(p)=d(p)$ if $d(p)\not=?$, and $d'(p)=f_r(p)$ otherwise. \end{itemize} This means that $\mathit{inp}$ is updated with the value from the input updating round $\mathbf{ir}$, but only if the update is not $?$. The value of $\mathit{dec}$ cannot be updated, it can only be set if it has not been set before. For setting the value of $\mathit{dec}$, the value from the last round is used.
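For example, let $r=2$ and $\mathbf{ir}=1$, take $f=(a,b)$ and $d=(?,?)$, and suppose the rounds yield $f\lact{\f_1}_1(b,?)\lact{\f_2}_2(b,b)$. Then $f'=(b,b)$: the first process updates its $\mathit{inp}$ to $b$, while the second keeps $f(2)=b$ because its value in round $\mathbf{ir}$ is $?$; and $d'=(b,b)$, since both $\mathit{dec}$ values were previously unset.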
An \emph{execution} is a sequence of phase transitions. An \emph{execution of an algorithm respecting a communication predicate} $(\lG\overline{\p})\land(\lF(\p^1\land\lF(\p^2\land\dots (\lF\p^k)\dots)))$ is an infinite sequence: \begin{equation*}
(f_0,d_0)\act{\overline{\p}}^* (f_1,d_1)\act{\p\land\p^1}(f'_1,d'_1)\cdots \act{\overline{\p}}^*(f_k,d_k)\act{\p\land\p^k}(f'_k,d'_k)\act{\overline{\p}}^\w\cdots
\end{equation*} where $\act{\overline{\p}}^*$ stands for a finite sequence of $\act{\overline{\p}}$ transitions, and $\act{\overline{\p}}^\w$ for an infinite sequence. For every execution there is some fixed $n$ standing for the number of processes, $f_0$ is any $n$-tuple of values without $?$, and $d_0$ is the $n$-tuple of $?$ values. Observe that the size of the first tuple determines the size of every other tuple. There is always a transition from every configuration, so an execution cannot block.
\begin{definition}[Consensus problem]\label{def:consensus} An algorithm has \emph{agreement property} if for every number of processes $n$, and for every state $(f,d)$ reachable by an execution of the algorithm, for all processes $p_1$ and $p_2$, either $d(p_1)=d(p_2)$ or one of the two values is $?$. An algorithm has \emph{termination property} if for every $n$, and for every execution there is a state $(f,d)$ on this execution with $d(p)\not=?$ for all $p=1,\dots,n$. An algorithm \emph{solves consensus} if it has agreement and termination properties. \end{definition}
\begin{remark} Normally, the consensus problem also requires irrevocability and integrity properties, but these are always guaranteed by the semantics: once set, a process cannot change its $\mathit{dec}$ value, and a variable can be set only to one of the values that has been received. \end{remark}
\begin{remark} The original definition of the Heard-Of model is open ended: it does not limit possible forms of a communication predicate, conditions, or operations. Clearly, for the kind of result we present here, we need to fix them. \end{remark}
\begin{remark} In the original definition processes are allowed to have identifiers. We do not need them for the set of operations we consider. Later we will add coordinators without referring to identifiers. This is a relatively standard way of avoiding identifiers while having reasonable expressivity. \end{remark}
\section{A characterization for the core language}\label{sec:core}
We present a characterization of all the algorithms in our language that solve consensus. In later sections we will extend it to include timestamps and coordinators. As it will turn out, for our analysis we will need to consider only two values $a,b$ with a fixed order between them: we take $a$ smaller than $b$. This order influences the semantics of instructions: the result of $\min$ is $a$ on a multiset containing at least one $a$; the result of $\mathrm{smor}$ is $a$ on a multiset with the same number of $a$'s and $b$'s. Because of this asymmetry we mostly focus on the number of $b$'s in a tuple. In our analysis we will consider tuples of the form $\mathit{bias}(\th)$ for $\th<1$, i.e., a tuple where we have $n$ processes (for some large enough $n$), out of which $\th\cdot n$ have their value set to $b$, and the remaining ones to $a$. The tuple containing only $b$'s (resp.\ only $a$'s) is called $\mathit{solo}$ (resp.\ $\mathit{solo}^{a}$).
We show that there is essentially one way to solve consensus. The text of the algorithm together with the form of the global predicate determines a threshold $\bar{\thr}$. We prove that in the language we consider here, there should be a \emph{unifier phase} which guarantees that the tuple of $\mathit{inp}$ values after the phase belongs to one of the following four types: $\mathit{solo}$, $\mathit{solo}^{a}$, $\mathit{bias}(\th)$, or $\mathit{bias}(1-\th)$ where $\th \ge \bar{\thr}$. Intuitively, this means that there is a dominant value in the tuple. This phase should be followed by a \emph{decider phase} which guarantees that if the tuple of $\mathit{inp}$ is of one of the above mentioned types, then all the processes decide. While this ensures termination, agreement is ensured by proving that some simple structural properties on the algorithm should always hold. In the rest of this section we give some observations and definitions in order to state the result formally.
Before stating the characterization, we will make some observations that allow us to simplify the structure of an algorithm, and in consequence simplify the statements.
It is easy to see that in our language we can assume that the list of conditional instructions in each round has at most one $\mathtt{uni}$ conditional followed by a sequence of $\mathtt{mult}$ conditionals with non-increasing thresholds: \begin{align*}
&\mathtt{if}\ \mathtt{uni}(\mathsf{H})\land |\mathsf{H}|>\thr_u^i\cdot |\Pi|\ \mathtt{then}\ x:=\mathtt{op}_u^i(\mathsf{H})\\
&\mathtt{if}\ \mathtt{mult}(\mathsf{H})\land |\mathsf{H}|>\thr_m^{i,1}\cdot |\Pi|\ \mathtt{then}\ x:=\mathtt{op}_m^i(\mathsf{H})\\ & \vdots\\
&\mathtt{if}\ \mathtt{mult}(\mathsf{H})\land |\mathsf{H}|>\thr_m^{i,k}\cdot |\Pi|\ \mathtt{then}\ x:=\mathtt{op}_m^i(\mathsf{H}) \end{align*} We use superscript $i$ to denote the round number: so $\thr_u^1$ is the threshold associated with the $\mathtt{uni}$ instruction in the first round, etc. If round $i$ does not have a $\mathtt{uni}$ instruction, then $\thr_u^i$ is taken to be $-1$. For the sake of brevity, $\thr_m^{i,k}$ will always denote the minimal threshold appearing in any of the $\mathtt{mult}$ instructions in round $i$, and $-1$ if no $\mathtt{mult}$ instruction exists in round $i$.
We fix a \emph{communication predicate}: \begin{equation*}
(\lG\overline{\p})\land(\lF(\p^1\land\lF(\p^2\land\dots (\lF\p^k)\dots))) \end{equation*}
\label{assumptions-com-predicate}Without loss of generality we can assume that every sporadic predicate implies the global predicate; in consequence, $\overline{\p}\land \p^i$ is equivalent to $\p^i$. Recall that each of $\overline{\p},\p^1,\dots,\p^k$ is an $r$-tuple of conjunctions of atomic predicates. We write $\p\!\!\downharpoonright_i$ for the $i$-th element of the tuple and so $\p$ is $(\p\!\!\downharpoonright_1,\dots,\p\!\!\downharpoonright_r)$. By $\mathit{thr}_i(\p)$ we denote the threshold constant appearing in the predicate $\p\!\!\downharpoonright_i$, i.e., if $\p\!\!\downharpoonright_i$ has $\f_{\mathit{thr}}$ as a conjunct, then $\mathit{thr}_i(\p) = \mathit{thr}$; if it has no such conjunct then $\mathit{thr}_i(\p) = -1$, just to avoid treating this case separately. We call $\p\!\!\downharpoonright_i$ an \emph{equalizer} if it has $\f_=$ as a conjunct. In this case we also say that $\p$ has an equalizer.
Recall (cf.~page~\ref{page:round-transition}) that a transition $f\lact{\p}_i f'$ for a round $i$ under a phase predicate $\p$ is possible when there is a tuple of multisets $(\mathsf{H}_1,\dots,\mathsf{H}_n)\sat\p\!\!\downharpoonright_i$ such that for all $p=1,\dots,n$: $\mathsf{H}_p\incl\mathit{mset}(f)$ and $f'(p)=\mathtt{update}_i(\mathsf{H}_p)$.
\begin{definition}
We write $d\in\operatorname{fire}_i(f,\p)$ if
there is $f'$ such that $f\lact{\p}_i f'$ and $d=f'(p)$ for some $p$. \end{definition}
\begin{definition}\label{def:preserving-and-solo-safe}
A round $i$ is \emph{preserving} w.r.t.\ $\p$ iff one of the three conditions
holds: (i) it does not have a $\mathtt{uni}$ instruction, (ii) it does not have a $\mathtt{mult}$
instruction, or (iii) $\mathit{thr}_i(\p)< \max(\thr_u^i,\thr_m^{i,k})$.
Otherwise the round is \emph{non-preserving}.
The round is \emph{solo safe} w.r.t.\ $\p$ if $0\leq \thr_u^i\leq \mathit{thr}_i(\p)$. \end{definition}
If $i$ is a preserving round, then there exists a tuple $f$ with no $?$ value such that round $i$ can produce $?$ out of $f$; this allows us not to update $\mathit{inp}$ in a phase with such a round, i.e., to preserve the old values. If $i$ is a non-preserving round, no such tuple exists. A solo safe round cannot alter the $\mathit{solo}$ state. These two properties are stated formally in the next lemma, which follows directly from the definitions.
\begin{lemma}\label{lem:preserving}
A round $i$ is preserving w.r.t.\ $\p$ iff there is a tuple $f$
such that $? \notin \mathit{set}(f)$ and $?\in \operatorname{fire}_i(f,\p)$.
If a round $i$ is solo safe and $\mathit{solo}\lact{\p}_i f$ then $f$ is $\mathit{solo}$. \end{lemma}
\begin{remark}
Given a global predicate $\overline{\p}$ we can remove $\mathtt{mult}$
instructions that will never be executed because there is an instruction with
a bigger threshold that is bound to be executed.
To see this, suppose rounds $1,\dots,i-1$ are non-preserving under $\overline{\p}$.
By Lemma~\ref{lem:preserving}, if $f \lact{\overline{\p}\ \!\!\downharpoonright_1}_1 f_1 \lact{\overline{\p}\
\!\!\downharpoonright_2}_2 \dots \lact{\overline{\p}\ \!\!\downharpoonright_{i-1}}_{i-1} f_{i-1}$ and $f$ contains no $?$
then $f_{i-1}$ contains no $?$ as well.
Hence, no heard-of multiset $\mathsf{H}$ constructed from $f_{i-1}$ can contain the $?$ value.
Consequently, if round $i$ is such that, say,
$\mathit{thr}_i(\overline{\p}) > \thr_m^{i,2}$ then we can be sure that
only the first two $\mathtt{mult}$ instructions in round $i$ can be executed
under the predicate $\overline{\p}$: a process will always receive more than a
$\thr_m^{i,2}$ fraction of the values, and as there will be no $?$ value among them,
the threshold constraint of the second $\mathtt{mult}$ instruction will be satisfied.
This implies that we can adopt the following assumption. \end{remark}
\begin{assumption} \label{assumption} For every round $i$, if rounds $1,\dots,i-1$ are non-preserving under $\overline{\p}$ then \begin{equation}\label{eq:syntactic-property}
\begin{cases}
\thr_u^i\ge\mathit{thr}_i(\overline{\p}) &\qquad \text{if round $i$ has $\mathtt{uni}$ instruction}\\
\thr_m^{i,k}\ge\mathit{thr}_i(\overline{\p}) &\qquad \text{if round $i$ has $\mathtt{mult}$ instruction}
\end{cases} \end{equation} \end{assumption} We put some restrictions on the form of algorithms we consider in our characterization. They greatly simplify the statements and, as we argue, remove cases that are not that interesting anyway.
\begin{proviso}\label{proviso} We adopt the following additional syntactic restrictions:
\begin{itemize}
\item We require that the global predicate does not have an
equalizer.
\item We assume that there is no $\mathtt{mult}$ instruction in the round $\mathbf{ir}+1$.
\end{itemize} \end{proviso} Concerning the first of the above requirements: if the global predicate has an equalizer then it is quite easy to construct an algorithm for consensus, because an equalizer guarantees that in a given round all the processes receive the same value. The characterization below can be extended to this case, but this would require mentioning it separately in all the statements. Concerning the second requirement, we prove in Lemma~\ref{lem:no-mult-sequence} that if such a $\mathtt{mult}$ instruction exists then either the algorithm violates consensus, or the instruction will never be fired in any execution of the algorithm and so can be removed without making the algorithm incorrect.
In order to state our characterization we need to give formal definitions of concepts we have discussed at the beginning of the section.
\begin{definition}\label{def:border-threshold}
The \emph{border threshold} is $\bar{\thr}=
\max(1-\thr_u^1,1-\thr_m^{1,k}/2)$. \end{definition} Observe that $\bar{\thr}>1/2$ as $\thr_m^{1,k}<1$.
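For instance, for Algorithm~\ref{alg:one-third} we have $\thr_u^1=\thr_m^{1,k}=\mathit{thr}_1$, so $\bar{\thr}=\max(1-\mathit{thr}_1,1-\mathit{thr}_1/2)=1-\mathit{thr}_1/2$; with $\mathit{thr}_1=2/3$ this gives $\bar{\thr}=2/3$.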
\begin{definition}\label{def:predicates}
A predicate $\p$ is a
\begin{itemize}
\item \emph{Decider}, if all rounds are solo safe w.r.t. $\p$
\item \emph{Unifier}, if the three conditions hold:
\begin{itemize}
\item $\mathit{thr}_1(\p)\geq \thr_m^{1,k}$ and either $\mathit{thr}_1(\p)\geq \thr_u^1$ or
$\mathit{thr}_1(\p)\geq \bar{\thr}$,
\item there exists $i$ such that $1 \le i \le \mathbf{ir}$ and
$\p\!\!\downharpoonright_i$ is an equalizer,
\item rounds $2,\dots,i$ are
non-preserving w.r.t.\ $\p$ and rounds $i+1,\dots \mathbf{ir}$ are solo-safe w.r.t.\
$\p$
\end{itemize}
\end{itemize} \end{definition}
Finally, we list some syntactic properties of algorithms that, as we will see later, imply the agreement property. \begin{definition}\label{def:structure}
An algorithm is \emph{syntactically safe} when:
\begin{enumerate}
\item First round has a $\mathtt{mult}$ instruction.
\item Every round has a $\mathtt{uni}$ instruction.
\item In the first round the operation in every $\mathtt{mult}$ instruction is $\mathrm{smor}$.
\item $\thr_m^{1,k}/2 \geq 1-\thr_u^{\mathbf{ir}+1}$, and $\thr_u^1 \geq 1-\thr_u^{\mathbf{ir}+1}$.
\end{enumerate} \end{definition}
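As a sanity check, Algorithm~\ref{alg:one-third} is syntactically safe: its first round has a $\mathtt{mult}$ instruction whose operation is $\mathrm{smor}$, both rounds have a $\mathtt{uni}$ instruction, and since $\mathbf{ir}=1$, $\thr_u^1=\thr_m^{1,k}=\mathit{thr}_1$ and $\thr_u^{\mathbf{ir}+1}=\mathit{thr}_2$, the last condition amounts to $\mathit{thr}_1/2\geq 1-\mathit{thr}_2$ (which implies $\mathit{thr}_1\geq 1-\mathit{thr}_2$).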
Recall that $\p^1,\dots,\p^k$ are the set of sporadic predicates from the communication predicate. Without loss of generality we can assume that there is at least one sporadic predicate: in the degenerate case it is always possible to take a sporadic predicate that is the same as the global predicate. With these definitions we can state our characterization: \eject
\begin{theorem}\label{thm:core}
Consider algorithms in the core language satisfying syntactic constraints from
Assumption~\ref{assumption} and Proviso~\ref{proviso}.
An algorithm solves consensus iff it is syntactically safe according to
Definition~\ref{def:structure}, and it satisfies the condition:
\begin{description}
\item[T]
There are $i\leq j$ such that $\p^i$ is a unifier and $\p^j$ is a decider.
\end{description} \end{theorem}
A two-value principle is a corollary of the proof of the above theorem: an algorithm solves consensus iff it solves consensus for two values. Indeed, it turns out that it is enough to work with three values $a,b$, and $?$.
The proof considers separately safety and liveness aspects of the consensus problem.
\begin{lemma}
An algorithm violating the structural properties from
Definition~\ref{def:structure} cannot solve consensus.
An algorithm with the structural properties has the agreement property. \end{lemma}
\begin{lemma}
An algorithm with the structural properties from Definition~\ref{def:structure}
has the termination property iff it satisfies condition T from
Theorem~\ref{thm:core}.
\end{lemma}
\section{A characterization for algorithms with timestamps}
We extend our characterization to algorithms with timestamps. Now, variable $\mathit{inp}$ stores not only the value but also a timestamp, that is, the number of the last phase at which $\mathit{inp}$ was updated. These timestamps are used in the first round, as a process considers only values with the most recent timestamp. The syntax is the same as before except that we introduce a new operation, called $\mathrm{maxts}$, that must be used in the first round and nowhere else. So the form of the first round becomes\label{shape-for-timestamps}:
\RestyleAlgo{plain} \begin{algorithm}[H]
\Send{$(\mathit{inp},ts)$}{
\lIf{$\mathit{cond}_1^1(\mathsf{H})$}{$x_1:=\mathrm{maxts}(\mathsf{H})$}
\vdots
\lIf{$\mathit{cond}_1^l(\mathsf{H})$}{$x_1:=\mathrm{maxts}(\mathsf{H})$}
} \end{algorithm}
The semantics of transitions for rounds and phases needs to take into account timestamps. The semantics changes only for the first round; its form becomes $(f,t)\lact{\f}_1 f'$, where $t$ is a vector of timestamps (an $n$-tuple of natural numbers). Timestamps are ignored by communication predicates and conditions, but are used in the update operation: $\mathrm{maxts}(\mathsf{H})$ returns the smallest among the values with the most recent timestamp in $\mathsf{H}$.
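A small Python sketch of $\mathrm{maxts}$, in the same illustrative encoding as before; here $\mathsf{H}$ is a multiset of (value, timestamp) pairs with $?$ entries already removed.
\begin{verbatim}
def maxts(H):
    # smallest value among those carrying the most recent timestamp
    latest = max(ts for _, ts in H)
    return min(v for v, ts in H if ts == latest)

print(maxts([('a', 3), ('b', 5), ('a', 5)]))  # prints 'a'
\end{verbatim}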
The form of a phase transition changes to $(f,t,d)\act{\p} (f',t',d')$. Value $t(p)$ is the timestamp of the last update of $\mathit{inp}$ of process $p$ (whose value is $f(p)$). We do not need to keep timestamps for $d$ since the value of $\mathit{dec}$ can be set only once. Phase transitions are defined as before, taking into account the above-mentioned change for the first round transition, and the fact that when $\mathit{inp}$ is updated in round $\mathbf{ir}$, so is its timestamp. Some examples of algorithms with timestamps are presented in Section~\ref{sec:examples}.
As in the case of the core language, without loss of generality we can assume the conditions from Assumption~\ref{assumption} on page~\pageref{assumption}. Concerning Proviso~\ref{proviso} on page~\pageref{proviso}, we assume almost the same conditions, but now the second one refers to the round $\mathbf{ir}$ and not to the round $\mathbf{ir}+1$, and is a bit stronger.
\begin{proviso}\label{ts-proviso}
We adopt the following syntactic restrictions:
\begin{itemize}
\item We require that the global predicate does not have an
equalizer.
\item We assume that there is no $\mathtt{mult}$ instruction in the round $\mathbf{ir}$,
and that $\thr_u^{\mathbf{ir}} \ge 1/2$.
\end{itemize} \end{proviso} We prove (Lemma~\ref{lem:ts-min}) that if these two assumptions do not hold then either the algorithm violates consensus, or we can remove the $\mathtt{mult}$ instruction and increase $\thr_u^\mathbf{ir}$ without making the algorithm incorrect.
Our characterization resembles the one for the core language. The structural conditions get slightly modified: the condition on constants is weakened, and there is no need to talk about $\mathrm{smor}$ operations in the first round. \begin{definition}\label{def:ts-structure}
An algorithm is \emph{syntactically t-safe} when:
\begin{enumerate}
\item Every round has a $\mathtt{uni}$ instruction.
\item First round has a $\mathtt{mult}$ instruction.
\item $\thr_m^{1,k}\geq 1-\thr_u^{\mathbf{ir}+1}$ and $\thr_u^1 \geq 1-\thr_u^{\mathbf{ir}+1}$.
\end{enumerate} \end{definition}
We consider the same shape of a communication predicate as in the case of the core language: \begin{equation*}
(\lG\overline{\p})\land(\lF(\p^1\land\lF(\p^2\land\dots (\lF\p^k)\dots))) \end{equation*} We also adopt the same straightforward simplifying assumptions about the predicate as on page~\pageref{assumptions-com-predicate}.
A characterization for the case with timestamps uses a stronger version of a unifier that we define now. The intuition is that the constant $\bar{\thr}$ plays no role because of the $\mathrm{maxts}$ operations in the first round. In other words, the conditions are the same as before but taking $\bar{\thr}>1$, so the only remaining option in the first condition of Definition~\ref{def:predicates} is $\mathit{thr}_1(\p)\geq\thr_u^1$. \begin{definition}\label{def:strong-unifier}
A predicate $\p$ is a \emph{strong unifier} if it is a unifier in the sense
of Definition~\ref{def:predicates} and $\thr_u^1\leq\mathit{thr}_1(\p)$. \end{definition} Modulo the above two changes, the characterization stays the same.
\begin{theorem}\label{thm:ts}
Consider algorithms in the language with timestamps satisfying syntactic constraints from
Assumption~\ref{assumption} and Proviso~\ref{ts-proviso}.
An algorithm solves consensus iff it is
syntactically t-safe according to the structural properties from
Definition~\ref{def:ts-structure}, and it satisfies:
\begin{description}
\item[sT]
There are $i\leq j$ such that $\p^i$ is a strong unifier and $\p^j$ is a decider.
\end{description} \end{theorem}
\section{A characterization for algorithms with coordinators}
We consider algorithms equipped with coordinators. The novelty is that we can now have rounds where there is a unique process that receives values from other processes, as well as rounds where there is a unique process that sends values to other processes. For this we extend the syntax by introducing a round type that can be: $\mathtt{every}$, $\mathtt{lr}$ (leader-receive), or $\mathtt{ls}$ (leader-send): \begin{itemize} \item A round of type $\mathtt{every}$ behaves as before. \item In a round of type $\mathtt{lr}$ only one arbitrarily selected process receives values. \item In a round of type $\mathtt{ls}$, the process selected in the immediately preceding $\mathtt{lr}$ round sends its value to all other processes. \end{itemize} If an $\mathtt{ls}$ round is not preceded by an $\mathtt{lr}$ round then an arbitrarily chosen process sends its value. We assume that every $\mathtt{lr}$ round is immediately followed by an $\mathtt{ls}$ round, because otherwise the $\mathtt{lr}$ round would be useless. We also assume that $\mathit{inp}$ and $\mathit{dec}$ are not updated during $\mathtt{lr}$ rounds, as only one process is active in these rounds.
For $\mathtt{ls}$ rounds we introduce a new communication predicate. The predicate $\f_{\mathtt{ls}}$ says that the leader successfully sends its message to everybody; it makes sense only for $\mathtt{ls}$ rounds.
These extensions of the syntax are reflected in the semantics.
For convenience we introduce two new names for tuples: $\mathit{one}^{b}$ is a tuple where all the entries are $?$ except for one entry which is $b$; similarly for $\mathit{one}^{a}$. Abusing the notation we also write $\mathit{one}^{?}$ for $\mathit{solo}^{?}$, namely the tuple consisting only of $?$ values.
If the $i$-th round is of type $\mathtt{lr}$, we have a transition $f\lact{\p}_i \mathit{one}^{d}$ for every $d\in\operatorname{fire}_i(f,\p)$. In particular, if $?\in\operatorname{fire}_i(f,\p)$ then $f\lact{\p}_i\mathit{solo}^{?}$ is possible.
Suppose the $i$-th round is of type $\mathtt{ls}$. If $\p\!\!\downharpoonright_i$ contains $\f_{\mathtt{ls}}$ as a conjunct then \begin{align*}
\mathit{one}^{d}\lact{\p}_i&\mathit{solo}^d && \text{if round $(i-1)$ is of type $\mathtt{lr}$} \\
f\lact{\p}_i&\mathit{solo}^d\ \text{for $d\in \mathit{set}(f)$}&& \text{otherwise} \end{align*} When $\p\!\!\downharpoonright_i$ does not contain $\f_{\mathtt{ls}}$ then independently of the type of the round $(i-1)$ we have $f\lact{\p}_i f'$ for every $d\in\mathit{set}(f)$ and $f'$ such that $\mathit{set}(f')\incl \set{d,?}$.
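To illustrate, take $n=3$ and an $\mathtt{ls}$ round that is not preceded by an $\mathtt{lr}$ round. With $\f_{\mathtt{ls}}$, from $f=(a,b,b)$ the round yields $\mathit{solo}^{a}$ or $\mathit{solo}^{b}$. Without $\f_{\mathtt{ls}}$, the leader's value may reach only some of the processes, so for $d=b$ the possible results include $(b,?,b)$ and even $\mathit{solo}^{?}$.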
We consider the same shape of a communication predicate as in the case of the core language: \begin{equation*}
(\lG\overline{\p})\land(\lF(\p^1\land\lF(\p^2\land\dots (\lF\p^k)\dots))) \end{equation*} We also adopt the same straightforward simplifying assumptions about the predicate as on page~\pageref{assumptions-com-predicate}.
The semantics allows us to adopt some more simplifying assumptions about the syntax of the algorithm, and the form of the communication predicate.
\begin{assumption}\label{assumption-ls-lr} We assume that $\mathtt{ls}$ rounds do not have a $\mathtt{mult}$ instruction. Indeed, from the above semantics it follows that a $\mathtt{mult}$ instruction is never used in a round of type $\mathtt{ls}$. It also does not make much sense to use $\f_{\mathtt{ls}}$ in rounds other than of type $\mathtt{ls}$. So to shorten some definitions we require that $\f_{\mathtt{ls}}$ can appear only in communication predicates for $\mathtt{ls}$-rounds. For similar reasons we require that the $\f_=$ predicate is not used in $\mathtt{ls}$-rounds. As we have observed in the first paragraph, we can assume that neither round $\mathbf{ir}$ nor the last round is of type $\mathtt{lr}$. \end{assumption}
The notions of preserving and solo-safe rounds get extended to incorporate the new syntax.
\begin{definition}
A round of type $\mathtt{ls}$ is \emph{c-solo-safe} w.r.t.\ $\p$ if $\p\!\!\downharpoonright_i$ has
$\f_{\mathtt{ls}}$ as a conjunct; it is \emph{c-preserving} otherwise.
A round of type other than $\mathtt{ls}$ is \emph{c-preserving} or \emph{c-solo-safe}
w.r.t\ $\p$ if it is so in the sense of
Definition~\ref{def:preserving-and-solo-safe}. \end{definition}
\begin{definition}
A \emph{c-equalizer} is a conjunction containing a term of the form $\f_=$ or $\f_{\mathtt{ls}}$. \end{definition}
\begin{proviso}\label{c-proviso}
We assume the same proviso as on page~\pageref{proviso}, but using the concept of c-equalizers instead of equalizers. \end{proviso} To justify the proviso we prove that a $\mathtt{mult}$ instruction in round ${\mathbf{ir}+1}$ cannot be useful; cf.\ Lemma~\ref{lem:co-no-mult-fo}.
The assumption on page~\pageref{assumption} is also updated to use the notion of c-preserving rounds instead of preserving ones. We restate it for convenience.
\begin{assumption}\label{co-assumption} For every round $i$, if rounds $1,\dots,i-1$ are non-c-preserving under $\overline{\p}$ then \begin{equation}\label{eq:c-syntactic-property}
\begin{cases}
\thr_u^i\ge\mathit{thr}_i(\overline{\p}) &\qquad \text{if round $i$ has $\mathtt{uni}$ instruction}\\
\thr_m^{i,k}\ge\mathit{thr}_i(\overline{\p}) &\qquad \text{if round $i$ has $\mathtt{mult}$ instruction}
\end{cases} \end{equation} \end{assumption}
Finally, the above modifications imply modifications of terms from Definition~\ref{def:predicates}.
\begin{definition}\label{def:c-predicates}
A predicate $\p$ is called a
\begin{itemize}
\item \emph{c-decider}, if all rounds are c-solo safe w.r.t.\ $\p$.
\item \emph{c-unifier}, if
\begin{itemize}
\item $\mathit{thr}_1(\p)\geq \thr_m^{1,k}$ and either $\mathit{thr}_1(\p)\geq \thr_u^1$ or
$\mathit{thr}_1(\p)\geq \bar{\thr}$,
\item there exists $i$ such that $1 \le i \le \mathbf{ir}$ and
$\p\!\!\downharpoonright_i$ is a c-equalizer,
\item rounds $2,\dots,i$ are
non-c-preserving w.r.t.\ $\p$ and rounds $i+1,\dots \mathbf{ir}$ are c-solo-safe
w.r.t.\ $\p$.
\end{itemize}
\end{itemize} \end{definition}
With these modifications, we get an analog of Theorem~\ref{thm:core} for the case with coordinators subject to the modified provisos as explained above.
\begin{theorem}\label{thm:coordinators}
Consider algorithms in the language with coordinators satisfying syntactic constraints from
Assumptions~\ref{assumption-ls-lr}, \ref{co-assumption} and Proviso~\ref{c-proviso}.
An algorithm solves consensus iff the first round and
the $(\mathbf{ir}+1)^{th}$ round are not of type $\mathtt{ls}$, it is syntactically safe according
to Definition~\ref{def:structure}, and it satisfies the condition:
\begin{description}
\item[cT]
There are $i\leq j$ such that $\p^i$ is a c-unifier and $\p^j$ is a c-decider.
\end{description} \end{theorem}
\section{A characterization for algorithms with coordinators and timestamps}
Finally, we consider the extension of the core language with both coordinators and timestamps. Formally, we extend the coordinator model with timestamps in the same way we have extended the core model. So now $\mathit{inp}$ variables store pairs (value, timestamp), and all the operations in the first round are $\mathrm{maxts}$ (cf.\ page~\pageref{shape-for-timestamps}).
\begin{proviso}\label{ts-co-proviso}
We assume the same proviso as for timestamps: Proviso~\ref{ts-proviso} on page~\pageref{ts-proviso}, but using the notion of c-equalizer. \end{proviso} As in the previous cases, we justify our proviso by showing that an algorithm violating the second condition either would not be correct, or the offending instruction could be removed (Lemma~\ref{lem:co-ts-no-mult-fo}).
The characterization is a mix of conditions from timestamps and coordinator cases.
\begin{definition}
A predicate $\p$ is a \emph{strong c-unifier} if it is a c-unifier (cf.~Definition~\ref{def:c-predicates})
and $\thr_u^1 \le \mathit{thr}_1(\p)$ (cf.~Definition~\ref{def:strong-unifier}). \end{definition}
\begin{theorem}\label{thm:ts-coordinators}
Consider algorithms in the language with timestamps and coordinators satisfying syntactic constraints from
Assumptions~\ref{assumption-ls-lr}, \ref{co-assumption} and Proviso~\ref{ts-co-proviso}.
An algorithm solves consensus iff the first
round and the $(\mathbf{ir}+1)^{th}$ round are not of type $\mathtt{ls}$, it has the structural properties from Definition~\ref{def:ts-structure}, and it satisfies:
\begin{description}
\item[scT] There are $i\leq j$ such that $\p^i$ is a strong c-unifier and $\p^j$ is a c-decider.
\end{description} \end{theorem}
\section{Examples}\label{sec:examples}
We apply the characterizations from the previous sections to some consensus algorithms studied in the literature, and to their variants. We show some modified versions of these algorithms, and some impossibility results that are easy to obtain thanks to our characterization.
Finally, we show an algorithm that is new as far as we can tell. It is obtained by eliminating timestamps from a version of the Paxos algorithm, and using bigger thresholds instead.
\subsection{Core language}
First, we can revisit the parametrized Algorithm~\ref{alg:one-third} from page~\pageref{alg:one-third}. This is an algorithm in the core language, and it depends on two thresholds. Theorem~\ref{thm:core} implies that it solves consensus iff $\mathit{thr}_1/2\geq 1-\mathit{thr}_2$. In the case of $\mathit{thr}_1=\mathit{thr}_2=2/3$ we obtain the well-known OneThird algorithm. But, for example, $\mathit{thr}_1=1/2$ and $\mathit{thr}_2=3/4$ is also a possible solution of this inequality. So Algorithm~\ref{alg:one-third} solves consensus for these values of thresholds.
Because of the condition on constants $\thr_m^{1,k}/2\geq 1-\thr_u^{\mathbf{ir}+1}$ coming from Definition~\ref{def:structure}, it is not possible to have an algorithm in the core language where all constants are at most $1/2$: the left-hand side would then be at most $1/4$, while the right-hand side would be at least $1/2$. This answers a question from~\cite{charron-heard-distributed09} for the language we consider here.
The above condition on constants is weakened to $\thr_m^{1,k}\geq 1-\thr_u^{\mathbf{ir}+1}$ when we have timestamps. In this case it is indeed possible to use only $1/2$ thresholds. An algorithm from~\cite{Mar:17} is discussed later in this section.
We can go further with a parametrization of the OneThird algorithm. The one below is a general form of an algorithm with at most one $\mathtt{mult}$ instruction and two rounds.
\RestyleAlgo{ruled} \begin{algorithm}[H]\label{alg:one-third-more-parameters}
\Send{$(\mathit{inp})$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > \thr_u^1 \cdot |\Pi|$}{$x_1:=\mathit{inp}:=\mathrm{smor}(\mathsf{H})$}
\lIf{$\mathtt{mult}(\mathsf{H}) \land |\mathsf{H}| > \thr_m^1 \cdot |\Pi|$}{$x_1:=\mathit{inp}:=\mathrm{smor}(\mathsf{H})$}
}
\Send{$x_1$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > \thr_u^2 \cdot |\Pi|$}{$\mathit{dec}:=\mathrm{smor}(\mathsf{H})$}
}
\BlankLine
\Cp{$\lF(\p^1 \land \lF\p^2)$}
\caption{Parametrized OneThird algorithm~\cite{charron-heard-distributed09}, $\thr_u^1$, $\thr_m^1, \thr_u^2$ are constants from $(0,1)$} \end{algorithm}
Let us list all the constraints on the constants that would make this algorithm solve consensus. Observe that if we want an algorithm with 2 rounds, by the structural constraints from Definition~\ref{def:structure}, there must be a $\mathtt{mult}$ instruction in the first round and there cannot be a $\mathtt{mult}$ instruction in the second round. The operation in the $\mathtt{mult}$ instruction must be $\mathrm{smor}$. Both rounds need to have a $\mathtt{uni}$ instruction.
The structural constraints from Definition~\ref{def:structure} amount to \begin{equation*}
\thr_m^1/2\geq 1-\thr_u^2 \quad\text{and}\quad \thr_u^1\geq 1-\thr_u^2 \end{equation*} Recall that the formula for border threshold is \begin{equation*}
\bar{\thr}= \max(1-\thr_u^1,1-\thr_m^{1,k}/2) \end{equation*}
We will consider only the case when the global predicate is $(\true,\true)$ so there are no constraints coming from the proviso. Let us see what can be $\p^1$ and $\p^2$ so that we have a unifier and a decider.
The decider is the simpler one: we need $\p^2:=(\f_{\thr_u^1},\f_{\thr_u^2})$, or some bigger thresholds.
For a unifier we need $\p^1:=(\f_{=}\land\f_{\th},\true)$, but we may have a choice for $\th$ with respect to the constants $\thr_u^1, \thr_m^1, \thr_u^2$. \begin{itemize}
\item Suppose $\thr_u^1\leq \thr_m^1$. Then the constraints on the unifier reduce to $\th\geq\thr_m^1$.
\item Suppose $\thr_u^1>\thr_m^1$ and all the constraints for the algorithm to
solve consensus are satisfied. Then we can
decrease $\thr_u^1$ to $\thr_m^1$, and they will still be satisfied. Actually,
one can show that one can decrease $\thr_u^1$ to $\thr_m^1/2$. \end{itemize}
To sum up, the constraints are $\th\geq\thr_m^1$, $\thr_u^1=\thr_m^1/2\geq 1 -\thr_u^2$. If we want to keep the constants as small as possible we take $\th=\thr_m^1$. We get the best constraints as a function of $\thr_m^1$: \begin{equation*}
\p^1=(\f_{=}\land\f_{\thr_m^1},\true)\qquad \p^2=(\f_{\thr_m^1/2},\f_{1-\thr_m^1/2}) \end{equation*}
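For instance, taking $\thr_m^1=2/3$ and the smallest admissible constants gives $\thr_u^1=1/3$, $\thr_u^2=2/3$, and $\th=2/3$, i.e., $\p^1=(\f_{=}\land\f_{2/3},\true)$ and $\p^2=(\f_{1/3},\f_{2/3})$: a variant of the OneThird algorithm with a smaller $\mathtt{uni}$ threshold in the first round.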
\begin{remark}
The notion of a unifier (Definition~\ref{def:predicates}) suggests that there
are two types of algorithms for consensus.
The first type has a round that guarantees that every process has the same
value (a unifier with $\thr_u^1\leq \thr_m^{1,k}$), and a later round that makes all the processes decide
(decider).
The second type has a weaker form of unifier ($\thr_u^1> \thr_m^{1,k}$) that only
guarantees bias between values to be above $\bar{\thr}$ (or below $1-\bar{\thr}$).
Then the decider is stronger and makes every process decide even if not all
processes have the same value.
We do not see algorithms of the second type in the literature, and indeed
the characterization says why.
The second type appears when $\thr_u^1> \thr_m^{1,k}$, but our characterization
implies that in this case we can decrease $\thr_u^1$ to $\thr_m^{1,k}/2$ and the
algorithm will still be correct.
So unless there are some constraints external to the model, algorithms with a
weaker form of unifier are not interesting. \end{remark}
\subsection{Timestamps}
We start with a timestamp algorithm from~\cite{Mar:17} that uses only $1/2$ thresholds.
\begin{algorithm}[H]\label{alg:new-timestamp}
\Send{$(\mathit{inp},ts)$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_1:=\mathrm{maxts}(\mathsf{H})$}
\lIf{$\mathtt{mult}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_1:=\mathrm{maxts}(\mathsf{H})$}
}
\Send{$x_1$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_2:=\mathit{inp}:=\mathrm{smor}(\mathsf{H})$}
}
\Send{$x_2$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$\mathit{dec}:=\mathrm{smor}(\mathsf{H})$}
}
\BlankLine
\Cp{$\lF(\p^1)$ where $\p^1 := (\f_=\land\f_{1/2},\ \f_{1/2},\ \f_{1/2})$}
\caption{A timestamp algorithm from~\cite{Mar:17}}
\end{algorithm}
This algorithm is correct by Theorem~\ref{thm:ts}.
The theorem also says that the communication predicate can be weakened to
$\lF(\p^1\land\lF\p^2)$ where $\p^1=(\f_=\land\f_{1/2},\ \f_{1/2},\ \true)$
and $\p^2=(\f_{1/2},\ \f_{1/2},\ \f_{1/2})$.
If we do not want to have the $\f_=$ requirement in the first round, where we check
timestamps, we can consider the following modification of the above.
\begin{algorithm}[H]\label{alg:mod-new-timestamp}
\Send{$(\mathit{inp},ts)$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_1:=\mathrm{maxts}(\mathsf{H})$}
\lIf{$\mathtt{mult}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_1:=\mathrm{maxts}(\mathsf{H})$}
}
\Send{$x_1$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_2:=\mathrm{smor}(\mathsf{H})$}
\lIf{$\mathtt{mult}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_2:=\mathrm{smor}(\mathsf{H})$}
}
\Send{$x_2$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_3:=\mathit{inp}:=\mathrm{smor}(\mathsf{H})$}
}
\Send{$x_3$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$\mathit{dec}:=\mathrm{smor}(\mathsf{H})$}
}
\BlankLine
\Cp{$\lF(\p^1\land\lF\p^2)$}
\Where{$\p^1=(\f_{1/2},\ \f_=\land\f_{1/2}, \ \f_{1/2}, \ \true)$
and $\p^2=(\f_{1/2},\ \f_{1/2},\ \f_{1/2}, \ \f_{1/2})$}
\caption{A modification of a timestamp algorithm from~\cite{Mar:17}}
\end{algorithm}
Note that by Theorem~\ref{thm:ts}, when we move the equalizer to the
second round, there necessarily has to be a $\mathtt{mult}$ instruction in the
second round.
\subsection{Timestamps and coordinators}
When we have both timestamps and coordinators, we get variants of the Paxos
algorithm.
\begin{algorithm}[H]\label{alg:paxos}
\Send{$(\mathit{inp},ts)$ $\mathtt{lr}$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_1:=\mathrm{maxts}(\mathsf{H})$}
\lIf{$\mathtt{mult}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_1:=\mathrm{maxts}(\mathsf{H})$}
}
\Send{$x_1$ $\mathtt{ls}$}{
\lIf{$\mathtt{uni}(\mathsf{H})$}{$x_2:=\mathit{inp}:=\mathrm{smor}(\mathsf{H})$}
}
\Send{$x_2$ $\mathtt{lr}$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_3:=\mathrm{smor}(\mathsf{H})$}
}
\Send{$x_3$ $\mathtt{ls}$}{
\lIf{$\mathtt{uni}(\mathsf{H})$}{$\mathit{dec}:=\mathrm{smor}(\mathsf{H})$}
}
\BlankLine
\Cp{$\lF(\p^1)$ where $\p^1 := (\f_{1/2},\ \f_{\mathtt{ls}},\
\f_{1/2},\ \f_\mathtt{ls})$}
\caption{Paxos algorithm} \end{algorithm} The algorithm is correct by Theorem~\ref{thm:ts-coordinators}. One can observe that without modifying the code there is not much room for improvement in this algorithm. A decider phase is needed to solve consensus, and $\p^1$ is a minimal requirement for a decider phase.
A possible modification is to change the thresholds in the first round to, say, $1/3$ and in the third round to $2/3$ (both in the algorithm and in the communication predicate).
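For instance, with the choice of $1/3$ and $2/3$ just mentioned, the communication predicate would become (our own spelled-out instantiation):
\begin{equation*}
\p^1 := (\f_{1/3},\ \f_{\mathtt{ls}},\ \f_{2/3},\ \f_{\mathtt{ls}}),
\end{equation*}
with the matching guards $|\mathsf{H}| > 1/3 \cdot |\Pi|$ in the first round and $|\mathsf{H}| > 2/3 \cdot |\Pi|$ in the third round of the algorithm.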
The Chandra-Toueg algorithm in the Heard-Of model is actually syntactically the same
as the four-round Paxos~\cite{Mar:17}.
Its communication predicate is even stronger, so it clearly satisfies our constraints.\\
The next example is a three-round version of the Paxos algorithm.
\begin{algorithm}[H]\label{alg:paxos-three}
\Send{$(\mathit{inp},ts)$ $\mathtt{lr}$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_1:=\mathrm{maxts}(\mathsf{H})$}
\lIf{$\mathtt{mult}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_1:=\mathrm{maxts}(\mathsf{H})$}
}
\Send{$x_1$ $\mathtt{ls}$}{
\lIf{$\mathtt{uni}(\mathsf{H})$}{$x_2:=\mathit{inp}:=\mathrm{smor}(\mathsf{H})$}
}
\Send{$x_2$ $\mathtt{every}$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$\mathit{dec}:=\mathrm{smor}(\mathsf{H})$}
}
\BlankLine
\Cp{$\lF(\p^1)$ where $\p^1 := (\f_{1/2},\ \f_{\mathtt{ls}},\ \f_{1/2})$}
\caption{Three round Paxos algorithm}
\end{algorithm}
The algorithm is correct by Theorem~\ref{thm:ts-coordinators}.
Once again it is possible to change the constants in the first round to $1/3$ and in
the last round to $2/3$ (both in the algorithm and in the communication predicate).
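Spelled out (again our own instantiation), this variant would use
\begin{equation*}
\p^1 := (\f_{1/3},\ \f_{\mathtt{ls}},\ \f_{2/3}),
\end{equation*}
with the guard $|\mathsf{H}| > 1/3 \cdot |\Pi|$ in the first round and $|\mathsf{H}| > 2/3 \cdot |\Pi|$ in the last round.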
\subsection{Coordinators without timestamps}
One can ask whether it is possible to have an algorithm with coordinators but without
timestamps.
Here is a possibility that resembles the three-round Paxos:
\begin{algorithm}[H]\label{alg:paxos-coordinators}
\Send{$(\mathit{inp})$ $\mathtt{lr}$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 2/3 \cdot |\Pi|$}{$x_1:=\mathrm{smor}(\mathsf{H})$}
\lIf{$\mathtt{mult}(\mathsf{H}) \land |\mathsf{H}| > 2/3 \cdot |\Pi|$}{$x_1:=\mathrm{smor}(\mathsf{H})$}
}
\Send{$x_1$ $\mathtt{ls}$}{
\lIf{$\mathtt{uni}(\mathsf{H})$}{$x_2:=\mathit{inp}:=\mathrm{smor}(\mathsf{H})$}
}
\Send{$x_2$ $\mathtt{every}$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 2/3 \cdot |\Pi|$}{$\mathit{dec}:=\mathrm{smor}(\mathsf{H})$}
}
\BlankLine
\Cp{$\lF(\p)$\quad where $\p := (\f_{2/3},\f_{\mathtt{ls}},\f_{2/3})$}
\caption{Three round coordinator algorithm}
\end{algorithm}
The algorithm solves consensus by Theorem~\ref{thm:coordinators}.
The constants are bigger than in Paxos because we do not have timestamps:
the constraints on constants come from
Definition~\ref{def:structure}, and not from Definition~\ref{def:ts-structure}.
The advantage is that we do not need timestamps, while keeping the same
structure as for three-round Paxos.
We can parametrize this algorithm in the same way as we did for
Algorithm~\ref{alg:one-third-more-parameters}.
\section{Proof of the characterization for the core language} In this section we prove Theorem~\ref{thm:core}, namely a characterization of algorithms in the core language that solve consensus.
We fix a \emph{communication predicate} \begin{equation*}
(\lG\overline{\p})\land(\lF(\p^1\land\lF(\p^2\land\dots (\lF\p^k)\dots))) \end{equation*} Recall that each of $\overline{\p},\p^1,\dots,\p^k$ is an $r$-tuple of atomic predicates. We write $\p\!\!\downharpoonright_i$ for the $i$-th element of the tuple. So $\p$ is $(\p\!\!\downharpoonright_1,\dots,\p\!\!\downharpoonright_r)$. Often we will write $\p_i$ instead of $\p\!\!\downharpoonright_i$, in particular when $\p_i$ appears as a subscript; for example $f\lact{\p_i} f'$. If $\f$ is a conjunction of atomic predicates, then by $\mathit{thr}(\f)$ we denote the threshold constant appearing in $\f$: if $\f$ has $\f_{\mathit{thr}}$ as a conjunct then $\mathit{thr}(\f) = \mathit{thr}$, and if it has no such conjunct then $\mathit{thr}(\f) = -1$. For example, $\mathit{thr}(\f_{=}\land\f_{2/3})=2/3$, while $\mathit{thr}(\f_{=})=-1$.
\begin{definition}
We define several tuples of values.
All these tuples will be $n$-tuples for some fixed but large enough $n$
and will be over $\set{a, b}$, or $\set{?,b}$ or $\set{?,a}$.
For $\th < 1$, the tuple $\mathit{bias}(\th)$ is a tuple containing only $a$'s and
$b$'s with $|b|=\th\cdot n$.
Tuple $\mathit{bias}(1/2)$ is also called $\mathit{spread}$ to emphasize that there is the
same number of $a$'s and $b$'s.
A tuple consisting only of $b$'s is denoted $\mathit{solo}$.
Similarly, $\mathit{bias}^{?}(\th)$ is a tuple over $\set{?,b}$ with $|b|=\th\cdot
n$ and $\mathit{bias}^{?}_a(\th)$ is a tuple over $\set{?,a}$ with $|a| = \th \cdot n$. We write $\mathit{spread}^?$ for $\mathit{bias}^{?}(1/2)$ and $\mathit{spread}^?_a$ for $\mathit{bias}^{?}_a(1/2)$. We also write $\mathit{solo}^?$ for a tuple consisting only of $?$'s.
Finally, we write $\mathit{solo}^a$ for a tuple consisting only of $a$'s. \end{definition}
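These tuples are easy to materialize. The following Python sketch (our own illustration, not part of the formal development) represents an $n$-tuple by its value counts; it assumes $\th\cdot n$ is an integer.
\begin{verbatim}
from collections import Counter

def bias(theta, n):
    # bias(theta): a tuple over {a, b} with a theta fraction of b's
    nb = round(theta * n)
    return Counter({'b': nb, 'a': n - nb})

def spread(n):
    # spread = bias(1/2): equally many a's and b's
    return bias(0.5, n)

def solo(n):
    # solo: only b's
    return Counter({'b': n})

def bias_q(theta, n):
    # bias^?(theta): a tuple over {?, b} with a theta fraction of b's
    nb = round(theta * n)
    return Counter({'b': nb, '?': n - nb})

# Example: for n = 6, bias(2/3) has four b's and two a's.
assert bias(2/3, 6) == Counter({'b': 4, 'a': 2})
\end{verbatim}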
\noindent\textbf{Notations:} \begin{itemize}
\item For a tuple of values $f$ and a predicate $\p$ we write $\operatorname{fire}_i(f,\p)$
instead of $\operatorname{fire}_i(f,\p\!\!\downharpoonright_i)$.
Similarly we write $\mathit{thr}_i(\p)$ for $\mathit{thr}(\p\!\!\downharpoonright_i)$.
\item If $f, f'$ are tuples of values, we write $f\act{\p}f'$ instead of
$(f,\mathit{solo}^{?})\act{\p}(f',\mathit{solo}^{?})$.
\end{itemize}
Recall that the border threshold for an algorithm, by Definition~\ref{def:border-threshold}, is \begin{equation*} \bar{\thr}= \max(1-\thr_u^1,1-\thr_m^{1,k}/2) \end{equation*} Observe that $\bar{\thr}>1/2$ as $\thr_m^{1,k}<1$.
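As a worked instance of the definition: for an algorithm with $\thr_u^1=1/2$ and $\thr_m^{1,k}=2/3$ (constants chosen only for illustration),
\begin{equation*}
\bar{\thr}=\max\bigl(1-1/2,\ 1-(2/3)/2\bigr)=\max(1/2,\ 2/3)=2/3 .
\end{equation*}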
The proof of Theorem~\ref{thm:core} is divided into three parts. First we show that if an algorithm does not satisfy the structural properties then it violates agreement. Then we restrict our attention to algorithms with the structural properties. We show that if condition T holds, then consensus is satisfied. Finally, we prove that if condition T does not hold then the algorithm does not have the termination property.
To simplify the statements of the lemmas, we adopt the following convention. If some condition is proved as necessary for consensus, then for the forthcoming lemmas, that condition is assumed. For example, in Lemma~\ref{lem:no-uni}, we prove that all rounds should have a $\mathtt{uni}$ instruction. Hence after Lemma~\ref{lem:no-uni}, it is implicitly assumed that all algorithms considered have a $\mathtt{uni}$ instruction in every round.
\subsubsection*{Part 1: Structural properties}
\begin{lemma}\label{lem:no-mult}
If no $\mathtt{mult}$ instruction is present in the first round then the algorithm may not terminate. \end{lemma} \begin{proof}
Suppose no $\mathtt{mult}$ instruction is present in the first round.
It is easy to verify that for every predicate $\p$, we have
$\mathit{spread} \lact{\p_1}_1 \mathit{solo}^?$ and $\mathit{solo}^? \lact{\p_i}_i \mathit{solo}^?$ for $i > 1$. Hence we have the phase transition $\mathit{spread} \act{\p} \mathit{spread}$.
\end{proof}
\begin{lemma}\label{lem:no-uni}
If there is a round without a $\mathtt{uni}$ instruction then the algorithm does not terminate. \end{lemma} \begin{proof}
Let $i$ be the round without a $\mathtt{uni}$ instruction. It is easy
to verify that for every predicate $\p$, we have $\mathit{solo} \lact{\p_j}_j \mathit{solo}$ for $j < i$, $\mathit{solo} \lact{\p_i}_i \mathit{solo}^?$ and $\mathit{solo}^? \lact{\p_j}_j \mathit{solo}^?$ for $j > i$. Hence we get the phase
transition $\mathit{solo} \act{\p} \mathit{solo}$.
\end{proof}
Before considering the remaining structural requirements we state some useful lemmas.
\begin{lemma}\label{lem:spread}
Suppose all $\mathtt{mult}$ instructions in the first round have $\mathrm{smor}$ as the
operation. Then for every predicate $\p$ we have $\set{a,b}\incl\operatorname{fire}_1(\mathit{spread},\p)$. \end{lemma} \begin{proof}
From $\mathit{spread}$, it is easy to see that we can construct a multiset $\mathsf{H}$ containing more $a$'s
than $b$'s such that the size of $\mathsf{H}$ is bigger than $\mathit{thr}_1(\p)$ and
$\thr_m^{1,1}$.
Similarly we can construct a multiset having more $b$'s than $a$'s.
This then implies that $\set{a,b} \incl \operatorname{fire}_1(\mathit{spread},\p)$.
\end{proof}
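This style of reasoning can be replayed mechanically. The sketch below (our own simplified model: a single $\mathtt{uni}$ and a single $\mathtt{mult}$ instruction with $\mathrm{smor}$, and a predicate demanding $|\mathsf{H}| > \mathit{thr}_p\cdot n$) enumerates all heard-of multisets drawn from a tuple with $A$ copies of $a$ and $B$ copies of $b$, and collects the values a process can adopt.
\begin{verbatim}
def smor(na, nb):
    # smallest-most-often: the most frequent value, ties broken towards 'a'
    return 'a' if na >= nb else 'b'

def fire1(A, B, thr_u, thr_m, thr_p):
    n = A + B
    out = set()
    for na in range(A + 1):          # number of a's heard
        for nb in range(B + 1):      # number of b's heard
            h = na + nb
            if h <= thr_p * n:
                continue                                 # H not admissible
            if (na == 0 or nb == 0) and h > thr_u * n:
                out.add('a' if nb == 0 else 'b')         # uni fires
            elif na > 0 and nb > 0 and h > thr_m * n:
                out.add(smor(na, nb))                    # mult fires
            else:
                out.add('?')                             # nothing fires
    return out

# The lemma for spread: both a and b can be produced.
assert {'a', 'b'} <= fire1(50, 50, thr_u=0.5, thr_m=0.5, thr_p=0.5)
\end{verbatim}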
\begin{lemma}\label{lem:bias-preserving}
If a round $i$ is preserving w.r.t.\ $\p$ then
$\set{b,?}\incl\operatorname{fire}_i(\mathit{bias}(\th),\p)$ for all sufficiently big $\th$.
Similarly $\set{a,?} \incl \operatorname{fire}_i(\mathit{bias}(\th),\p)$ for all sufficiently small $\th$. \end{lemma} \begin{proof}
Let $\th > \max(\thr_u^i,\mathit{thr}_i(\p))$.
Because of the $\mathtt{uni}$ instruction, it is then clear that $b \in \operatorname{fire}_i(\mathit{bias}(\th),\p)$.
Since the round is preserving (and since $\mathtt{uni}$ instructions are present in every round),
either there is no $\mathtt{mult}$ instruction in round $i$ or $\mathit{thr}_i(\p)<\thr_u^i$, or
$\mathit{thr}_i(\p)<\thr_m^{i,k}$.
In the first case, let $\mathsf{H}$ be the entire tuple.
In the second case, let $\mathsf{H}$ be a multi-set consisting only of $b$'s but of size smaller than $\thr_u^i$ and bigger than $\mathit{thr}_i(\p)$.
In the third case, let $\mathsf{H}$ be a multi-set of size smaller
than $\thr_m^{i,k}$ (and bigger than $\mathit{thr}_i(\p)$) with at least one $a$, and one $b$. In all the cases, it is clear
that $? = \mathtt{update}_i(\mathsf{H})$ and so $? \in \operatorname{fire}_i(\mathit{bias}(\th),\p)$.
We can argue similarly for the other case as well. \end{proof}
\begin{lemma}\label{lem:ab-if-mult}
For every predicate $\p$, for every round $i$ with $\mathtt{mult}$ instruction, there is
a threshold $\th\geq 1/2$ such that $\set{a,b}\incl \operatorname{fire}_i(\mathit{bias}(\th),\p)$. \end{lemma} \begin{proof}
Let $I$ be the $\mathtt{mult}$ instruction in round $i$ with the biggest
threshold.
This threshold is called $\thr_m^{i,1}$ in our notation.
If the operation of $I$ is $\mathrm{smor}$ then we take $\th=1/2$ and argue
similar to the proof of Lemma~\ref{lem:spread}.
If the operation of $I$ is $\min$ then we take $\th>\max(\mathit{thr}_i(\p), \thr_u^i,1/2)$.
Because of the $\mathtt{uni}$ instruction, we can
get $b$ by sending a multi-set $\mathsf{H}$ consisting of all the $b$'s in $\mathit{bias}(\th)$.
Further because of the instruction $I$, if we send the entire tuple as a
multi-set, we get $a$. \end{proof}
\begin{lemma}\label{lem:a-if-min}
Suppose the first round has a $\mathtt{mult}$ instruction with
$\min$ as operation. Then $a\in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$ for
every $\th > 0$. \end{lemma} \begin{proof}
Let the $j^{th}$ $\mathtt{mult}$ instruction be the instruction
with the $\min$ operation. Let $\mathsf{H}$ be any multiset containing
at least one $a$ and one $b$, of size just above $\thr_m^{1,j}$.
By observation~(\ref{eq:syntactic-property}) we have
$\thr_m^{1,j} \ge \mathit{thr}_1(\overline{\p})$ and so we have that $\mathsf{H} \models \overline{\p}_1$ and
$a = \mathtt{update}_1(\mathsf{H})$. \end{proof}
The next sequence of lemmas tells us what can happen in a sequence of rounds.
\begin{lemma}\label{lem:any-qbias-from-qbias}
Suppose none of $\p_k,\dots,\p_l$ is an equalizer. If $\set{b,?}\incl
\operatorname{fire}_k(f,\p_k)$ then for every $\th'$ we have
$f\lact{\p_k}_k\dots\lact{\p_l}_l \mathit{bias}^{?}(\th')$. Similarly, for $b$ replaced
by $a$, and $\mathit{bias}^{?}(\th')$ replaced by $\mathit{bias}^{?}_a(\th')$. \end{lemma} \begin{proof}
The proof is by induction on $l-k$.
If $k=l$ then the lemma is clearly true since we can produce both $b$ and $?$
values, and $\p_k$ is not an equalizer.
For the induction step, consider the last round $l$, and let $\th''=\thr_u^l+\e$ for
some small $\e$.
We have $b\in\operatorname{fire}_l(\mathit{bias}^{?}(\th''),\p_l)$ because of the $\mathtt{uni}$ instruction.
We can also construct a multiset $\mathsf{H}$ from $\mathit{bias}^{?}(\th'')$
of size $1-\e' > \mathit{thr}(\p_l)$ for some small $\e' > \e$
containing $\th''-\e'$ fraction of $b$'s and $1-\th''$ fraction of
$?$.
This multiset shows that $?\in\operatorname{fire}_l(\mathit{bias}^{?}(\th''),\p_l)$.
So from $\mathit{bias}^{?}(\th'')$ in round $l$ we can get $\mathit{bias}^{?}(\th')$ for any $\th'$.
The induction assumption gives us $f\lact{\p_k}_k\dots\lact{\p_{l-1}}_{l-1}
\mathit{bias}^{?}(\th'')$, so we are done. \end{proof}
\begin{lemma}\label{lem:any-frequency}
Suppose none of $\p_k,\dots,\p_l$ is an equalizer, and all rounds $k\dots l$
have $\mathtt{mult}$ instructions.
If $\mathit{set}(f')\incl \operatorname{fire}_k(f,\p_k)$ and $? \notin \mathit{set}(f')$
then $f\lact{\p_k}_k\dots\lact{\p_l}_l f'$ is possible.
\end{lemma} \begin{proof}
We proceed by induction on $l-k$.
The lemma is clear when $k = l$. Suppose $k \neq l$.
Consider two cases:
Suppose $f'$ is $\mathit{solo}^a$ or $\mathit{solo}$. By induction hypothesis
we can reach $f'$ after round $l-1$. Since round $l$
has a $\mathtt{uni}$ instruction it is clear that $f' \lact{\p_l}_l f'$.
Suppose $a,b \in \mathit{set}(f')$.
Lemma~\ref{lem:ab-if-mult} says that there is $\th$ for which
$\set{a,b}\incl\operatorname{fire}_l(\mathit{bias}(\th),\p_l)$.
Hence $\mathit{bias}(\th)\lact{\p_l}_l f'$.
By induction hypothesis we can reach $\mathit{bias}(\th)$ after round $l-1$. \end{proof}
\begin{lemma}\label{lem:any-qbias-from-bias}
Suppose none of $\p_k,\dots,\p_l$ is an equalizer, and some round among $k,\dots,l$
does not have a $\mathtt{mult}$ instruction.
For every $\th$ and every $f$ such that $\set{a,b}\incl
\operatorname{fire}_k(f,\p_k)$ we have $f\lact{\p_k}_k\dots\lact{\p_l}_l \mathit{bias}^{?}(\th)$,
and $f\lact{\p_k}_k\dots\lact{\p_l}_l \mathit{bias}^{?}_a(\th)$. \end{lemma} \begin{proof}
Let $i$ be the first round without a $\mathtt{mult}$ instruction.
Using Lemma~\ref{lem:any-frequency}, from the tuple $f$ at round $k$, we can arrive at round $i$
with the tuple $\mathit{bias}(\th)$ for any $\th$.
We choose $\th$ according to Lemma~\ref{lem:bias-preserving} so that
$\set{b,?}\incl \operatorname{fire}_i(\mathit{bias}(\th),\p_i)$.
Then we can apply Lemma~\ref{lem:any-qbias-from-qbias} to prove the claim.
The reasoning for $\mathit{bias}^{?}_a$ is analogous. \end{proof}
\begin{lemma}\label{lem:no-mult-sequence}
If round $\mathbf{ir}+1$ contains a $\mathtt{mult}$ instruction then the algorithm
does not satisfy agreement, or the $\mathtt{mult}$ instruction can be removed without altering the
correctness of the algorithm. \end{lemma}
\begin{proof}
Suppose round $\mathbf{ir}+1$ contains a $\mathtt{mult}$ instruction.
The first case is when there does not exist any tuple $f$ and an execution
$f \lact{\overline{\p}_1}_1 f_1 \dots \lact{\overline{\p}_{\mathbf{ir}-1}}_{\mathbf{ir}-1} f_{\mathbf{ir}-1}$
such that $a,b \in \operatorname{fire}_\mathbf{ir}(f_{\mathbf{ir}-1},\overline{\p})$. It is then clear
that the $\mathtt{mult}$ instructions in round $\mathbf{ir}+1$ will never be fired
and so we can remove all these instructions in round $\mathbf{ir}+1$.
So it remains to examine the case when there exists a tuple $f$ with
$f \lact{\overline{\p}_1}_1 f_1\lact{\overline{\p}_2}_2 \cdots\lact{\overline{\p}_{\mathbf{ir}-1}}_{\mathbf{ir}-1} f_{\mathbf{ir}-1}$ such that
$a,b \in \operatorname{fire}_{\mathbf{ir}}(f_{\mathbf{ir}-1},\overline{\p})$.
In this case we get
$f_{\mathbf{ir}-1} \lact{\overline{\p}_{\mathbf{ir}}}_{\mathbf{ir}} \mathit{bias}(\th)$ for arbitrary $\th$.
Let $I$ be the $\mathtt{mult}$ instruction in round $\mathbf{ir}+1$ with the
highest threshold value.
Recall that, by the proviso from page~\pageref{proviso}, $\overline{\p}$ is not an equalizer.
We consider two sub-cases:
Suppose $I$ has $\mathrm{smor}$ as its operation.
Then we consider $f_{\mathbf{ir}-1} \lact{\overline{\p}_{\mathbf{ir}}}_{\mathbf{ir}} \mathit{spread}$.
As $I$ has $\mathrm{smor}$ as operation, from $\mathit{spread}$ we can construct a multiset $\mathsf{H}$ containing more $a$'s
than $b$'s such that the size of $\mathsf{H}$ is bigger than $\mathit{thr}_{\mathbf{ir}+1}(\overline{\p})$ and $\thr_m^{\mathbf{ir}+1,1}$.
Similarly we can construct a multiset having more $b$'s than $a$'s.
Hence we get $\mathit{spread} \lact{\overline{\p}_{\mathbf{ir}+1}}_{\mathbf{ir}+1} \mathit{bias}(\th')$
for arbitrary $\th'$.
If all rounds after $\mathbf{ir}+1$ have $\mathtt{mult}$ instructions, then we can
apply Lemma~\ref{lem:any-frequency} to conclude that we can
reach the tuple $\mathit{spread}$ after round $r$, thereby deciding
on both $a$ and $b$ and violating agreement.
Otherwise we can use Lemma~\ref{lem:any-qbias-from-bias} to conclude that we can
reach the tuple $\mathit{spread}^?$ after round $r$ and hence
make half the processes decide on $b$.
Notice that after this phase the state of the algorithm is $(\mathit{spread},\mathit{spread}^?)$.
We know, by Lemma~\ref{lem:no-mult} that the first round has a $\mathtt{mult}$ instruction.
This instruction has $\mathrm{smor}$ or $\min$ as its operation;
in either case it is clear that $a \in \operatorname{fire}_1(\mathit{spread},\overline{\p})$ and
so we can get $\mathit{spread} \lact{\overline{\p}_1}_1 \mathit{solo}^a$ and
$\mathit{solo}^a \lact{\overline{\p}_i}_i \mathit{solo}^a$ for $i > 1$, thereby making the
rest of the undecided processes decide on $a$. Hence
agreement is violated.
Suppose $I$ has $\min$ as its operation. Then we consider
$f_{\mathbf{ir}-1} \lact{\overline{\p}_{\mathbf{ir}}}_{\mathbf{ir}} \mathit{bias}(\th)$ where $\th > \max(\thr_u^{\mathbf{ir}+1},\thr_u^{1},\mathit{thr}_{\mathbf{ir}+1}(\overline{\p} ))$ is sufficiently big.
It is clear that $b \in \operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}(\th),\overline{\p})$.
Further if we take our multi-set $\mathsf{H}$ to be $\mathit{bias}(\th)$ itself,
then (because of the instruction $I$) we have $a \in \operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}(\th),\overline{\p})$.
Hence we get $\mathit{bias}(\th) \lact{\overline{\p}_{\mathbf{ir}+1}}_{\mathbf{ir}+1} \mathit{bias}(\th')$
for arbitrary $\th'$.
As in the previous case, either this immediately allows us to conclude that agreement is violated, or this allows us to make half the processes decide on $a$. In the latter case,
note that the state of the algorithm after this
phase will be $(\mathit{bias}(\th),\mathit{spread}^?_a)$.
Since $\th \ge \thr_u^1$ and since $\thr_u^1 \ge \mathit{thr}_1(\overline{\p})$ by observation~(\ref{eq:syntactic-property}), it follows
that $b \in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$.
Hence we can get $\mathit{solo}$ as the tuple after the first round and decide
on $b$, as in the previous case.
\end{proof}
\begin{lemma}\label{lem:no-min}
If the first round has a $\mathtt{mult}$ instruction with
$\min$ as the operation then the algorithm does not satisfy agreement. \end{lemma}
\begin{proof}
Suppose that indeed the first round does have a $\mathtt{mult}$ instruction with
$\min$ operation.
Thanks to our proviso, the global predicate does not have an equalizer, hence we
can freely apply Lemmas~\ref{lem:any-frequency} and~\ref{lem:any-qbias-from-bias}.
We use Lemma~\ref{lem:ab-if-mult} to find $\th$ with $\set{a,b}\incl
\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$.
We consider two cases.
If all the rounds $2,\dots,\mathbf{ir}$ have a $\mathtt{mult}$ instruction then
Lemma~\ref{lem:any-frequency} allows us to get $\mathit{bias}(\th')$, for arbitrary
$\th'$, after round $\mathbf{ir}$.
By Lemma~\ref{lem:no-mult-sequence} there is no $\mathtt{mult}$ instruction in round
$\mathbf{ir}+1$.
By Lemma~\ref{lem:bias-preserving} there is $\th'$ such that $\set{b,?}\incl
\operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}(\th'),\overline{\p})$.
Using Lemma~\ref{lem:any-qbias-from-qbias} we can make some process decide on $b$, while
keeping the other processes undecided.
Hence the state of the algorithm after this phase is $(\mathit{bias}(\th'),\mathit{spread}^?)$.
By Lemma~\ref{lem:a-if-min}, $a\in\operatorname{fire}_1(\mathit{bias}(\th'),\overline{\p})$, and so we can get
$\mathit{solo}^a$ as the tuple after the first round and make all the other processes decide on $a$.
The second case is when one of the rounds $2,\dots,\mathbf{ir}$ does not have a $\mathtt{mult}$
instruction.
For arbitrary $\th'$, Lemma~\ref{lem:any-qbias-from-bias} allows us to get
$\mathit{bias}^{?}(\th')$ after round $\mathbf{ir}$.
As in the above case, we use it to make some process decide on $b$ while
leaving the others undecided.
In the next phase we make the other processes decide on $a$. \end{proof}
\begin{lemma}\label{lem:constants}
If the property of constants from Definition~\ref{def:structure} is not satisfied then the algorithm does not satisfy
agreement. \end{lemma}
\begin{proof}
We consider an execution of a phase under the
global predicate and so we can freely use Lemmas~\ref{lem:any-frequency}
and~\ref{lem:any-qbias-from-bias}.
We have seen in Lemma~\ref{lem:no-min} that in the first round all the
$\mathtt{mult}$ instructions must be $\mathrm{smor}$. We start with the state
$(\mathit{spread},\mathit{solo}^?)$.
We consider two cases.
\textbf{First case:} There are no preserving rounds before
round $\mathbf{ir}+1$. Hence every round before $\mathbf{ir}+1$ has a $\mathtt{mult}$ instruction. By Lemma~\ref{lem:any-frequency} from $\mathit{spread}$
we can get $\mathit{bias}(\th)$ (for any $\th$) as the tuple before
round $\mathbf{ir}+1$. Choose $\th = \thr_u^{\mathbf{ir}+1} + \e$ for some
small $\e$.
By Lemma~\ref{lem:no-mult-sequence} we know
that round $\mathbf{ir}+1$ does not have any $\mathtt{mult}$ instruction.
This implies that $? \in \operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}(\th),\overline{\p})$.
Further, by observation~(\ref{eq:syntactic-property}) we know
that $\thr_u^{\mathbf{ir}+1} \ge \mathit{thr}_{\mathbf{ir}+1}(\overline{\p})$.
Therefore, $b \in \operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}(\th),\overline{\p})$. Hence
$\set{b,?} \subseteq \operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}(\th),\overline{\p})$.
\textbf{Second case:} There is a round $j < \mathbf{ir}+1$ such
that round $j$ is preserving. Let $j$ be the first such round.
By Lemma~\ref{lem:any-frequency} from $\mathit{spread}$ we can get
$\mathit{bias}(\thr_u^j+\e')$ (for some small $\e'$) before round $j$.
Since round $j$ is preserving it follows that either round $j$ has no $\mathtt{mult}$ instruction or $\mathit{thr}_j(\overline{\p}) < \max(\thr_u^j,\thr_m^{j,k})$.
It is then clear that $? \in \operatorname{fire}_j(\mathit{bias}(\thr_u^j+\e'),\overline{\p})$.
It is also clear that $b \in \operatorname{fire}_j(\mathit{bias}(\thr_u^j+\e'),\overline{\p})$.
Notice that by Lemma~\ref{lem:any-qbias-from-bias} we can
get $\mathit{bias}^{?}(\th)$ (for any $\th$) as the tuple before round
$\mathbf{ir}+1$. Choose $\th = \thr_u^{\mathbf{ir}+1} + \e$ for some small
$\e$.
It is clear that we can construct a multi-set $\mathsf{H}$ of size $1-\e$
consisting of a $\thr_u^{\mathbf{ir}+1}$ fraction of $b$'s and
the remaining $?$'s from the tuple $\mathit{bias}^{?}(\th)$. Notice that $\mathsf{H}$
does not satisfy any instruction and (for a small enough $\e$)
its size is bigger than $\mathit{thr}_{\mathbf{ir}+1}(\overline{\p})$.
Further by sending the entire tuple as a multi-set we get that
$b \in \operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}^{?}(\th),\overline{\p})$. Hence
$\set{b,?} \subseteq \operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}^{?}(\th),\overline{\p})$.\\
In both cases, we can then use Lemma~\ref{lem:any-qbias-from-qbias}
to ensure that half the processes remain undecided and half the processes decide on $b$.
Further, in both cases, we can arrange the execution in such a way that the state after this phase is either $(\mathit{bias}(\th),\mathit{spread}^?)$ or $(\mathit{spread},\mathit{spread}^?)$.
If the state is $(\mathit{spread},\mathit{spread}^?)$ then by Lemma~\ref{lem:spread} $a \in \operatorname{fire}_1(\mathit{spread},\overline{\p})$
and so in the next phase we can get $\mathit{solo}^{a}$ as the tuple after the first
round and make the other processes decide on $a$.
In the remaining case we consider separately the two conditions on constants
that can be violated.
If $\thr_m^{1,k}/2<1-\thr_u^{\mathbf{ir}+1}$ then in the first round of the next phase
consider the multiset $\mathsf{H}$ containing all the $a$'s in $\mathit{bias}(\th)$ and a number of $b$'s
smaller by $\e$ than the number of $a$'s.
The size of this set is $(1-\th)+(1-\th-\e)=2(1-\thr_u^{\mathbf{ir}+1})-3\e$.
For a suitably small $\e$, this quantity is bigger than $\thr_m^{1,k}$,
which by observation~(\ref{eq:syntactic-property}) is bigger
than $\mathit{thr}_1(\overline{\p})$.
So we can get $\mathit{solo}^a$ as the tuple after the first round and then use this to make the undecided processes decide on $a$.
If $\thr_u^1<1-\thr_u^{\mathbf{ir}+1}$, then just take the multiset $\mathsf{H}$
consisting of all the $a$'s in $\mathit{bias}(\th)$.
Once again, for a small enough $\e$, the size of this set is bigger than $\thr_u^1$, which by
observation~(\ref{eq:syntactic-property}) is bigger than $\mathit{thr}_1(\overline{\p})$.
Hence, we can get $\mathit{solo}^a$ as the tuple after the first round
and use this to make the undecided processes decide on $a$. \end{proof}
\begin{lemma}\label{lem:agreement}
If all the structural properties are satisfied then the algorithm satisfies agreement. \end{lemma}
\begin{proof}
It is clear that the algorithm satisfies agreement when the state of
the $\mathit{inp}$ variable
is either $\mathit{solo}$ or $\mathit{solo}^a$. Suppose
we have an execution $(\mathit{bias}(\th),d) \act{\overline{\p}}\cdots\act{\overline{\p}}
(\mathit{bias}(\th'),d')$
such that $(\mathit{bias}(\th'),d')$ is the first state in this execution
with a process $p$ that has decided on a value.
We consider the case when $a$ is this value.
The other case is analogous.
Since $\thr_m^{1,k}/2 \geq 1-\thr_u^{\mathbf{ir}+1}$ and $\thr_m^{1,k}<1$, we have $\thr_u^{\mathbf{ir}+1} > 1/2$. Further, round $\mathbf{ir}+1$ does not have a $\mathtt{mult}$ instruction.
It then follows directly from the semantics that if $q$ is a process
then either $d'(q) = a$ or $d'(q) = ?$.
Further, notice that since $a$ was decided by some process, it has to be
the case that more than a $\thr_u^{\mathbf{ir}+1}$ fraction of the processes have $a$ as their $\mathit{inp}$ value. Hence $\th' < 1 - \thr_u^{\mathbf{ir}+1}$.
Since $\th' < 1 - \thr_u^{\mathbf{ir}+1} \le \thr_u^1$, it follows that $b$ cannot be fired from $\mathit{bias}(\th')$ using the $\mathtt{uni}$ instruction in the first round.
Since $\th' < 1 - \thr_u^{\mathbf{ir}+1} \le \thr_m^{1,k}/2$ and since every $\mathtt{mult}$ instruction in the first round has $\mathrm{smor}$ as its operator, it follows that
$b$ cannot be fired from $\mathit{bias}(\th')$ using a $\mathtt{mult}$ instruction either.
Hence the number of $b$'s in the $\mathit{inp}$ tuple can only decrease from this point onwards, and so no process can decide on $b$ later.
The same argument applies if there are more than two values. \end{proof}
\subsubsection*{Part 2: termination} We consider only two values $a,b$. It is direct from the arguments below that the termination proof also works if there are more values.
\begin{lemma}\label{lem:fire-one}
For the global predicate $\overline{\p}$: $a\in\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$ iff $\th<\bar{\thr}$.
(Similarly $b\in\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$ iff $1-\bar{\thr}<\th$). \end{lemma} \begin{proof}
In order for a multi-set $\mathsf{H}$ to be such that $a = \mathtt{update}_1(\mathsf{H})$
there are two possibilities: (i) it should be of size $>\thr_u^1$ and contain only $a$'s, or (ii) of size
$>\thr_m^{1,k}$ and contain at least as many $a$'s as $b$'s.
Recall that by observation~(\ref{eq:syntactic-property}) on page~\pageref{eq:syntactic-property} we have
$\thr_u^1 \ge \mathit{thr}_1(\overline{\p})$ and $\thr_m^{1,k} \ge \mathit{thr}_1(\overline{\p})$.
The number of $a$'s in $\mathit{bias}(\th)$ is $1-\th$.
So the first case is possible iff $1-\th>\thr_u^1$, i.e., when $\th<1-\thr_u^1$.
Further if $\th < 1-\thr_u^1$, then we can send a set $\mathsf{H}$ consisting only of
$a$'s, such that $|\mathsf{H}| > \thr_u^1 \ge \mathit{thr}_1(\overline{\p})$
and so $a \in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$.
The second case is possible only if $1-\th>\thr_m^{1,k}/2$, or equivalently, $\th<1-\thr_m^{1,k}/2$. Further, if $\th < 1-\thr_m^{1,k}/2$,
then we can send a set $\mathsf{H}$ of size just above $\thr_m^{1,k}$ containing
at least as many $a$'s as $b$'s, which ensures that $a \in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$.
To sum up, $a \in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$ iff $\th<\bar{\thr}$.
The proof for $b \in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$ iff $1-\bar{\thr} < \th$ is similar. \end{proof}
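The ``iff'' can also be checked numerically. Repeating the simplified one-$\mathtt{uni}$/one-$\mathtt{mult}$ model of $\operatorname{fire}_1$ from the earlier sketch (so that this fragment is self-contained; the concrete constants are ours), exact rational arithmetic reproduces the border $\bar{\thr}$:
\begin{verbatim}
from fractions import Fraction as F

def smor(na, nb):
    return 'a' if na >= nb else 'b'

def fire1(A, B, thr_u, thr_m, thr_p):
    n = A + B
    out = set()
    for na in range(A + 1):
        for nb in range(B + 1):
            h = na + nb
            if h <= thr_p * n:
                continue
            if (na == 0 or nb == 0) and h > thr_u * n:
                out.add('a' if nb == 0 else 'b')
            elif na > 0 and nb > 0 and h > thr_m * n:
                out.add(smor(na, nb))
    return out

n, thr_u, thr_m = 60, F(1, 2), F(2, 3)
thr_p = F(1, 2)                          # threshold of the global predicate
border = max(1 - thr_u, 1 - thr_m / 2)   # = 2/3 for these constants
for nb in range(n + 1):                  # theta = nb / n
    got_a = 'a' in fire1(n - nb, nb, thr_u, thr_m, thr_p)
    assert got_a == (F(nb, n) < border)
\end{verbatim}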
\begin{corollary}\label{cor:no-a-above-bthr}
For every predicate $\p$, if $\th\geq\bar{\thr}$ then $a\not\in\operatorname{fire}_1(\mathit{bias}(\th),\p)$.
Similarly if $\th\leq 1-\bar{\thr}$ then $b \not \in\operatorname{fire}_1(\mathit{bias}(\th),\p)$. \end{corollary} \begin{proof}
We have assumed that every predicate implies the global predicate, so every
$\mathsf{H}$ set that is admissible w.r.t.\ some predicate, is also admissible
w.r.t.\ the global predicate.
Lemma~\ref{lem:fire-one}, says that $a$ cannot be obtained under the global predicate if $\th \geq \bar{\thr}$.
Similar proof holds for the other claim as well. \end{proof}
\begin{lemma}\label{lem:unifier}
Suppose $\p$ is a unifier and $\mathit{bias}(\th)\act{\p}f$.
If $\thr_u^1\leq \thr_m^{1,k}$ or $1-\bar{\thr}\le\th\le\bar{\thr}$
then $f=\mathit{solo}$ or $f=\mathit{solo}^a$. \end{lemma} \begin{proof}
We first show that the value $?$ cannot be produced in the first round.
Since $\p$ is a unifier we have $\mathit{thr}_1(\p) \geq \thr_m^{1,k}$.
If $\mathit{thr}_1(\p) \geq \thr_u^1$ then we are done.
Otherwise $\mathit{thr}_1(\p)< \thr_u^1$, implying $\mathit{thr}_1(\p)\geq \bar{\thr}$, by the definition
of unifier.
We consider $1-\bar{\thr}\le\th\le\bar{\thr}$, and the tuple $\mathit{bias}(\th)$.
In this case, every heard-of multiset $\mathsf{H}$ strictly bigger than the
threshold $\bar{\thr}$ (and hence bigger than $\mathit{thr}_1(\p)$)
must contain both $a$ and $b$.
Since there is a $\mathtt{mult}$ instruction in the first round
(and since $\mathit{thr}_1(\p) \ge \thr_m^{1,k}$), the first
round cannot produce $?$, i.e., after the first round
the value of the variable $x_1$ of each process is either $a$ or $b$.
Let $i$ be the round such that $\p_i$ is an equalizer
and rounds $2,\dots,i$ are non-preserving.
This round exists by the definition of a unifier.
Thanks to above, we know that after the first round no process
has $?$ as their $x_1$ value. Since rounds $2,\dots,i$ are non-preserving, it follows that
till round $i$ we cannot produce $?$ under the predicate $\p$.
Because $\p_i$ has an equalizer, after round $i$ we either have the tuple $\mathit{solo}$ or $\mathit{solo}^a$.
This tuple stays till round $\mathbf{ir}$ as the rounds $i+1,\dots,\mathbf{ir}$ are solo-safe. \end{proof}
Observe that if rounds $\mathbf{ir}+1,\dots,r$ of a unifier $\p$ are solo-safe then $\p$ is also a decider and all processes decide. Otherwise some processes may not decide. So a unifier by itself is not sufficient to guarantee termination.
\begin{lemma}\label{lem:decider}
If $\p$ is a decider and $(\mathit{solo},\mathit{solo}^?)\act{\p}(f',d')$
then $(f',d') = (\mathit{solo},\mathit{solo})$.
Similarly, if $(\mathit{solo}^a,\mathit{solo}^?) \act{\p}(f',d')$ then $(f',d') =
(\mathit{solo}^a,\mathit{solo}^a)$.
In case $\thr_m^{1,k}\leq \thr_u^1$, for every $\th\geq\bar{\thr}$: if
$(\mathit{bias}(\th),\mathit{solo}^?)\act{\p}(f',d')$ then $(f',d')=(\mathit{solo},\mathit{solo})$
and for every $\th\leq 1-\bar{\thr}$: if
$(\mathit{bias}(\th),\mathit{solo}^?)\act{\p}(f',d')$ then $(f',d')=(\mathit{solo}^a,\mathit{solo}^a)$. \end{lemma} \begin{proof}
The first two statements are direct from the definition as all the rounds in a
decider are solo-safe.
We only prove the third statement, as the proof of the fourth statement is similar.
For the third statement, by Corollary~\ref{cor:no-a-above-bthr} after the first round we
cannot produce $a$'s under $\p_1$.
Because the first round is solo-safe, we get
$\thr_u^1\leq\mathit{thr}_1(\p)$; and since $\thr_m^{1,k}\leq \thr_u^1$, we get $\thr_m^{1,k}\leq \mathit{thr}_1(\p)$.
Hence the first round cannot produce $?$ either.
This means that from $\mathit{bias}(\th)$ as the input tuple, we can
only get $\mathit{solo}$ as the tuple after the first round under the
predicate $\p_1$.
Since rounds $2,\dots,r$ are solo-safe it follows that
all the processes decide on $b$ in round $r$. \end{proof}
We are now ready to show one direction of Theorem~\ref{thm:core}.
\begin{lemma}\label{main-positive}
If an algorithm in the core language has the structural properties from
Definition~\ref{def:structure} and satisfies condition T, then it solves consensus. \end{lemma} \begin{proof}
Lemma~\ref{lem:agreement} says that the algorithm satisfies agreement.
If condition T holds, there is a unifier followed by a decider.
If $\thr_u^1\leq \thr_m^{1,k}$ then after a unifier the $\mathit{inp}$ tuple becomes
$\mathit{solo}$ or $\mathit{solo}^{a}$ thanks to Lemma~\ref{lem:unifier}.
After a decider all processes decide thanks to Lemma~\ref{lem:decider}.
Otherwise $\thr_m^{1,k}< \thr_u^1$.
If before the unifier the $\mathit{inp}$ tuple was $\mathit{bias}(\th)$ with $1-\bar{\thr}\le\th\le\bar{\thr}$ then after the
unifier $\mathit{inp}$ becomes $\mathit{solo}$ or $\mathit{solo}^{a}$ thanks to Lemma~\ref{lem:unifier}.
We once again conclude as above.
If $\th>\bar{\thr}$ (or $\th < 1-\bar{\thr}$) then by Corollary~\ref{cor:no-a-above-bthr}, the number of $b$'s (resp. number of $a$'s) can only
increase after this point. Hence till the decider,
the state of the $\mathit{inp}$ tuple remains as $\mathit{bias}(\th')$ with
$\th' > \bar{\thr}$ (resp. $\th' < 1-\bar{\thr}$).
After a decider all processes decide thanks to Lemma~\ref{lem:decider}. \end{proof}
\subsubsection*{Part 3: non-termination}
\begin{lemma}\label{lem:not-decider}
If $\p$ is not a decider then
$\mathit{solo}\act{\p}\mathit{solo}$ and $\mathit{solo}^a \act{\p} \mathit{solo}^a$; namely, no process may decide.
\end{lemma} \begin{proof}
If $\p$ is not a decider then there is a round, say $i$, that is not solo-safe.
By definition this means $\mathit{thr}_i(\p)< \thr_u^i$.
It is then easy to verify that for $j < i$, $\mathit{solo} \lact{\p_j}_j \mathit{solo}$,
$\mathit{solo} \lact{\p_i}_i \mathit{solo}^?$ and $\mathit{solo}^? \lact{\p_k}_k \mathit{solo}^?$ for $k > i$. Hence this
ensures that no process decides during this phase.
Similar proof holds when the $\mathit{inp}$ tuple is $\mathit{solo}^a$.
\end{proof}
\begin{lemma}\label{lem:global-th}
For the global predicate $\overline{\p}$: if $1/2\leq\th<\bar{\thr}$, then
$\mathit{bias}(\th)\act{\overline{\p}}\mathit{bias}(\th')$ for every $\th'\geq 1/2$. \end{lemma} \begin{proof}
We first observe that $a,b \in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$.
Indeed, by Lemma~\ref{lem:fire-one}, $a\in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$.
Further since $1/2 \le \th$ and every $\mathtt{mult}$ instruction
in the first round has $\mathrm{smor}$ as operator, it
follows that $b \in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$.
Recall that by our proviso, the global predicate is not an equalizer.
Suppose there are $\mathtt{mult}$ instructions in rounds $2,\dots,\mathbf{ir}$.
Then Lemma~\ref{lem:any-frequency} allows us to get
$\mathit{bias}(\th')$ as the tuple after round $\mathbf{ir}$.
Moreover, our proviso from page~\pageref{proviso} says that there is no
$\mathtt{mult}$ instruction in round $\mathbf{ir}+1$.
So we can get $\mathit{solo}^{?}$ as the tuple after round $\mathbf{ir}+1$ by sending the whole multiset. We can then propagate the tuple $\mathit{solo}^{?}$ all
the way till the last round.
This ensures that no process decides and we are done in this case.
Otherwise there is a round $j$ such that $2 \le j \le \mathbf{ir}$ and $j$ does not have any $\mathtt{mult}$ instruction.
By Lemma~\ref{lem:any-qbias-from-bias}
we can get $\mathit{bias}^{?}(\th'')$ as well as $\mathit{bias}^{?}_a(\th'')$ (for any $\th''$) after
round $\mathbf{ir}$.
There are two cases depending on whether $\th'\geq\th$.
If $\th'\geq \th$, then we consider the tuple $\mathit{bias}^{?}(\th'')$ for some $1/2\le \th''<
\min(\thr_u^{\mathbf{ir}+1}-\e,\th)$ (where $\e$ is some small number).
Notice that by Lemma~\ref{lem:constants} we have $\thr_m^{1,k}/2 \geq 1-\thr_u^{\mathbf{ir}+1}$ and since
$\thr_m^{1,k} < 1$, this implies that $\thr_u^{\mathbf{ir}+1} > 1/2$, and so such a $\th''$ exists.
It is clear that $? \in \operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}^{?}(\th''),\overline{\p})$
and so we can get $\mathit{solo}^{?}$ as the tuple after round $\mathbf{ir}+1$ thereby
ensuring that no process decides.
To conclude, we need to arrange this execution so that the state of $\mathit{inp}$
becomes $\mathit{bias}(\th')$ after this phase.
Since $\th''\ge1/2$ we have enough $b$'s to change $\th'-\th$ fraction of $a$'s
to $b$'s.
We leave the other values unchanged.
This changes the state of $\mathit{inp}$ from $\mathit{bias}(\th)$
to $\mathit{bias}(\th')$.
Suppose $\th' < \th$.
By Lemma~\ref{lem:any-qbias-from-bias}, after round $\mathbf{ir}$ we can reach the
tuple
$\mathit{bias}^{?}_a(\th'')$ for $\th''=\th-\th'$.
Arguing as before, we can ensure that the
state of the $\mathit{inp}$ can be converted to $\mathit{bias}(\th')$. We just have
to show that all processes can choose to not decide in the last
round.
We observe that $\th'' \le \thr_u^{\mathbf{ir}+1}$.
Indeed since $\th < \bar{\thr} \le 1$ and $\th' \geq 1/2$, it
follows that $\th'' < 1/2 \le \thr_u^{\mathbf{ir}+1}$, where the last inequality
follows from the discussion in the previous paragraph.
Now, as $\th'' \le \thr_u^{\mathbf{ir}+1}$, if we send
the entire tuple $\mathit{bias}_a^?(\th'')$ to every process, we get $\mathit{solo}^{?}$ as the tuple after
round $\mathbf{ir}+1$, hence making the processes not decide on anything in the last
round.
\end{proof}
\begin{lemma}\label{lem:not-uni}
If $\p$ is not a unifier then
\begin{equation*}
\mathit{bias}(\th)\act{\p}\mathit{bias}(\th) \quad\text{for some $1/2\leq\th<\bar{\thr}$.}
\end{equation*} \end{lemma} \begin{proof}
We examine all the reasons why $\p$ may not be a unifier.
First let us look at conditions on constants.
If $\mathit{thr}_1(\p)<\thr_m^{1,k}$ then let $\th=1/2$. In the first round, we can then send to every process a multi-set
$\mathsf{H}$ with both $a$'s and $b$'s, and of size in between $\mathit{thr}_1(\p)$ and $\thr_m^{1,k}$.
This allows us to get $\mathit{solo}^{?}$ as the tuple after the first round, and
ensures that neither the $\mathit{inp}$ tuple nor the $\mathit{dec}$ tuple gets
updated in this phase.
Suppose $\mathit{thr}_1(\p)<\thr_u^1$ and $\mathit{thr}_1(\p) < \bar{\thr}$.
Let $\e$ be such that $\mathit{thr}_1(\p) + \e < \min(\thr_u^1,\bar{\thr})$
and let $\th = \max(\mathit{thr}_1(\p)+\e,1/2)$.
In the first round, by sending to every process a multiset consisting of a $(\mathit{thr}_1(\p)+\e)$ fraction of $b$'s from $\mathit{bias}(\th)$, we get $\mathit{solo}^{?}$ as the tuple after the first round
and that allows us to conclude as before.
The second reason is that there is no equalizer in $\p$ up to round $\mathbf{ir}$.
We take $\th=1/2$.
By Lemmas~\ref{lem:spread} and~\ref{lem:no-min}, we have
$a,b\in\operatorname{fire}_1(\mathit{spread},\p)$.
If all the rounds $1,\dots,\mathbf{ir}$ have a $\mathtt{mult}$ instruction then
Lemma~\ref{lem:any-frequency} allows us to get $\mathit{spread}$ as the tuple after round $\mathbf{ir}$.
Lemma~\ref{lem:no-mult-sequence} says that there cannot be a $\mathtt{mult}$ instruction
in round $\mathbf{ir}+1$, so by sending the whole multiset in this round, we get
$\mathit{solo}^{?}$ as the tuple after round $\mathbf{ir}+1$. This
ensures that no process decides in this phase.
The other case is when there is a round among $1,\dots,\mathbf{ir}$ without a $\mathtt{mult}$ instruction.
Lemma~\ref{lem:any-qbias-from-bias} allows us to get $\mathit{solo}^{?}$ after round $\mathbf{ir}$ and
so neither $\mathit{inp}$ nor $\mathit{dec}$ of any process gets updated.
The last reason is that there is a round before an equalizer that is
preserving, or a round after the equalizer that is not solo-safe.
In both cases we can get $\mathit{solo}^{?}$ as the tuple at round $\mathbf{ir}$ and conclude as before. \end{proof}
The next lemma gives the main non-termination argument. \begin{lemma}
If the structural conditions from Definition~\ref{def:structure} hold, but condition T
does not hold then the algorithm does not terminate. \end{lemma} \begin{proof}
We recall that the communication predicate is:
\begin{equation*}
(\lG\overline{\p})\land(\lF(\p^1\land\lF(\p^2\land\dots (\lF\p^k)\dots)))
\end{equation*}
and that we have assumed that the global predicate implies all sporadic predicates.
This means, for example, that if the global predicate is a decider then all
sporadic predicates are deciders.
We construct an execution
\begin{equation*}
f_1\act{\p^1}f'_1\act{\overline{\p}}f_2\act{\p^2}f'_2\act{\overline{\p}}\dots \act{\p^k}f'_k\act{\overline{\p}} f'_k
\end{equation*}
where every second arrow is a transition on the global predicate.
The last transition on the global predicate is a self-loop.
Recall that we write $f\act{\p}f'$ for $(f,\mathit{solo}^{?})\act{\p}(f',\mathit{solo}^{?})$, so
indeed the run as above is a witness to non-termination.
We examine several cases.
If none of $\p^1,\dots,\p^k,\overline{\p}$ is a decider, then we take $f_i=f'_i=\mathit{solo}$ for
all $i=1,\dots,k$.
By Lemma~\ref{lem:not-decider} we get the desired execution.
Suppose the last decider in the sequence $\p^1,\dots,\p^k$ is $\p^l$.
(Notice that if the global predicate $\overline{\p}$ is a decider then $l=k$.)
By our assumption, none of $\p^1,\dots,\p^l$ are unifiers.
By Lemma~\ref{lem:not-uni}, for every $\p^i$, $i=1,\dots,l$, there is
$1/2\leq\th_i<\bar{\thr}$ such that $\mathit{bias}(\th_i)\act{\p^i}\mathit{bias}(\th_i)$.
So we take $f_i=f'_i=\mathit{bias}(\th_i)$.
We can then use Lemma~\ref{lem:global-th} to get $f'_i \act{\overline{\p}} f_{i+1}$, for
all $i=1,\dots,l-1$.
This gives us an execution up to $f'_l$.
To complete the execution we consider two cases.
If $l = k$, then by Lemma~\ref{lem:global-th} we have
$f'_k\act{\overline{\p}}f'_k$ and so we are done.
Otherwise $l<k$, and we use Lemma~\ref{lem:global-th} to get
$f'_l\act{\overline{\p}}\mathit{solo}$.
We set $f_j=f'_j=\mathit{solo}$ for $j>l$.
Since $l < k$, neither the global predicate nor any of $\p^{l+1},\dots,\p^k$ is a
decider, and so by Lemma~\ref{lem:not-decider} we get
the desired execution. \end{proof}
\section{Proofs for algorithms with timestamps}
We prove the characterization from Theorem~\ref{thm:ts}. Recall that in this extension we add timestamps to the $\mathit{inp}$ variable, i.e., timestamps are sent along with $\mathit{inp}$ and are updated whenever $\mathit{inp}$ is updated. The semantics of rounds differs only in the first round, where we have $(f_0,t) \lact{}_1 f_1$ instead of the $f_0 \lact{}_1 f_1$ of the core language. Further, whenever the $\mathit{inp}$ of a process is updated, the timestamp is updated as well. (In particular, if the value of $\mathit{inp}$ of a process was $a$ and later it was updated to $a$ once again, then in principle the value of $\mathit{inp}$ does not change but the timestamp is updated.)
\begin{definition}
We introduce some abbreviations for tuples of values with timestamps.
For every tuple of values $f$, and every $i \in \mathbb{N}$ define
$(f,i)$ to be an $\mathit{inp}$-timestamp tuple where the value of $\mathit{inp}$ for process
$p$ is $f(p)$ and the value of the timestamp is $i$. So, for example,
$(\mathit{spread},0)$ denotes the tuple where the value of $\mathit{inp}$ for half of the
processes is $a$, for the other half it is $b$, and the
timestamp for every process is $0$. \end{definition}
Similarly to the core case, we give the proof of Theorem~\ref{thm:ts} in three parts: we deal first with structural properties, then with termination, and finally with non-termination.
\subsubsection*{Part 1: Structural properties for timestamps} The structure of the argument is similar to the core case.
\begin{lemma}\label{lem:ts-no-mult}
If there is no $\mathtt{mult}$ instruction in the first round then the algorithm does not have
the termination property. \end{lemma} \begin{proof}
It is easy to verify that $(\mathit{spread},0) \act{\p} (\mathit{spread},0)$ is a phase transition, for
every communication predicate $\p$. \end{proof}
\begin{lemma}\label{lem:ts-no-uni}
If there is a round without a $\mathtt{uni}$ instruction then the algorithm does not have
the termination property. \end{lemma} \begin{proof}
Let $l$ be the first round without a $\mathtt{uni}$ instruction. If $l \le \mathbf{ir}$,
we have $(\mathit{solo}^a,0)\act{\p}(\mathit{solo}^a,0)$ for every communication predicate $\p$.
Otherwise we get $(\mathit{solo}^a,i) \act{\p} (\mathit{solo}^a,i+1)$ for every communication predicate. \end{proof}
The next lemma points out a crucial difference with the case without timestamps (cf.~Lemma~\ref{lem:fire-one}).
\begin{lemma}\label{lem:ts-bias-fire}
For every
$\p$, we have $\set{a,b}\incl\operatorname{fire}_1((\mathit{bias}(\th),i),\p)$ for every $i$
and sufficiently big $\th$. \end{lemma}
\begin{proof}
We let $\th = \max(\thr_u^1,\mathit{thr}_1(\p)) + \e$ for small enough $\e$.
So $b \in \operatorname{fire}_1((\mathit{bias}(\th),i),\p)$, when we take $\mathsf{H}$ to contain all
the $b$'s.
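Spelling out this choice of $\th$: the $\mathsf{H}$ set of all the $b$'s has size $\th$, and by construction
\begin{equation*}
\th \;>\; \thr_u^1 \qquad\text{and}\qquad \th \;>\; \mathit{thr}_1(\p),
\end{equation*}
so this $\mathsf{H}$ set is above the bound $\mathit{thr}_1(\p)$ coming from the predicate and above the threshold of the $\mathtt{uni}$ instruction of the first round.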
Since all the $\mathtt{mult}$ instructions have $\mathrm{maxts}$ as their operator,
it follows that if we take a multi-set $\mathsf{H}$ consisting of all the values in
the tuple then $a = \mathtt{update}_1(\mathsf{H})$. \end{proof}
Since the semantics of the rounds remains the same except for the first one, Lemmas~\ref{lem:any-qbias-from-qbias},~\ref{lem:any-frequency} and~\ref{lem:any-qbias-from-bias} apply to timestamp algorithms for rounds $k\geq 2$. For the first round, we get the following reformulations.
\begin{lemma}\label{lem:ts-any-frequency}
Suppose rounds $1\dots l$ all have $\mathtt{mult}$
instructions
and none of $\p_1,\dots,\p_l$ is an equalizer.
If $\mathit{set}(f')\incl \operatorname{fire}_1((f,t),\p_1)$ and $? \notin \mathit{set}(f')$
then
$(f,t)\lact{\p_1}_1\dots\lact{\p_l}_l f'$ is possible. \end{lemma} \begin{proof}
Same as that of Lemma~\ref{lem:any-frequency}. \end{proof}
\begin{lemma}\label{lem:ts-any-bias}
Suppose none of $\p_1,\dots,\p_l$ is an equalizer, and some round $1,\dots,l$
does not have a $\mathtt{mult}$ instruction.
For every $\th$ and every $(f,t)$ such that $\set{a,b}\in
\operatorname{fire}_1((f,t),\p_1)$ or $\set{b,?}\in\operatorname{fire}_1((f,t),\p_1)$ we have
$(f,t)\lact{\p_1}_1\dots\lact{\p_l}_l \mathit{bias}^{?}(\th)$. \end{lemma} \begin{proof}
The same argument as for Lemmas~\ref{lem:any-qbias-from-qbias}
and~\ref{lem:any-qbias-from-bias}, replacing Lemma~\ref{lem:any-frequency} with Lemma~\ref{lem:ts-any-frequency}.
\end{proof}
We can now deal with the case when there is a $\mathtt{mult}$ instruction in round $\mathbf{ir}$. This is an analog of Lemma~\ref{lem:no-mult-sequence}.
\begin{lemma} \label{lem:ts-min}
Suppose round $\mathbf{ir}$ either contains a $\mathtt{mult}$ instruction or $\thr_u^{\mathbf{ir}} < 1/2$.
Then either the algorithm violates consensus or we can remove the
$\mathtt{mult}$ instruction and make $\thr_u^{\mathbf{ir}} = 1/2$ without
affecting the semantics of the algorithm. \end{lemma}
\begin{proof}
Suppose round $\mathbf{ir}$ either contains a $\mathtt{mult}$ instruction or
$\thr_u^{\mathbf{ir}} < 1/2$. We consider two cases:
The first case is when there does not exist any tuple $(f,t)$ with
$(f,t) \lact{\overline{\p}_1}_1 f_1 \dots \lact{\overline{\p}_{\mathbf{ir}-2}}_{\mathbf{ir}-2} f_{\mathbf{ir}-2}$
such that $a,b \in \operatorname{fire}_{\mathbf{ir}-1}(f_{\mathbf{ir}-2},\overline{\p})$. It is then clear
that the $\mathtt{mult}$ instructions in round $\mathbf{ir}$ will never be fired
and so we can remove all these instructions in round $\mathbf{ir}$.
Further it is also clear that setting $\thr_u^{\mathbf{ir}} = 1/2$
does not affect the semantics of the algorithm in this case.
So it remains to examine the case when there exists a tuple $(f,t)$ with
$(f,t) \lact{\overline{\p}_1}_1 f_1\lact{\overline{\p}_2} \cdots\lact{\overline{\p}_{\mathbf{ir}-2}} f_{\mathbf{ir}-2}$ such that
$a,b \in \operatorname{fire}_{\mathbf{ir}-1}(f_{\mathbf{ir}-2},\overline{\p})$.
It is clear that in this case, we also have,
$(\mathit{bias}(\thr_u^1+\e),0) \lact{\overline{\p}_1}_1 f_1\lact{\overline{\p}_2} \cdots\lact{\overline{\p}_{\mathbf{ir}-2}} f_{\mathbf{ir}-2}$. Also we get
$f_{\mathbf{ir}-2} \lact{\overline{\p}_{\mathbf{ir}-1}}_{\mathbf{ir}-1} \mathit{bias}(\th)$ for arbitrary $\th$.
In this case, we will show the following:
Depending on the structure of rounds $\mathbf{ir}$ and $\mathbf{ir}+1$ we will define
two tuples $f_{\mathbf{ir}-1}$ and $f_{\mathbf{ir}}$ with the following properties:
\begin{itemize}
\item $f_{\mathbf{ir}-2} \lact{\overline{\p}_{\mathbf{ir}-1}}_{\mathbf{ir}-1} f_{\mathbf{ir}-1} \lact{\overline{\p}_{\mathbf{ir}}}_{\mathbf{ir}} f_{\mathbf{ir}}$,
\item $f_{\mathbf{ir}}$ contains no $?$ and at least one $a$,
\item either $a,b \in \operatorname{fire}_{\mathbf{ir}+1}(f_{\mathbf{ir}},\overline{\p}_{\mathbf{ir}+1})$
or $b,? \in \operatorname{fire}_{\mathbf{ir}+1}(f_{\mathbf{ir}},\overline{\p}_{\mathbf{ir}+1})$.
\end{itemize}
Notice that if $a,b \in \operatorname{fire}_{\mathbf{ir}+1}(f_{\mathbf{ir}},\overline{\p}_{\mathbf{ir}+1})$ and all rounds after
round $\mathbf{ir}$ have a $\mathtt{mult}$ instruction, then we can use Lemma~\ref{lem:any-frequency}
to conclude that we can decide on both $a$ and $b$.
In the other case, i.e., if some round after round $\mathbf{ir}$ does not have a $\mathtt{mult}$
instruction, or $b,? \in \operatorname{fire}_{\mathbf{ir}+1}(f_{\mathbf{ir}},\overline{\p}_{\mathbf{ir}+1})$
we use Lemmas~\ref{lem:any-qbias-from-bias} and~\ref{lem:any-qbias-from-qbias}
to show that we can make half the processes decide on $b$
and the other half undecided. Now the state of
the algorithm after this phase will be $(f_{\mathbf{ir}},1,\mathit{spread}^?)$
where $f_{\mathbf{ir}}$ contains at least one $a$.
Since all the $\mathtt{mult}$ instructions have $\mathrm{maxts}$ as operator
it follows that we can then get $\mathit{solo}^a$ after the first
round and decide on $a$.
So it remains to come up with $f_{\mathbf{ir}-1}$ and $f_{\mathbf{ir}}$ with the required properties.
We will do a case analysis, and for each case provide both these tuples.
In each of these cases, it can be easily verified that the provided tuples
satisfy the required properties.
\begin{itemize}
\item $\thr_u^{\mathbf{ir}} < 1/2$ or the $\mathtt{mult}$ instruction with the highest threshold
in round $\mathbf{ir}$ has $\mathrm{smor}$ as operator.
\begin{itemize}
\item The $\mathtt{mult}$ instruction with the highest threshold in round $\mathbf{ir}+1$
has $\mathrm{smor}$ as operator: Take $f_{\mathbf{ir}-1} = f_{\mathbf{ir}} = \mathit{spread}$.
\item Otherwise: Take $f_{\mathbf{ir}-1} = \mathit{spread}, f_{\mathbf{ir}} = \mathit{bias}(\max(\thr_u^{\mathbf{ir}+1},\mathit{thr}_{\mathbf{ir}+1}(\overline{\p}))+\e)$.
\end{itemize}
\item The $\mathtt{mult}$ instruction with the highest threshold in round $\mathbf{ir}$ has $\min$
as operator.
\begin{itemize}
\item The $\mathtt{mult}$ instruction with the highest threshold in round $\mathbf{ir}+1$
has $\mathrm{smor}$ as operator: Take $f_{\mathbf{ir}-1} = \mathit{bias}(\max(\thr_u^{\mathbf{ir}},\mathit{thr}_{\mathbf{ir}}(\overline{\p}))+\e), f_{\mathbf{ir}} = \mathit{spread}$.
\item Otherwise: Take $f_{\mathbf{ir}-1} = \mathit{bias}(\max(\thr_u^{\mathbf{ir}},\mathit{thr}_{\mathbf{ir}}(\overline{\p}))+\e)$,\\
$f_{\mathbf{ir}} =
\mathit{bias}(\max(\thr_u^{\mathbf{ir}+1},\mathit{thr}_{\mathbf{ir}+1}(\overline{\p}))+\e)$.
\end{itemize}
\end{itemize}
\end{proof}
\begin{corollary} \label{cor:ts-min}
If round $\mathbf{ir}+1$ has a $\mathtt{mult}$ instruction or $\thr_u^{\mathbf{ir}+1} < 1/2$,
then the $\mathtt{mult}$ instruction can be removed and $\thr_u^{\mathbf{ir}+1}$ can be
made $1/2$ without altering the semantics of the algorithm. \end{corollary}
\begin{proof}
By the previous lemma, round $\mathbf{ir}$ does not have any $\mathtt{mult}$
instruction and $\thr_u^{\mathbf{ir}} \ge 1/2$.
It then follows that if $f \lact{\f}_{\mathbf{ir}} f'$ for an arbitrary predicate $\f$
then there cannot be both $a$ and $b$ in $f'$.
Hence the $\mathtt{mult}$ instruction in
round $\mathbf{ir}+1$ will never be fired.
Consequently, it can
be removed without affecting the correctness of the algorithm.
It is also clear that we can raise the value of $\thr_u^{\mathbf{ir}+1}$ to $1/2$
without affecting the semantics of the algorithm. \end{proof}
\begin{lemma} \label{lem:ts-constants}
If the property of constants from Definition~\ref{def:ts-structure}
is not satisfied, then agreement is violated.
\end{lemma}
\begin{proof}
The proof starts similarly to the one of Lemma~\ref{lem:constants}.
We consider an execution under the global predicate $\overline{\p}$, and employ
Lemmas~\ref{lem:ts-any-frequency} and~\ref{lem:ts-any-bias}.
We start from configuration $(\mathit{bias}(\th_1),0)$ where
$\th_1>\thr_u^1$ big enough so that by Lemma~\ref{lem:ts-bias-fire} we get
$\set{a,b} \incl\operatorname{fire}_1(\mathit{bias}(\th_1),\overline{\p})$.
We consider also $\th=\thr_u^{\mathbf{ir}+1}+\e$.
By Lemma~\ref{lem:ts-min} there is a preserving round before
round $\mathbf{ir}+1$.
We proceed differently depending on whether
$\thr_m^{1,k}<1-\thr_u^{\mathbf{ir}+1}$ or not.
By the same argument as in Lemma~\ref{lem:constants}, we
can get $\mathit{bias}^{?}(\th)$ or $\mathit{bias}^{?}_a(\th)$ after round $\mathbf{ir}$.
If $\thr_m^{1,k}<1-\thr_u^{\mathbf{ir}+1}$ then we choose to get $\mathit{bias}^{?}(\th)$.
We use Lemma~\ref{lem:any-qbias-from-bias} to make some processes decide on
$b$.
After this phase there are $1-\th$ processes with timestamp $0$.
We can ensure that among them there is at least one with value $a$ and one
with value $b$.
Since there is a $\mathtt{mult}$ instruction in the first round, in the next phase we
send all the values with timestamp $0$.
This way we get $\mathit{solo}^{a}$ after the first round, and make some process decide
on $a$.
The remaining case is when $\thr_m^{1,k}\geq 1-\thr_u^{\mathbf{ir}+1}$.
So we have $\thr_u^1<1-\thr_u^{\mathbf{ir}+1}$, since we have assumed that the
property of constants from Definition~\ref{def:ts-structure} does not hold.
This time we choose to get $\mathit{bias}^{?}_a(\th)$ after round $\mathbf{ir}$, and make some process
decide on $a$.
Since we have started with $\mathit{bias}(\th_1)$ we can arrange updates so that we
have at least $\min(\th_1,1-\th)$ processes who have value $b$ with timestamp
$0$.
But $\min(\th_1,1-\th)>\thr_u^1$, so by sending an $\mathsf{H}$ set consisting of these
$b$'s we reach $\mathit{solo}$ after the first round and make some processes decide on
$b$. \end{proof}
Now we can state the sufficiency proof, similar to Lemma~\ref{lem:agreement}.
\begin{lemma}\label{lem:ts-structure}
If all the structural properties from Definition~\ref{def:ts-structure} are
satisfied then the algorithm satisfies agreement. \end{lemma} \begin{proof}
It is clear that the algorithm satisfies agreement when the initial frequency
is either $\mathit{solo}$ or $\mathit{solo}^a$. Suppose $(\mathit{bias}(\theta),t,d) \act{\p^*}
(\mathit{bias}(\theta'),t',d')$ such that $(\mathit{bias}(\theta'),t',d')$ is the first state in this execution
with a process $p$ which has decided on a value.
Without loss of generality let $b$ be the value that $p$ has decided on.
By Lemma~\ref{lem:ts-min} round $\mathbf{ir}$ does not have any
$\mathtt{mult}$ instructions and $\thr_u^{\mathbf{ir}} \ge 1/2$ and so it follows
that every other process could only decide
on $b$ or not decide at all. For the same reason it follows
that every process either updated its $\mathit{inp}$ value to $b$
or did not update its $\mathit{inp}$ value at all.
Further notice that since $b$ was decided by some
process, it has to be the case that more than $\thr_u^{\mathbf{ir}+1}$ processes have $b$ as
their $\mathit{inp}$ value with the most recent phase as their timestamp.
This means that the number of $a$'s in the configuration is less than $1-\thr_u^{\mathbf{ir}+1}$. Moreover, since every process either updated its
$\mathit{inp}$ to $b$ or did not update it at all, no process with $a$ has the latest
timestamp.
Since $\thr_u^1 \geq 1 - \thr_u^{\mathbf{ir}+1}$, it follows that $a$ cannot be fired
from $(\mathit{bias}(\theta'),t')$ using the $\mathtt{uni}$ instruction in the first round. Further
since $\thr_m^{1,k}\geq 1 - \thr_u^{\mathbf{ir}+1}$ it follows that any $\mathsf{H}$ set bigger
than $\thr_m^{1,k}$ has to contain a value with the latest timestamp.
Since the only value with the latest timestamp is the value $b$,
it follows that
$a$ cannot be fired from $(\mathit{bias}(\theta'),t')$ using the $\mathtt{mult}$ instruction as
well. In consequence, the number of $a$'s can only decrease from this point onwards and so it
follows that no process from this point onwards can decide on $a$.
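Summarizing, the property of constants enters through the two inequalities
\begin{equation*}
1-\thr_u^{\mathbf{ir}+1} \;\leq\; \thr_u^1 \qquad\text{and}\qquad 1-\thr_u^{\mathbf{ir}+1} \;\leq\; \thr_m^{1,k},
\end{equation*}
which block the $\mathtt{uni}$ and the $\mathtt{mult}$ instructions of the first round, respectively, from firing $a$.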
\end{proof}
The proof of termination is simpler compared to the proof of termination for the core language. This is in part due to the use of $\mathrm{maxts}$ rather than $\mathrm{smor}$ as the operator in the first round.
\subsubsection*{Part 2: termination for timestamps}
\begin{lemma}\label{lem:ts-dec}
If $\p$ is a decider and $(\mathit{solo},t,\mathit{solo}^{?})\act{\p}(f',t',d')$
then $(f',d') = (\mathit{solo},\mathit{solo})$ for every timestamp tuple $t$. Similarly if
$(\mathit{solo}^a,t,\mathit{solo}^{?}) \act{\p}(f',t',d')$ then $(f',d') = (\mathit{solo}^a,\mathit{solo}^a)$. \end{lemma} \begin{proof}
Immediate. \end{proof}
\begin{lemma}\label{lem:strong-unifier}
Suppose $\p$ is a strong unifier. If $(\mathit{bias}(\th),t)\act{\p}(f,t')$
then $f=\mathit{solo}$ or $f=\mathit{solo}^a$ (for every timestamp tuple $t$). \end{lemma} \begin{proof}
We first observe that the value $?$ cannot be produced in the first round.
Since $\p$ is a strong unifier we have $\thr_m^{1,k}\leq \mathit{thr}_1(\p)$
and $\thr_u^1\leq \mathit{thr}_1(\p)$, so every $\mathsf{H}$ set above the threshold will
satisfy an instruction of the first round.
Let $i$ be the round such that $\p_i$ is an equalizer
and rounds $2,\dots,i$ are non-preserving.
This round exists by the definition of a unifier.
Thanks to the above, we know that till round $i$ we cannot produce $?$ under the predicate $\p$.
Because $\p_i$ is an equalizer, after round $i$ we either have $\mathit{solo}$ or $\mathit{solo}^a$.
This tuple stays till round $\mathbf{ir}$ as the rounds $i+1,\dots,\mathbf{ir}$ are solo-safe. \end{proof}
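Schematically, a phase under a strong unifier, as used in this proof, looks as follows:
\begin{equation*}
\underbrace{1,\dots,i}_{\text{no $?$ produced, $\p_i$ an equalizer}}\qquad \underbrace{i+1,\dots,\mathbf{ir}}_{\text{solo-safe}}
\end{equation*}
After round $i$ the tuple is $\mathit{solo}$ or $\mathit{solo}^a$, and the solo-safe rounds preserve it till round $\mathbf{ir}$.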
\begin{proof}\textbf{Main positive}
Suppose there is a strong unifier followed by a decider.
After the strong unifier we have $\mathit{solo}$ or $\mathit{solo}^{a}$ thanks to
Lemma~\ref{lem:strong-unifier}.
After the decider all processes decide thanks to Lemma~\ref{lem:ts-dec}.
\end{proof}
\subsubsection*{Part 3: Non-termination for timestamps}
\begin{lemma}\label{lem:ts-not-decider}
If $\p$ is not a decider then $(\mathit{solo},t)\act{\p}(\mathit{solo},t)$ and $(\mathit{solo}^a,t)\act{\p}(\mathit{solo}^a,t)$ for any timestamp tuple $t$. \end{lemma} \begin{proof}
If $\p$ is not a decider then there is a round (say $i$) that is not solo-safe.
So from both $\mathit{solo}$ and $\mathit{solo}^{a}$
we can reach the tuple $\mathit{solo}^{?}$ after round $i$.
From $\mathit{solo}^{?}$ no process can decide. \end{proof}
\begin{lemma} \label{lem:ts-not-str-uni}
If $\p$ is not a strong unifier then $(\mathit{bias}(\th),i) \act{\p} (\mathit{bias}(\th),j)$
is possible (for large enough $\th$, arbitrary $i$, and some $j$). \end{lemma}
\begin{proof}
Let $\th > \max(\thr_u^1,\mathit{thr}_1(\p))+\e$ and so $a,b \in \operatorname{fire}_1((\mathit{bias}(\th),i),\p)$ by Lemma~\ref{lem:ts-bias-fire}. Suppose $\p$ is not a strong unifier. We do a
case analysis.
Suppose $\mathit{thr}_1(\p) < \thr_m^{1,k}$ or $\mathit{thr}_1(\p) < \thr_u^1$.
Clearly we can get $\mathit{solo}^?$ as the tuple after the first round and then
use this to not decide on anything and retain the input tuple.
Suppose $\p$ does not have an equalizer. We can then
apply Lemmas~\ref{lem:ts-any-bias}
and~\ref{lem:ts-min}
to get $\mathit{solo}^{?}$ after round $\mathbf{ir}$
and so we are done, because nothing is changed in the phase.
Suppose the $k$-th component of $\p$ is an equalizer and suppose
there is a preserving round before round $k$ (it can be round 1 as well).
Let the first preserving round before round $k$ be round $l$.
Since no round before round $l$ is preserving, it follows that
all these rounds have $\mathtt{mult}$ instructions.
Hence by Lemma~\ref{lem:ts-any-frequency} we can get to $\mathit{bias}(\th')$
(where $\th' > \max(\thr_u^l,\mathit{thr}_l(\p))$) before round $l$
(Notice that if $l = 1$ then we
need to reach $\mathit{bias}(\th')$ with $\th' > \max(\thr_u^1,\mathit{thr}_1(\p))$, which is where we start.)
It is clear that $\mathit{bias}(\th') \lact{\p_l}_l \mathit{solo}^{?}$.
We can then propagate
$\mathit{solo}^{?}$ all the way down to get the phase transition $(\mathit{bias}(\th),i) \act{\p}
(\mathit{bias}(\th),i)$.
Suppose the $k$-th component of $\p$ is an equalizer and suppose
there is a non-solo-safe round $l$ after $k$. It is
clear that we can reach $\mathit{solo}$ after round $k$ and using this get $\mathit{solo}^{?}$
after round $l$. Hence we once again get the phase transition $(\mathit{bias}(\th),i)
\act{\p} (\mathit{bias}(\th),i)$. \end{proof}
\begin{proof}
\textbf{Main non-termination}
We show that if there is no strong unifier followed by a decider, then the
algorithm may not terminate. We start with $(\mathit{bias}(\th),0)$ where $\th$
is large enough. If $\p$ is not a strong unifier then by
Lemma~\ref{lem:ts-not-str-uni} $(\mathit{bias}(\th),i) \act{\p} (\mathit{bias}(\th),j)$
is possible for arbitrary $i$, and some $j$.
Hence if there is no strong unifier the algorithm will not terminate.
Otherwise let $\p^l$ be the first strong unifier. Notice that $\p^l$ is not
the global predicate as we have assumed the global predicate does not have
equalizers.
Till $\p^l$ we can maintain $\mathit{bias}(\th)$ thanks to
Lemma~\ref{lem:ts-not-str-uni}.
Suppose $\p^l$ is not a decider. By Lemma~\ref{lem:strong-unifier} the
state of $\mathit{inp}$
after this phase will become $\mathit{solo}$ or $\mathit{solo}^a$. However, since $\p^l$ is not a
decider, we can choose to not decide on any value. Hence we get the transition
$(\mathit{bias}(\th),i) \act{\p^l} (\mathit{solo},i+1)$. Now, since none of the
$\p^{l+1},\dots,\p^k$ and neither the global predicate $\p$ are deciders,
by Lemma~\ref{lem:ts-not-decider} we can have a transition where no decision
happens.
Hence the algorithm does not terminate if there is no decider after a strong unifier. \end{proof}
\section{Proofs for algorithms with coordinators}
We give a proof of Theorem~\ref{thm:coordinators}. The structure of the proof is quite similar to the previous cases.
\subsubsection*{Part 1: Structural properties for coordinators}
\begin{lemma}\label{lem:c-no-uni}
If there is a round without a $\mathtt{uni}$ instruction then the algorithm does not terminate. \end{lemma} \begin{proof}
We get $\mathit{solo}^{a}\act{\p}\mathit{solo}^{a}$ for every communication predicate $\p$. \end{proof}
Unlike for the core language, it is not easy to see that the first round of an algorithm with coordinators must have a $\mathtt{mult}$ instruction. However, this is indeed the case, as we prove later. For the moment we make an observation.
\begin{lemma} \label{lem:co-weak-mult}
If the first round is not of type $\mathtt{ls}$ then the first round should have a $\mathtt{mult}$ instruction. \end{lemma}
\begin{proof}
Otherwise we have $\mathit{spread} \act{\p} \mathit{spread}$ for arbitrary communication
predicate $\p$. \end{proof}
Before considering the remaining structural requirements we state some useful lemmas.
\begin{lemma}\label{lem:co-no-mult-fire}
If round $k$ is not of
type $\mathtt{ls}$ and does not have a $\mathtt{mult}$ instruction then for all sufficiently big
$\th$ we have $\set{b,?}\in \operatorname{fire}_k(\mathit{bias}(\th),\f)$, for arbitrary predicate
$\f$. \end{lemma} \begin{proof}
Take $\th >\max(\thr_u^k,\mathit{thr}_k(\f))$.
We have $b\in\operatorname{fire}_k(\mathit{bias}(\th),\f)$ because of the $\mathtt{uni}$ instruction.
We have $?\in\operatorname{fire}_k(\mathit{bias}(\th),\f)$ because there is no $\mathtt{mult}$ instruction. \end{proof}
\begin{lemma}\label{lem:co-bias-fire}
Suppose the first round has a $\mathtt{mult}$ instruction with
$\min$ as operation or is of type $\mathtt{ls}$. Then for the global predicate $\overline{\p}$, we have $\set{a,b}\incl\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$ for
sufficiently big $\th$. \end{lemma}
\begin{proof}
The claim is clear when the first round is of type $\mathtt{ls}$. Suppose the first round has a $\mathtt{mult}$ instruction with $\min$ as operation.
Let $I$ be that instruction and let $\mathit{thr}^I$ be the threshold
value appearing in instruction $I$.
Let $\th > \thr_u^1$. Notice that $b \in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$
because of the $\mathtt{uni}$ instruction in the first round.
Further notice that, from $\mathit{bias}(\th)$, we can construct
a multi-set $\mathsf{H}$ having at least one $a$ and of size just above $\mathit{thr}^I$.
Since $\overline{\p}$ is the global predicate, we know that this multi-set satisfies $\overline{\p}$
because of assumption~\eqref{eq:syntactic-property}.
Further it is clear that $a = \mathtt{update}_1(\mathsf{H})$ and so
we have $a \in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$. \end{proof}
\begin{lemma}\label{lem:co-spread}
Suppose in the first round all $\mathtt{mult}$ instructions have $\mathrm{smor}$ as
operation.
Then for every predicate $\p$ we have
$\set{a,b}\incl\operatorname{fire}_1(\mathit{spread},\p)$.
\end{lemma} \begin{proof}
Same proof as Lemma~\ref{lem:spread}. \end{proof}
\begin{lemma}\label{lem:co-bias-propagation}
Suppose none of $\p_k,\dots,\p_l$ is a c-equalizer.
Suppose round $l$ is not of type $\mathtt{lr}$.
Then there is $\th$ with $\mathit{bias}^{?}(\th)\lact{\p_k}_k\dots\lact{\p_l}_l\mathit{bias}^{?}(\th')$ for arbitrary $\th'$. \end{lemma}
\begin{proof}
If the $k$-th round is a $\mathtt{ls}$ round, consider arbitrary $\th$. By
definition of transitions, we get
$\mathit{bias}^{?}(\th)\lact{\p_k}_k\mathit{bias}^{?}(\th')$ for arbitrary $\th'$.
If the $k$-th round is a $\mathtt{lr}$ round, take $\th=\thr_u^k+\e$ for small $\e$.
We can get $b$ from $\mathit{bias}^{?}(\th)$ because of the $\mathtt{uni}$ instruction.
Since this is an $\mathtt{lr}$ round we have $\mathit{bias}^{?}(\th) \lact{\p_k}_k \mathit{one}_b$.
This round must be followed by an $\mathtt{ls}$ round, so the argument from the previous
paragraph applies, and we can get arbitrary $\mathit{bias}^{?}(\th')$ after round $k+1$.
Otherwise the $k$-th round is neither $\mathtt{ls}$ nor $\mathtt{lr}$.
By Lemma~\ref{lem:any-qbias-from-qbias}, we can get arbitrary $\mathit{bias}^{?}(\th')$
after round $k$.
We can repeat this argument till round $l$. \end{proof}
\begin{lemma}\label{lem:co-any-frequency}
Suppose none of $\p_k,\dots,\p_l$ is a c-equalizer, and all rounds $k\dots l$
have $\mathtt{mult}$ instructions. Suppose round $l$ is not of type $\mathtt{lr}$.
For every $f$ and every $f'$ without $?$ such that $\mathit{set}(f')\incl
\operatorname{fire}_k(f,\p_k)$ we have $f\lact{\p_k}_k\dots\lact{\p_l}_l f'$. \end{lemma} \begin{proof}
Notice that since all the considered rounds have $\mathtt{mult}$ instructions,
none of these rounds are of type $\mathtt{ls}$ by assumption on
page~\pageref{assumption-ls-lr}.
Further, since every $\mathtt{lr}$ round is followed by a $\mathtt{ls}$ round, it follows that
we have only two cases:
either all rounds $k,\dots,l$ are of type $\mathtt{every}$, or
rounds $k,\dots,l-1$ are of type $\mathtt{every}$ and round $l$ is of type $\mathtt{lr}$.
Since the second case is excluded by assumption, we only have the
first case which holds by Lemma~\ref{lem:any-frequency}. \end{proof}
\begin{lemma}\label{lem:co-any-bias}
Suppose none of $\p_k,\dots,\p_l$ is a c-equalizer, and some round $k,\dots,l$
does not have a $\mathtt{mult}$ instruction.
Suppose round $l$ is not of type $\mathtt{lr}$.
For every $\th$ and every $f$ such that $\set{a,b}\in
\operatorname{fire}_k(f,\p_k)$ we have $f\lact{\p_k}_k\dots\lact{\p_l}_l \mathit{bias}^{?}(\th)$,
and $f\lact{\p_k}_k\dots\lact{\p_l}_l \mathit{bias}^{?}_a(\th)$. \end{lemma} \begin{proof}
Let $i$ be the first round without a $\mathtt{mult}$ instruction.
There are two cases.
Suppose rounds $k,\dots,i-1$ are all of type $\mathtt{every}$.
In this case we use Lemma~\ref{lem:co-any-frequency} to reach any
$\mathit{bias}(\theta')$ before round $i$.
If round $i$ is of type $\mathtt{every}$ then we can use
Lemma~\ref{lem:co-no-mult-fire}
to get arbitrary $\mathit{bias}^{?}(\th'')$ after round $i$.
If round $i$ is of type $\mathtt{lr}$, then round $i+1$ is of type $\mathtt{ls}$ and
so we can use Lemma~\ref{lem:co-no-mult-fire} to get
$\mathit{one}_b$ after round $i$ and (since $\p_{i+1}$ is not a c-equalizer)
then use that to get arbitrary $\mathit{bias}^{?}(\th)$
after round $i+1$.
If round $i$ is of type $\mathtt{ls}$, then since $\p_i$ is not a c-equalizer,
we can get arbitrary $\mathit{bias}^{?}(\th)$ after round $i$.
We can then use Lemma~\ref{lem:co-bias-propagation} to finish
the proof.
In the remaining case, by the same reasoning as in the previous lemma we see
that all rounds $k,\dots,i-2$ must be of type $\mathtt{every}$, and round $i-1$ must be
of type $\mathtt{lr}$.
We can use Lemma~\ref{lem:co-any-frequency} to reach $\mathit{bias}(\max(\thr_u^{i-1},\mathit{thr}(\p_{i-1}))+\e)$
before round $i-1$ and then using that reach $\mathit{one}_b$
before round $i$. Since round $i-1$ is of type $\mathtt{lr}$, round $i$ is of type
$\mathtt{ls}$, and since there are no c-equalizers we can get
arbitrary $\mathit{bias}^{?}(\th)$ after round $i$.
We can then use Lemma~\ref{lem:co-bias-propagation} to finish
the proof.
\end{proof}
\begin{lemma}\label{lem:co-no-mult-fo}
If round $\mathbf{ir}+1$ contains a $\mathtt{mult}$ instruction then the algorithm
does not satisfy agreement, or it can be removed without altering the
semantics of the algorithm. \end{lemma}
\begin{proof}
Suppose round $\mathbf{ir}+1$ contains a $\mathtt{mult}$ instruction.
Recall that this implies that round ${\mathbf{ir}+1}$ is not of type $\mathtt{ls}$ (cf.\
assumption on page~\pageref{assumption-ls-lr}).
Recall that $\overline{\p}$ denotes the global predicate.
The first case is when there does not exist any tuple $f$ having an execution
$f \lact{\overline{\p}_1}_1 f_1 \dots \lact{\overline{\p}_{\mathbf{ir}-1}}_{\mathbf{ir}-1} f_{\mathbf{ir}-1}$
with $a,b \in \operatorname{fire}_\mathbf{ir}(f_{\mathbf{ir}-1},\overline{\p})$. It is then clear
that the $\mathtt{mult}$ instructions in round $\mathbf{ir}+1$ will never be fired
and so we can remove all these instructions in round $\mathbf{ir}+1$.
So it remains to examine the case when there exists a tuple $f$ with
$f \lact{\overline{\p}_1}_1 f_1\lact{\overline{\p}_2} \cdots\lact{\overline{\p}_{\mathbf{ir}-1}} f_{\mathbf{ir}-1}$ such that
$a,b \in \operatorname{fire}_{\mathbf{ir}}(f_{\mathbf{ir}-1},\overline{\p}_{\mathbf{ir}})$.
Notice that in this case, the first round cannot be of type $\mathtt{ls}$.
Since round $\mathbf{ir}$ cannot be of type $\mathtt{lr}$ (cf. assumption on
page~\pageref{assumption-ls-lr}) we can get
$f_{\mathbf{ir}-1} \lact{\overline{\p}_{\mathbf{ir}}}_{\mathbf{ir}} \mathit{bias}(\th)$ for arbitrary $\th$.
We now consider two cases: Suppose round $\mathbf{ir}+1$ is of type $\mathtt{every}$.
Then we can proceed exactly as the proof of Lemma~\ref{lem:no-mult-sequence}
and show that agreement is not satisfied.
Suppose round $\mathbf{ir}+1$ is of type $\mathtt{lr}$. Hence round $\mathbf{ir}+2$ is
of type $\mathtt{ls}$.
Let $I$ be the $\mathtt{mult}$ instruction in round $\mathbf{ir}+1$ with the
highest threshold value.
Suppose $I$ has $\mathrm{smor}$ as its operation. Then we consider
$f_{\mathbf{ir}-1} \lact{\overline{\p}_{\mathbf{ir}}}_{\mathbf{ir}} \mathit{spread}$.
Since $I$ has $\mathrm{smor}$ as operation, it is easy to see that
$\mathit{spread} \lact{\overline{\p}_{\mathbf{ir}+1}}_{\mathbf{ir}+1} \mathit{one}_b$.
Since $\overline{\p}$ is the global predicate, $\overline{\p}_{\mathbf{ir}+2}$ is not a c-equalizer,
and so we get $\mathit{one}_b \lact{\overline{\p}_{\mathbf{ir}+2}}_{\mathbf{ir}+2} \mathit{bias}^{?}(\th')$
for arbitrary $\th'$. We can
then use Lemma~\ref{lem:co-bias-propagation} to conclude that
we can make one process decide on $b$ and leave the rest
undecided.
In the next phase, the state of $\mathit{inp}$ is $\mathit{spread}$.
We know, by Lemma~\ref{lem:co-weak-mult} that the first round has a $\mathtt{mult}$
instruction (since as observed above, the first round
is not $\mathtt{ls}$ in this case).
This instruction has $\mathrm{smor}$ or $\min$ as its operation;
in either case it is clear that $a \in \operatorname{fire}_1(\mathit{spread},\overline{\p})$, and
so we can get $\mathit{solo}^a$ after the first round and decide on $a$.
Suppose $I$ has $\min$ as its operation. Then we consider
$f_{\mathbf{ir}-1} \lact{\overline{\p}_{\mathbf{ir}}}_{\mathbf{ir}} \mathit{bias}(\th)$ where $\th > \thr_u^1$
is sufficiently big.
If we send the entire tuple as an HO set, we can fire $a$.
Hence we get $\mathit{bias}(\th) \lact{\overline{\p}_{\mathbf{ir}+1}}_{\mathbf{ir}+1} \mathit{one}_a$.
As in the previous case this allows us to make one process decide on $a$.
Note that the state of $\mathit{inp}$ will be $\mathit{bias}(\th)$
after the end of the phase.
Since the first round has a $\mathtt{uni}$ instruction (Lemma~\ref{lem:c-no-uni}), and
since $\th > \thr_u^1$ (and $\thr_u^1 \ge \mathit{thr}_1(\overline{\p})$ by~\eqref{eq:c-syntactic-property}), we can get $\mathit{solo}$ after the first round
and decide on $b$. \end{proof}
\begin{lemma} \label{lem:co-ls}
If the first round is of type $\mathtt{ls}$ or has a $\mathtt{mult}$ instruction with $\min$ as operation, then the algorithm does not solve agreement. \end{lemma}
\begin{proof}
Suppose that indeed in the first round we have a $\mathtt{mult}$ instruction with
$\min$ operation or the first round is of type $\mathtt{ls}$.
We execute the phase under the global predicate $\overline{\p}$.
By Lemma~\ref{lem:co-bias-fire} we have $\set{a,b}\incl
\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$, for some sufficiently big $\th$.
Consider $\th_{\mathbf{ir}+1}=\max(\thr_u^{\mathbf{ir}+1},\mathit{thr}_{\mathbf{ir}+1}(\overline{\p}))+\e$
for some small $\e$.
Thanks to our proviso, the global predicate does not have a c-equalizer,
hence we can freely apply Lemmas~\ref{lem:co-any-frequency}
and~\ref{lem:co-any-bias} to get $\mathit{bias}(\th_{\mathbf{ir}+1})$ or
$\mathit{bias}^{?}(\th_{\mathbf{ir}+1})$ after round $\mathbf{ir}$.
By Lemma~\ref{lem:co-no-mult-fo}, there is no $\mathtt{mult}$ instruction in round
${\mathbf{ir}+1}$.
Hence $\set{b,?}\in\operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}(\th_{\mathbf{ir}+1}),\overline{\p})$.
We can apply Lemma~\ref{lem:co-bias-propagation} to set $\mathit{dec}$ of
one process to $b$ in this phase and leave the other processes undecided.
Moreover, in round $\mathbf{ir}$ the variable $\mathit{inp}$ is set to $\mathit{bias}(\max(\th,\th_{\mathbf{ir}+1}))$.
In the next phase, Lemma~\ref{lem:co-bias-fire} says that $\set{a,b}\in
\operatorname{fire}_1(\mathit{bias}(\max(\th,\th_{\mathbf{ir}+1})),\overline{\p})$.
We can get $\mathit{solo}^a$ as the tuple after the first round under the global predicate, hence we can set some $\mathit{dec}$ to $a$. \end{proof}
\begin{lemma}
If the first round does not have a $\mathtt{mult}$ instruction then the algorithm does not terminate. \end{lemma}
\begin{proof}
Since the first round does not have type $\mathtt{ls}$, if there are no $\mathtt{mult}$
instructions in the first round, then we have $\mathit{spread} \lact{\f}_1
\mathit{solo}^{?}$ for any predicate $\f$. \end{proof}
\begin{lemma} \label{lem:co-ls-fo}
If round $\mathbf{ir}+1$ is of type $\mathtt{ls}$, then the algorithm does not solve consensus. \end{lemma}
\begin{proof}
Suppose round $\mathbf{ir}+1$ is of type $\mathtt{ls}$.
We consider an execution of a phase under the
global predicate and so we can freely use Lemmas~\ref{lem:co-any-frequency}
and~\ref{lem:co-any-bias}.
We have seen in Lemma~\ref{lem:co-ls} that in the first round all the
$\mathtt{mult}$ instructions must be $\mathrm{smor}$. We start with $\mathit{spread}$.
We can then use Lemmas~\ref{lem:co-any-frequency}
and~\ref{lem:co-any-bias}
to get $\mathit{spread}$ or $\mathit{spread}^?$ after round $\mathbf{ir}$.
In either case, because round $\mathbf{ir}+1$ is of type $\mathtt{ls}$
and the global predicate does not have c-equalizers,
it follows that we can get $\mathit{bias}^{?}(\th)$ for arbitrary $\th$
after round $\mathbf{ir}+1$. Applying Lemma~\ref{lem:co-bias-propagation}
we can make one process decide on $b$ and prevent the
other processes from deciding.
Notice that the state of $\mathit{inp}$ in the next phase is still $\mathit{spread}$.
By Lemma~\ref{lem:co-spread} we have that $a \in \operatorname{fire}_1(\mathit{spread},\p)$.
Hence we can get $\mathit{solo}^a$ after the first round and use this
to make the undecided processes decide on $a$. \end{proof}
\begin{lemma} \label{lem:co-constants}
If the property of the constants is not satisfied, then the algorithm does not solve consensus. \end{lemma}
\begin{proof}
We consider an execution of a phase under the
global predicate and so we can freely use Lemmas~\ref{lem:co-any-frequency}
and~\ref{lem:co-any-bias}.
We have seen in Lemma~\ref{lem:co-ls} that in the first round all the
$\mathtt{mult}$ instructions must be $\mathrm{smor}$. We start with $\mathit{spread}$.
We have two cases, that resemble those of Lemma~\ref{lem:constants}.
The first case is when all the rounds $1,\dots,\mathbf{ir}$ are non-c-preserving.
Since we consider the global predicate, there are no c-equalizers, so none of
these rounds is an $\mathtt{ls}$ round.
This implies that all these rounds have a $\mathtt{mult}$ instruction.
We take $\th_{\mathbf{ir}+1}=\thr_u^{\mathbf{ir}+1}+\e$.
From Lemma~\ref{lem:co-any-frequency} we can get $\mathit{bias}(\th_{\mathbf{ir}+1})$ as a tuple
before the round $\mathbf{ir}+1$.
By observation~\eqref{eq:syntactic-property}, we have
$\thr_u^{\mathbf{ir}+1}\geq\mathit{thr}_{\mathbf{ir}+1}(\overline{\p})$, so $b\in\operatorname{fire}_{{\mathbf{ir}+1}}(\mathit{bias}(\th_{\mathbf{ir}+1}),\overline{\p})$.
We also have $?\in\operatorname{fire}_{{\mathbf{ir}+1}}(\mathit{bias}(\th_{\mathbf{ir}+1}),\overline{\p})$, because
round $\mathbf{ir}+1$ does not have any $\mathtt{mult}$ instructions.
Then we can apply Lemma~\ref{lem:co-bias-propagation} to set $\mathit{dec}$ of
one process to $b$ in this round and leave the other processes undecided.
The second case is when there is a c-preserving round among $1,\dots,\mathbf{ir}$.
Let $j\leq \mathbf{ir}$ be the first such round.
Since all rounds before $j$ are non-c-preserving, by
Lemma~\ref{lem:co-any-frequency}, we can get $\mathit{bias}(\thr_u^j+\e)$ before
round $j$.
Since $j$ is c-preserving, it is either of type $\mathtt{ls}$, or has
no $\mathtt{mult}$ instructions or
$\mathit{thr}_j(\overline{\p})<\max(\thr_u^j,\thr_m^{j,k})$.
In the first case, since the global predicate does not have c-equalizers, we
have $\set{b,?}\incl\operatorname{fire}_j(\mathit{bias}(\thr_u^j+\e),\overline{\p})$.
In the other cases the type of round $j$ can be $\mathtt{every}$ or $\mathtt{lr}$.
For type $\mathtt{every}$ we also get $\set{b,?}\incl\operatorname{fire}_j(\mathit{bias}(\thr_u^j+\e),\overline{\p})$.
For type $\mathtt{lr}$, we have that the round $j+1$ is $\mathtt{ls}$.
Since we have assumed that $\overline{\p}$ does not have a c-equalizer, $\overline{\p}_{j+1}$ does
not have $\f_\mathtt{ls}$.
So we can get $\mathit{bias}^{?}(\th)$ for arbitrary $\th$ after round $j+1$.
After all these cases we can use Lemma~\ref{lem:co-bias-propagation} to get
$\mathit{bias}^{?}(\th_{\mathbf{ir}+1})$ before round ${\mathbf{ir}+1}$; where as before $\th_{\mathbf{ir}+1}=\thr_u^{\mathbf{ir}+1}+\e$.
As in the first case, we employ Lemma~\ref{lem:co-bias-propagation} to make
some process decide on $b$ and leave other processes undecided.
In both cases we can arrange the execution so that the state of
$\mathit{inp}$ after this phase is $(\mathit{bias}(\th_{\mathbf{ir}+1}),\mathit{spread}^{?})$ or
$(\mathit{spread},\mathit{spread}^{?})$.
The same argument as in Lemma~\ref{lem:constants} shows that some process can
decide on $a$ in the next phase. \end{proof}
\begin{lemma} \label{lem:co-suff}
If all the structural properties are satisfied then the algorithm satisfies agreement. \end{lemma}
\begin{proof}
It is clear that the algorithm satisfies agreement when the initial frequency is either $\mathit{solo}$ or $\mathit{solo}^a$. Suppose $(\mathit{bias}(\theta),d) \act{\p^*} (\mathit{bias}(\theta'),d')$ such that for the first time in this transition sequence some process has decided, say on $a$. Since $\thr_m^{1,k}/2 \geq 1-\thr_u^{\mathbf{ir}+1}$ and $\thr_m^{1,k}\leq 1$, we get $1-\thr_u^{\mathbf{ir}+1}\leq 1/2$, that is, $\thr_u^{\mathbf{ir}+1} \ge 1/2$.
Further, since round $\mathbf{ir}+1$ does not have any $\mathtt{mult}$ instructions (Lemma~\ref{lem:co-no-mult-fo}), it follows that every other process could only decide on $a$ or not decide at all.
Further notice that since $a$ was decided by some process and since
round $\mathbf{ir}+1$ is not of type $\mathtt{ls}$ (Lemma~\ref{lem:co-ls-fo}),
it has to be the case that at least $\thr_u^{\mathbf{ir}+1}$ processes have $a$ as their $\mathit{inp}$ value. Hence $\theta' < 1 - \thr_u^{\mathbf{ir}+1}$.
Recall that the first round is not of type $\mathtt{ls}$ (Lemma~\ref{lem:co-ls}).
Since $\theta' < 1 - \thr_u^{\mathbf{ir}+1} \le \thr_u^1$, it follows that $b$ cannot be fired from $\mathit{bias}(\theta')$ using the $\mathtt{uni}$ instruction in the first round.
Since $\theta' < 1 - \thr_u^{\mathbf{ir}+1} \le \thr_m^{1,k}/2$ and since every $\mathtt{mult}$ instruction in the first round has $\mathrm{smor}$ as its operator, it follows that
$b$ cannot be fired from $\mathit{bias}(\theta')$ using the $\mathtt{mult}$ instruction as well.
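In short, the last two steps rest on the chain of inequalities
\begin{equation*}
\theta' \;<\; 1-\thr_u^{\mathbf{ir}+1} \;\leq\; \min\bigl(\thr_u^1,\ \thr_m^{1,k}/2\bigr).
\end{equation*}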
Hence the number of $b$'s can only decrease from this point onwards, and so it follows that no process from this point onwards can decide on $b$. \end{proof}
\subsubsection*{Part 2: termination for coordinators}
\begin{lemma}\label{lem:co-fire-one}
For the global predicate $\overline{\p}$: $a\in\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$ iff $\th<\bar{\thr}$.
(Similarly $b\in\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$ iff $1-\bar{\thr}<\th$). \end{lemma} \begin{proof}
Since the first round cannot be a $\mathtt{ls}$ round (Lemma~\ref{lem:co-ls}), the
proof of this lemma
is the same as that of Lemma~\ref{lem:fire-one}. \end{proof}
\begin{corollary}\label{cor:co-no-a-above-bthr}
For every predicate $\p$, if $\th\geq\bar{\thr}$ then $a\not\in\operatorname{fire}_1(\mathit{bias}(\th),\p)$.
Similarly if $\th\leq 1-\bar{\thr}$ then $b \not \in\operatorname{fire}_1(\mathit{bias}(\th),\p)$. \end{corollary}
\begin{lemma}\label{lem:co-unifier}
Suppose $\p$ is a unifier and $\mathit{bias}(\th)\act{\p}f$.
If $\thr_u^1\leq \thr_m^{1,k}$ or $1-\bar{\thr}\le\th\le\bar{\thr}$
then $f=\mathit{solo}$ or $f=\mathit{solo}^a$. \end{lemma} \begin{proof}
The argument is the same as in Lemma~\ref{lem:unifier}, as the first round
cannot be of type $\mathtt{ls}$. \end{proof}
\begin{lemma}\label{lem:co-decider}
If $\p$ is a decider and $(\mathit{solo},\mathit{solo}^?)\act{\p}(f',d')$
then $(f',d') = (\mathit{solo},\mathit{solo})$.
Similarly, if $(\mathit{solo}^a,\mathit{solo}^?) \act{\p}(f',d')$ then $(f',d') =
(\mathit{solo}^a,\mathit{solo}^a)$.
In case $\thr_m^{1,k}\leq \thr_u^1$, for every $\th\geq\bar{\thr}$: if
$(\mathit{bias}(\th),\mathit{solo}^?)\act{\p}(f',d')$ then $(f',d')=(\mathit{solo},\mathit{solo})$
and for every $\th\leq 1-\bar{\thr}$: if
$(\mathit{bias}(\th),\mathit{solo}^?)\act{\p}(f',d')$ then $(f',d')=(\mathit{solo}^a,\mathit{solo}^a)$. \end{lemma} \begin{proof}
The same as in the case of the core language. \end{proof}
\begin{lemma}\label{lem:co-main-positive}
If an algorithm with coordinators has the structural properties from
Definition~\ref{def:structure}, and satisfies condition cT1, then it solves consensus. \end{lemma} \begin{proof}
The same as for the core language. \end{proof}
\subsubsection*{Part 3: non-termination for coordinators}
\begin{lemma}\label{lem:co-not-decider}
If $\p$ is not a c-decider then
$\mathit{solo}\act{\p}\mathit{solo}$ and $\mathit{solo}^a \act{\p} \mathit{solo}^a$; that is, there are phase transitions in which no process decides. \end{lemma} \begin{proof}
If $\p$ is not a decider then there is a round, say $i$, that is not c-solo-safe.
By definition this means that either round $i$ is of type $\mathtt{ls}$ with $\p_i$
not containing $\f_\mathtt{ls}$ or it has one of the two other types and $\mathit{thr}_i(\p)< \thr_u^i$.
It is then easy to verify that for $j < i$, $\mathit{solo} \lact{\p_j}_j \mathit{solo}$,
$\mathit{solo} \lact{\p_i}_i \mathit{solo}^?$ and $\mathit{solo}^? \lact{\p_k}_k \mathit{solo}^?$ for $k > i$. This
ensures that no process decides during this phase.
A similar proof holds when the $\mathit{inp}$ tuple is $\mathit{solo}^a$. \end{proof}
\begin{lemma}\label{lem:co-global-th}
For the global predicate $\overline{\p}$: if $1/2\leq \th\leq \bar{\thr}$ then
$\mathit{bias}(\th)\act{\overline{\p}}\mathit{bias}(\th')$ for every $\th'\geq 1/2$. \end{lemma} \begin{proof}
The proof follows the same argument as in Lemma~\ref{lem:global-th}.
There are some complications due to new types of rounds.
As in Lemma~\ref{lem:global-th} we start by observing that $\set{a,b}\incl
\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$.
This follows, as we have observed that the first round cannot be of type
$\mathtt{ls}$.
If there are $\mathtt{mult}$ instructions in rounds $2,\dots,\mathbf{ir}$ then
Lemma~\ref{lem:co-any-frequency} ensures that for arbitrary $\th'$ we can get
$\mathit{bias}(\th')$ after round $\mathbf{ir}$ (we have observed that round $\mathbf{ir}$ cannot be
of type $\mathtt{lr}$).
Since there is no $\mathtt{mult}$ instruction in round ${\mathbf{ir}+1}$ (Proviso~\ref{proviso}
and Lemma~\ref{lem:co-no-mult-fo}) we can get $\mathit{solo}^{?}$ after round ${\mathbf{ir}+1}$ and
decide on nothing.
Hence we are done in this case.
If some round $2,\dots,\mathbf{ir}$ does not have a $\mathtt{mult}$ instruction then we can
use Lemma~\ref{lem:co-any-bias} to get $\mathit{bias}^{?}(\th'')$ as well as
$\mathit{bias}^{?}_a(\th'')$, for arbitrary $\th''$, after round $\mathbf{ir}$.
There are two cases depending on $\th'\geq \th$ or not.
If $\th'\geq \th$ then we take $\th''=\min(\th,\thr_u^{\mathbf{ir}+1}-\e)$.
For the same reasons as in Lemma~\ref{lem:global-th} we have $\th''\geq 1/2$, so we can get
$\mathit{bias}(\th')$ as the state of $\mathit{inp}$ after round $\mathbf{ir}$.
We show that $\mathit{bias}(\th')\lact{\overline{\p}}_{\mathbf{ir}+1}\mathit{solo}^{?}$.
If round ${\mathbf{ir}+1}$ is of type $\mathtt{ls}$ then this is direct from definition since
round $\mathbf{ir}$ is not of type $\mathtt{lr}$.
Otherwise, we can just send the whole multiset of values to every process, and
there are not enough $b$'s to pass the $\thr_u^{\mathbf{ir}+1}$ threshold.
So we are done in this case.
The remaining case is when $\th'<\th$.
As in Lemma~\ref{lem:global-th} we reach $\mathit{bias}^{?}_a(\th'')$ for
$\th''=\th-\th'$.
This gives us the $a$'s that we need to convert $\mathit{bias}(\th)$ to
$\mathit{bias}(\th')$.
As in the previous case we argue that we can get $\mathit{solo}^{?}$ after round ${\mathbf{ir}+1}$.
So we are done in this case too. \end{proof}
\begin{lemma}
If $\p$ is not a c-unifier then
\begin{equation*}
\mathit{bias}(\th)\act{\p}\mathit{bias}(\th)\qquad \text{for some $1/2\leq \th <\bar{\thr}$}
\end{equation*} \end{lemma} \begin{proof}
As in the proof of an analogous lemma for the core language,
Lemma~\ref{lem:not-uni}, we examine all the reasons for $\p$ not to be a
c-unifier.
The case of the conditions on constants is the same as in Lemma~\ref{lem:not-uni},
as the first round cannot be of type $\mathtt{ls}$.
If there is no equalizer in $\p$ up to round $\mathbf{ir}$ then the reasoning is the
same but now using Lemmas~\ref{lem:co-any-frequency} and~\ref{lem:co-any-bias}. \end{proof}
We can conclude the non-termination case. The proof is the same as for the core language, but now using Lemmas~\ref{lem:co-not-decider} and~\ref{lem:co-global-th}.
\begin{lemma}
If the structural properties from Definition~\ref{def:structure} hold, but the
condition cT1 does not hold, then the algorithm does not terminate. \end{lemma}
\section{Proofs for algorithms with coordinators and timestamps} We give a proof of Theorem~\ref{thm:ts-coordinators}. The structure of the proof is the same as in the other cases.
\subsubsection*{Part 1: Structural properties for coordinators with timestamps}
\begin{lemma}
If there is a round without a $\mathtt{uni}$ instruction then the algorithm does not terminate. \end{lemma} \begin{proof}
Let $l$ be the first round without a $\mathtt{uni}$ instruction and let $\p$
be any predicate. If $l < \mathbf{ir}$,
we get $(\mathit{solo}^a,0)\act{\p}(\mathit{solo}^a,0)$ for every communication predicate.
Otherwise we get $(\mathit{solo}^a,i) \act{\p} (\mathit{solo}^a,i+1)$ for every communication predicate. \end{proof}
\begin{lemma} \label{lem:co-ts-weak-mult}
If the first round is not of type $\mathtt{ls}$ then the first round should have a $\mathtt{mult}$ instruction. \end{lemma}
\begin{proof}
Let $\p$ be any predicate. If the first round is not of type $\mathtt{ls}$
and does not have a $\mathtt{mult}$ instruction then we will have
$(\mathit{spread},0) \act{\p} (\mathit{spread},0)$. \end{proof}
The following lemma is an adaptation of Lemma~\ref{lem:ts-min} to the extension with coordinators.
\begin{lemma}\label{lem:co-ts-no-mult-fo}
If round $\mathbf{ir}$ is not a $\mathtt{ls}$ round
and either contains a $\mathtt{mult}$ instruction or
$\thr_u^{\mathbf{ir}} < 1/2$, then the algorithm
does not satisfy agreement, or we can remove the $\mathtt{mult}$ instruction and make $\thr_u^{\mathbf{ir}} = 1/2$ without altering the
semantics of the algorithm. \end{lemma} \begin{proof}
Suppose round $\mathbf{ir}$ is not of type $\mathtt{ls}$ and
either contains a $\mathtt{mult}$ instruction or
$\thr_u^{\mathbf{ir}} < 1/2$. We consider two cases:
The first case is when there does not exist any tuple $(f,t)$ with
$(f,t) \lact{\overline{\p}_1}_1 f_1 \dots \lact{\overline{\p}_{\mathbf{ir}-2}}_{\mathbf{ir}-2} f_{\mathbf{ir}-2}$
such that $a,b \in \operatorname{fire}_{\mathbf{ir}-1}(f_{\mathbf{ir}-2},\overline{\p})$.
Notice that this happens in particular when some round before round $\mathbf{ir}$ is
of type $\mathtt{ls}$.
It is then clear that the $\mathtt{mult}$ instructions in round $\mathbf{ir}$ will never be
fired and so we can remove all these instructions in round $\mathbf{ir}$.
Further it is also clear that setting $\thr_u^{\mathbf{ir}} = 1/2$
does not affect the semantics of the algorithm in this case.
So it remains to examine the case when there exists a tuple
$(f,t)$ with
$(f,t) \lact{\overline{\p}_1}_1 f_1\lact{\overline{\p}_2} \cdots\lact{\overline{\p}_{\mathbf{ir}-2}} f_{\mathbf{ir}-2}$ such that
$a,b \in \operatorname{fire}_{\mathbf{ir}-1}(f_{\mathbf{ir}-2},\overline{\p})$.
By the above observation, none of the rounds before round $\mathbf{ir}$
are of type $\mathtt{ls}$.
Further, we have assumed that round $\mathbf{ir}$ is itself not of type $\mathtt{ls}$.
By our proviso, it also follows that round $\mathbf{ir}$ is not of type $\mathtt{lr}$.
Since every $\mathtt{lr}$ round should be followed by a $\mathtt{ls}$ round,
it follows that in this case all the rounds up to and including
round $\mathbf{ir}$ are of type $\mathtt{every}$. Hence, the proof of this case is the
same as the proof of Lemma~\ref{lem:ts-min}. \end{proof}
The proof of the following corollary is similar to the proof of Corollary~\ref{cor:ts-min}.
\begin{corollary} \label{cor:co-ts-no-mult-fo}
If round $\mathbf{ir}+1$ is not of type $\mathtt{ls}$ and has a $\mathtt{mult}$ instruction or $\thr_u^{\mathbf{ir}+1} < 1/2$,
then the $\mathtt{mult}$ instruction can be removed and $\thr_u^{\mathbf{ir}+1}$ can be
made $1/2$ without altering the semantics of the algorithm. \end{corollary}
\begin{lemma} \label{lem:co-ts-no-ls}
The first round cannot be of type $\mathtt{ls}$. \end{lemma} \begin{proof}
Suppose the first round is of type $\mathtt{ls}$.
We execute the phase under the global predicate $\overline{\p}$.
By the semantics we have $\set{a,b}\incl
\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$, for arbitrary $\th$.
Consider $\th_{\mathbf{ir}+1}=\max(\thr_u^{\mathbf{ir}+1},\mathit{thr}_{\mathbf{ir}+1}(\overline{\p}))+\e$
for some small $\e$.
Thanks to our proviso, the global predicate does not have a c-equalizer,
hence we can freely apply Lemmas~\ref{lem:co-any-frequency}
and~\ref{lem:co-any-bias} to get $\mathit{bias}(\th_{\mathbf{ir}+1})$ or
$\mathit{bias}^{?}(\th_{\mathbf{ir}+1})$ after round $\mathbf{ir}$.
By Corollary~\ref{cor:co-ts-no-mult-fo}, there is no $\mathtt{mult}$ instruction in round
${\mathbf{ir}+1}$.
Hence $\set{b,?}\in\operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}(\th_{\mathbf{ir}+1}),\overline{\p})$.
We can apply Lemma~\ref{lem:co-bias-propagation} to set $\mathit{dec}$ of
one process to $b$ in this phase and leave the other processes undecided.
Moreover, in round $\mathbf{ir}$ the variable $\mathit{inp}$ is set to $\mathit{bias}(\max(\th,\th_{\mathbf{ir}+1}))$.
In the next phase, once again we have $\set{a,b}\in
\operatorname{fire}_1(\mathit{bias}(\max(\th,\th_{\mathbf{ir}+1})),\overline{\p})$.
We can get $\mathit{solo}^a$ under the global predicate, hence we can set some $\mathit{dec}$ to $a$. \end{proof}
\begin{lemma}\label{lem:co-ts-mult-in-the-first-round}
The first round should have a $\mathtt{mult}$ instruction. \end{lemma}
\begin{proof}
Follows from Lemmas~\ref{lem:co-ts-no-ls} and \ref{lem:co-ts-weak-mult}. \end{proof}
\begin{lemma}\label{lem:co-ts-bias-fire}
For the global predicate $\overline{\p}$, we have
$\set{a,b}\incl\operatorname{fire}_1((\mathit{bias}(\th),i),\overline{\p})$ for
sufficiently big $\th$ and every $i$. \end{lemma}
\begin{proof}
Similar to that of Lemma~\ref{lem:ts-bias-fire}. \end{proof}
\begin{lemma} \label{lem:co-ts-ls-fo}
If round $\mathbf{ir}+1$ is of type $\mathtt{ls}$, then the algorithm does not solve consensus. \end{lemma} \begin{proof}
Suppose round $\mathbf{ir}+1$ is of type $\mathtt{ls}$.
We consider an execution of a phase under the
global predicate and so we can freely use Lemmas~\ref{lem:co-any-frequency}
and~\ref{lem:co-any-bias}.
We have seen that the first round cannot be of type $\mathtt{ls}$.
We can take $\th$ big enough to have $\set{a,b}\in\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$.
We can then use Lemmas~\ref{lem:co-any-frequency}
and~\ref{lem:co-any-bias}
to get $\mathit{bias}(\th')$ or $\mathit{bias}^{?}(\th')$ after round $\mathbf{ir}$; for arbitrary
$\th'$.
In either case, because round $\mathbf{ir}+1$ is of type $\mathtt{ls}$
and the global predicate does not have c-equalizers,
it follows that we can get $\mathit{bias}^{?}(\th'')$ for arbitrary $\th''$
after round ${\mathbf{ir}+1}$. Applying Lemma~\ref{lem:co-bias-propagation}
we can make one process decide on $b$ and prevent the
other processes from deciding.
Notice that the state of $\mathit{inp}$ in the next phase will have $\th'$ processes
with value $b$ and timestamp $1$.
Till now we have put no constraints on $\th'$, so we can take it sufficiently
small so that $1-\th' > \thr_u^1$. This enables us to get $\mathit{solo}^a$ after the first round.
We use this to make the undecided processes decide on $a$. \end{proof}
\begin{lemma} \label{lem:co-ts-constants}
If the property of constants from Definition~\ref{def:ts-structure}
is not satisfied, then agreement is violated. \end{lemma}
\begin{proof}
The proof is similar to the one of Lemma~\ref{lem:ts-constants}.
We consider an execution under the global predicate $\overline{\p}$, and employ
Lemmas~\ref{lem:co-any-frequency} and~\ref{lem:co-ts-bias-fire}.
We start from configuration $(\mathit{bias}(\th_1),0)$ where
$\th_1>\thr_u^1$ big enough so that by Lemma~\ref{lem:co-ts-bias-fire} we get
$\set{a,b} \incl\operatorname{fire}_1(\mathit{bias}(\th_1),\overline{\p})$.
Observe that the first round
cannot be of type $\mathtt{ls}$ by Lemma~\ref{lem:co-ts-no-ls} so we can get arbitrary
bias after the first round.
Due to Lemma~\ref{lem:co-ts-no-mult-fo} we know that there
is a c-preserving round before round ${\mathbf{ir}+1}$.
Let $j\leq \mathbf{ir}$ be the first c-preserving round.
Since all rounds before $j$ are non-c-preserving, by
Lemma~\ref{lem:co-any-frequency} we can get $\mathit{bias}(\thr_u^j+\e)$, as well as
$\mathit{bias}(1-(\thr_u^j+\e))$, before round $j$
(intuitively, we can get bias with many $b$'s or many $a$'s).
Since $j$ is c-preserving, it is either of type $\mathtt{ls}$ or $\mathit{thr}_j(\overline{\p})<
\max(\thr_u^j,\thr_m^{j,k})$.
If it is of type $\mathtt{ls}$ then we get $\set{a,b,?}\in
\operatorname{fire}_j(\mathit{bias}(\thr_u^j+\e),\overline{\p})$, since $\overline{\p}\!\!\downharpoonright_j$ is not a c-equalizer.
In the other cases we use $\mathit{bias}(\thr_u^j+\e)$ if we want to get $\set{b,?}$
and $\mathit{bias}(1-(\thr_u^j+\e))$ if we want to get $\set{a,?}$.
If $j$ is of type $\mathtt{every}$ we get it at round $j$.
If $j$ is of type $\mathtt{lr}$ then we get it at round $j+1$, since round $j+1$ must necessarily be of
type $\mathtt{ls}$.
We then use Lemma~\ref{lem:co-bias-propagation} to reach $\mathit{bias}^{?}(\th_{\mathbf{ir}+1})$ or
$\mathit{bias}^{?}(1-\th_{\mathbf{ir}+1})$ before round ${\mathbf{ir}+1}$; where as before
$\th_{\mathbf{ir}+1}=\thr_u^{\mathbf{ir}+1}+\e$.
We have two cases depending on whether $\thr_m^{1,k}< 1-\thr_u^{\mathbf{ir}+1}$ or not.
If $\thr_m^{1,k}< 1-\thr_u^{\mathbf{ir}+1}$ then we reach $\mathit{bias}^{?}(\th_{\mathbf{ir}+1})$ before round
${\mathbf{ir}+1}$, and then make some processes decide on $b$.
After this phase there are $1-\th_{\mathbf{ir}+1}$ processes with timestamp $0$.
We can ensure that among them there is at least one with value $a$ and one
with value $b$.
Since there is a $\mathtt{mult}$ instruction in the first round (Lemma~\ref{lem:co-ts-mult-in-the-first-round}), in the next phase we
send all the values with timestamp $0$.
This way we get $\mathit{solo}^{a}$ after the first round, and make some process decide
on $a$.
The remaining case is when $\thr_m^{1,k}\geq 1-\thr_u^{\mathbf{ir}+1}$.
So we have $\thr_u^1<1-\thr_u^{\mathbf{ir}+1}$, since we have assumed that the
property of constants from Definition~\ref{def:ts-structure} does not hold.
This time we choose to get $\mathit{bias}^{?}_a(\th_{\mathbf{ir}+1})$ after round $\mathbf{ir}$, and make some process
decide on $a$.
Since we have started with $\mathit{bias}(\th_1)$ we can arrange updates so that at
the beginning of the next phase we have at least $\min(\th_1,1-\th_{\mathbf{ir}+1})$ processes who
have value $b$ with timestamp $0$.
But $\thr_u^1<\min(\th_1,1-\th_{\mathbf{ir}+1})$, so by sending an $\mathsf{H}$ set consisting of these
$b$'s we reach $\mathit{solo}$ after the first round and make some processes decide on
$b$. \end{proof}
\begin{lemma}
If all the structural properties are satisfied then the algorithm satisfies agreement. \end{lemma}
\begin{proof}
It is clear that the algorithm satisfies agreement when the initial frequency
is either $\mathit{solo}$ or $\mathit{solo}^a$. Suppose $(\mathit{bias}(\theta),t,d) \act{\p^*}
(\mathit{bias}(\theta'),t',d')$ such that for the first time in this transition sequence,
some process has decided (say the process has decided on $b$).
Since there exists an $\mathtt{ls}$ round in the algorithm, it follows
that every other process could only decide
on $b$ or not decide at all.
Further notice that since $b$ was decided by some
process, it has to be the case that more than $\thr_u^{\mathbf{ir}+1}$ processes have $b$ as
their $\mathit{inp}$ value and maximum timestamps.
This means that the number of $a$'s in the configuration is less than $1-\thr_u^{\mathbf{ir}+1}$.
Also notice that since round $\mathbf{ir}$ has no $\mathtt{mult}$ instructions
and $\thr_u^{\mathbf{ir}} \ge 1/2$, it follows that no process with
value $a$ has the latest timestamp.
Since $\thr_u^1 \geq 1 - \thr_u^{\mathbf{ir}+1}$, it follows that $a$ cannot be fired
from $\mathit{bias}(\theta')$ using the $\mathtt{uni}$ instruction in the first round. Further
since $\thr_m^{1,k}\geq 1 - \thr_u^{\mathbf{ir}+1}$ it follows that any HO set bigger
than $\thr_m^{1,k}$ has to contain a value with the latest timestamp.
As no $a$ has the latest timestamp, $a$ cannot be fired from $\mathit{bias}(\theta')$
using the $\mathtt{mult}$ instruction as
well. In consequence, the number of $b$'s can only increase from this point onwards and so it
follows that no process from this point onwards can decide on $a$. A similar
argument applies if the first value decided was $a$. \end{proof}
\subsubsection*{Part 2: termination for coordinators with timestamps.}
The proof for termination is very similar to the case of timestamps.
\begin{lemma}\label{lem:co-ts-dec}
If $\f$ is a c-decider and $(\mathit{solo},t,\mathit{solo}^?)\act{\f}(f,t',d)$
then $(f,d) = (\mathit{solo},\mathit{solo})$ for any ts-tuple $t$. Similarly if $(\mathit{solo}^a,t,\mathit{solo}^?) \act{\f}(f,t',d)$ then $(f,d) = (\mathit{solo}^a,\mathit{solo}^a)$. \end{lemma} \begin{proof}
Immediate. \end{proof}
\begin{lemma}\label{lem:co-ts-str-uni}
If $\f$ is a strong c-unifier and $(\mathit{bias}(\th),t)\act{\f}(f,t')$ then
$f=\mathit{solo}$ or $f=\mathit{solo}^a$ (for every tuple of timestamps $t$). \end{lemma} \begin{proof}
Let $i$ be the round with a c-equalizer.
Till round $i$ we cannot produce $?$.
After round $i$ we have $\mathit{solo}$ or $\mathit{solo}^{a}$.
This stays till round $\mathbf{ir}$ as the rounds after $i$ are c-solo-safe. \end{proof}
\begin{proof}\textbf{Main positive}
Suppose there is a strong c-unifier followed by a c-decider.
After the strong c-unifier we have $\mathit{solo}$ or $\mathit{solo}^{a}$ thanks to Lemma~\ref{lem:co-ts-str-uni}.
After c-decider all processes decide thanks to Lemma~\ref{lem:co-ts-dec}. \end{proof}
\subsubsection*{Part 3: Non-termination for coordinators with timestamps}
\begin{lemma}\label{lem:co-ts-not-decider}
If $\p$ is not a c-decider then $(\mathit{solo},t)\act{\p}(\mathit{solo},t)$ and
$(\mathit{solo}^a,t)\act{\p}(\mathit{solo}^a,t)$ for every tuple of timestamps $t$. \end{lemma} \begin{proof}
If $\p$ is not a c-decider then there is a round that is not c-solo-safe.
So we can go to $\mathit{solo}^{?}$ both from $\mathit{solo}$ and from $\mathit{solo}^{a}$.
From $\mathit{solo}^{?}$ no process can decide. \end{proof}
\begin{lemma} \label{lem:co-ts-not-str-uni}
If $\p$ is not a strong c-unifier
then $(\mathit{bias}(\th),i) \act{\p} (\mathit{bias}(\th),j)$
is possible (for large enough $\th$, arbitrary $i$, and some $j$). \end{lemma}
\begin{proof}
Let $\th > \max(\thr_u^1,\mathit{thr}_1(\p)) + \e$. Suppose $\p$ is not a strong c-unifier.
We do a case analysis.
Suppose $\mathit{thr}_1(\p) < \thr_m^{1,k}$ or $\mathit{thr}_1(\p) < \thr_u^1$.
We can get $\mathit{solo}^?$ after the first round and then
use this to not decide on anything and retain the input tuple.
Suppose $\p$ does not have a c-equalizer. In this case we can
apply Lemmas~\ref{lem:co-ts-bias-fire},~\ref{lem:co-ts-no-ls},~\ref{lem:co-any-bias} and~\ref{lem:co-ts-no-mult-fo} to conclude that
we can reach $\mathit{solo}^{?}$ before round ${\mathbf{ir}+1}$
and so we are done, because nothing is changed after the phase.
The next possible situation is that $i$ is the first component of $\p$ that is a c-equalizer,
and there is a c-preserving round, call it $j$, before round $i$
(it can be round $1$ as well).
Every round before round $j$ is non-c-preserving, so it cannot be of type $\mathtt{ls}$.
This is because a non-c-preserving round of type $\mathtt{ls}$ is necessarily a c-equalizer,
and the first c-equalizer is $i$.
So every round up to round $j-2$ has to be of type $\mathtt{every}$, and round $j-1$ can be
either of type $\mathtt{every}$ or of type $\mathtt{lr}$ (because an $\mathtt{lr}$ round must be followed by an $\mathtt{ls}$ round,
thanks to the assumption on page~\pageref{assumption-ls-lr}).
In both cases, by Lemma~\ref{lem:co-any-frequency} we can get to $\mathit{bias}(\th')$
(where $\th' > \max(\thr_u^j,\mathit{thr}_j(\p))$) before round $j$
(Notice that if $j = 1$ then we
need to reach $\mathit{bias}(\th')$ with $\th' > \max(\thr_u^1,\mathit{thr}_1(\p))$, which is
exactly where we start).
If round $j$ is of type $\mathtt{every}$, then since it is preserving it is easy to
see that $\mathit{bias}(\th') \lact{\p_j}_j \mathit{solo}^{?}$.
The remaining possibility is that round $j-1$ is of type $\mathtt{lr}$.
We can get $\mathit{one}^{b}$ after round $j-1$, and because round $j$ is
necessarily of type $\mathtt{ls}$ and is not a c-equalizer, we can get $\mathit{solo}^{?}$ after
round $j$.
In both cases, as $j<\mathbf{ir}$, no process changes its $\mathit{inp}$ value or decides in this phase.
The remaining possibility for $\p$ not to be a strong c-unifier is that there is an
$i$-th round that is a c-equalizer, followed by a round $j\leq
\mathbf{ir}$ that is not c-solo-safe.
It is clear that we can reach $\mathit{solo}$ after round $i$ and using this get $\mathit{solo}^{?}$
after round $j$.
Hence nothing will change in the phase, giving the transition $(\mathit{bias}(\th),i)
\act{\p} (\mathit{bias}(\th),i)$. \end{proof}
\begin{proof}
\textbf{Main non-termination}
We show that if there is no strong c-unifier followed by a c-decider, then the
algorithm will not terminate. We start with $(\mathit{bias}(\th),0)$ where $\th$
is large enough. If $\p$ is not a strong c-unifier then by
Lemma~\ref{lem:co-ts-not-str-uni}, for every $i$ the transition $(\mathit{bias}(\th),i)
\act{\p} (\mathit{bias}(\th),j)$ is possible for some $j$.
Hence if there is no strong c-unifier in the communication predicate then the algorithm will not terminate.
Otherwise let $\p^l$ be the first strong c-unifier.
Notice that $\p^l$ is
not the global predicate. Till $\p^l$ we can maintain
$(\mathit{bias}(\th),i)$ for some $i$.
Suppose $\p^l$ is not a c-decider.
By Lemma~\ref{lem:co-ts-str-uni} the state
after this phase will become $\mathit{solo}$ or $\mathit{solo}^a$. However since $\p^l$ is not a
c-decider, we can choose to not decide on any value. Hence we get the transition
$(\mathit{bias}(\th),i) \act{\p^l} (\mathit{solo},i+1)$.
Now, since none of the remaining predicates is a c-decider, by Lemma~\ref{lem:co-ts-not-decider} we can always take a
transition where no decision happens.
Hence the algorithm does not terminate if there is no c-decider after a strong c-unifier. \end{proof}
\section{Conclusions}
We have characterized all algorithms solving consensus in a fragment of the Heard-Of model. We have aimed at a fragment that can express the most important algorithms while trying to avoid ad hoc restrictions (cf.\ the proviso on page~\pageref{proviso}). The fragment covers algorithms considered in the context of verification~\cite{MarSprBas:17,ChaMer:09}, with the notable exception of algorithms sending more than one variable. In this work we have considered only single-phase algorithms, while the original model also permits initial phases. We believe that this is not a severe restriction. A more severe and technically important restriction is that we allow only one variable to be used at a time; in particular, it is not possible to send pairs of variables.
One curious direction for further research would be to list all ``best'' consensus algorithms under some external constraints; for example, the constraints can come from properties of an execution platform external to the Heard-Of model. This problem assumes that there is some way to compare two algorithms. One guiding principle for such a measure could be efficient use of knowledge~\cite{MosRaj:02,Mos:16}: at every step the algorithm does the maximum it can, given its knowledge of the state of the system.
This research is on the borderline between distributed computing and verification. From the distributed computing side it considers quite a simple model, but gives a characterization result. From the verification side, the systems are complicated because the number of processes is unbounded, there are timestamps, and interactions are based on a fraction of processes having a particular value. We do not advance verification methods for such a setting. Instead, we observe that in the context considered here verification may be avoided. We believe that a similar phenomenon can appear also for problems other than consensus. It is also an intriguing question to explore how much we can enrich the current model and still obtain a characterization. We conjecture that a characterization is possible for an extension with randomness covering at least the Ben-Or algorithm. Of course, formalization of the proofs, either in Coq or Isabelle, for such extensions would be very helpful.
\section{Proof of Theorem~\ref{thm:core}}
In this section we prove Theorem~\ref{thm:core}, namely the characterization of algorithms in the core language that solve consensus.
We fix a \emph{communication predicate} \begin{equation*}
(\lG\overline{\p})\land(\lF(\p^1\land\lF(\p^2\land\dots (\lF\p^k)\dots))) \end{equation*} Recall that each of $\overline{\p},\p^1,\dots,\p^k$ is an $r$-tuple of atomic predicates. We write $\p\!\!\downharpoonright_i$ for the $i$-th element of the tuple. So $\p$ is $(\p\!\!\downharpoonright_1,\dots,\p\!\!\downharpoonright_r)$. Often we will write $\p_i$ instead of $\p\!\!\downharpoonright_i$, in particular when $\p_i$ appears as a subscript; for example $f\lact{\p_i} f'$. If $\f$ is a conjunction of atomic predicates, then by $\mathit{thr}(\f)$ we denote the threshold constant appearing in $\f$, i.e., if $\f$ has $\f_{thr}$ as a conjunct then $\mathit{thr}(\f) = thr$, if it has no such conjunct then $\mathit{thr}(\f) = -1$.
\begin{definition}
We define several tuples of values.
All these tuples will be $n$-tuples for some fixed but large enough $n$
and will be over $\set{a, b}$, or $\set{?,b}$ or $\set{?,a}$.
For $\th < 1$, the tuple $\mathit{bias}(\th)$ is a tuple containing only $a$'s and
$b$'s with $|b|=\th\cdot n$.
Tuple $\mathit{bias}(1/2)$ is also called $\mathit{spread}$ to emphasize that there is the
same number of $a$'s and $b$'s.
A tuple consisting only of $b$'s is denoted $\mathit{solo}$.
Similarly, $\mathit{bias}^{?}(\th)$ is a tuple over $\set{?,b}$ with $|b|=\th\cdot
n$ and $\mathit{bias}^{?}_a(\th)$ is a tuple over $\set{?,a}$ with $|a| = \th \cdot n$. We also write $\mathit{solo}^?$ for a tuple consisting only of $?$'s.
Finally, we write $\mathit{solo}^a$ for a tuple consisting only of $a$'s. \end{definition}
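To fix intuitions, here is a minimal Python sketch that builds these tuples as multisets of values. The tuple size \texttt{N}, the symbol names, and the rounding of $\th\cdot n$ are our own illustrative choices, not part of the model.
\begin{verbatim}
from collections import Counter

N = 60  # a fixed, large enough tuple size (hypothetical)

def bias(theta):
    """Tuple over {a, b} with a theta fraction of b's."""
    k = round(theta * N)
    return Counter({'b': k, 'a': N - k})

def bias_q(theta):
    """bias^?(theta): tuple over {?, b} with a theta fraction of b's."""
    k = round(theta * N)
    return Counter({'b': k, '?': N - k})

def bias_q_a(theta):
    """bias^?_a(theta): tuple over {?, a} with a theta fraction of a's."""
    k = round(theta * N)
    return Counter({'a': k, '?': N - k})

spread = bias(0.5)           # equally many a's and b's
solo = Counter({'b': N})     # only b's
solo_a = Counter({'a': N})   # only a's
solo_q = Counter({'?': N})   # only ?'s
\end{verbatim}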
\noindent\textbf{Notations:} \begin{itemize}
\item For a tuple of values $f$ and a predicate $\p$ we write $\operatorname{fire}_i(f,\p)$
instead of $\operatorname{fire}_i(f,\p\!\!\downharpoonright_i)$.
Similarly we write $\mathit{thr}_i(\p)$ for $\mathit{thr}(\p\!\!\downharpoonright_i)$.
\item If $f, f'$ are tuples of values, we write $f\act{\p}f'$ instead of
$(f,\mathit{solo}^{?})\act{\p}(f',\mathit{solo}^{?})$.
\end{itemize}
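Many arguments below construct a heard-of multiset $\mathsf{H}$ and inspect which value it makes a round fire. The following Python sketch mirrors, for two-valued tuples only, the update rule as it is used in these proofs: a $\mathtt{uni}$ instruction fires the unanimous value on a multiset bigger than the $\mathtt{uni}$ threshold; a $\mathtt{mult}$ instruction fires its operation ($\mathrm{smor}$, with ties going to the smaller value $a$, or $\min$) on a multiset bigger than its threshold; and a multiset is admissible only if it is bigger than the threshold of the predicate. The encoding, the omission of $?$-valued senders, and the brute-force enumeration are our own simplifications.
\begin{verbatim}
from fractions import Fraction as F

def update(na, nb, n, thr_u, mults):
    """Value one process computes from a heard-of set
    with na a's and nb b's (two-valued semantics only)."""
    size = F(na + nb, n)
    if size > thr_u and (na == 0) != (nb == 0):
        return 'a' if nb == 0 else 'b'   # uni: unanimous, big enough
    for thr_m, op in mults:              # mult instructions
        if size > thr_m:
            if op == 'smor':             # most frequent value, ties -> a
                return 'a' if na >= nb else 'b'
            if op == 'min':              # smallest value present
                return 'a' if na > 0 else 'b'
    return '?'                           # no instruction applies

def fire(num_a, num_b, thr_u, mults, thr_phi):
    """All values obtainable over admissible heard-of sets."""
    n = num_a + num_b
    out = set()
    for na in range(num_a + 1):
        for nb in range(num_b + 1):
            if F(na + nb, n) > thr_phi:  # H admissible w.r.t. the predicate
                out.add(update(na, nb, n, thr_u, mults))
    return out
\end{verbatim}
For instance, \texttt{fire(30, 30, F(1,2), [(F(1,2),'smor')], F(1,3))} evaluates to \texttt{\{'a','b','?'\}}, consistent with Lemma~\ref{lem:spread} below.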
Recall that the border threshold for an algorithm, by Definition~\ref{def:border-threshold}, is \begin{equation*} \bar{\thr}= \max(1-\thr_u^1,1-\thr_m^{1,k}/2) \end{equation*} Observe that $\bar{\thr}>1/2$ as $\thr_m^{1,k}<1$.
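For example, for a hypothetical algorithm with $\thr_u^1=2/3$ and $\thr_m^{1,k}=1/2$ we would get
\begin{equation*}
\bar{\thr}=\max\left(1-\tfrac{2}{3},\,1-\tfrac{1}{2}\cdot\tfrac{1}{2}\right)=\max\left(\tfrac{1}{3},\tfrac{3}{4}\right)=\tfrac{3}{4}>\tfrac{1}{2}.
\end{equation*}
These two constants are only meant to make the quantity concrete; they are not taken from any particular algorithm.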
The proof of Theorem~\ref{thm:core} is divided into three parts. First we show that if an algorithm does not satisfy the structural properties then it violates agreement. Then we restrict our attention to algorithms with the structural properties. We show that if condition T holds, then consensus is satisfied. Finally, we prove that if condition T does not hold then the algorithm does not have the termination property.
To simplify the statements of the lemmas, we adopt the following convention. If some condition is proved to be necessary for consensus, then for the forthcoming lemmas that condition is assumed. For example, in Lemma~\ref{lem:no-uni} we prove that all rounds should have a $\mathtt{uni}$ instruction. Hence after Lemma~\ref{lem:no-uni} it is implicitly assumed that all algorithms considered have a $\mathtt{uni}$ instruction in every round.
\subsubsection*{Part 1: Structural properties}
\begin{lemma}\label{lem:no-mult}
If no $\mathtt{mult}$ instruction is present in the first round then the algorithm may not terminate. \end{lemma} \begin{proof}
Suppose no $\mathtt{mult}$ instruction is present in the first round.
It is easy to verify that for every predicate $\p$, we have
$\mathit{spread} \lact{\p_1}_1 \mathit{solo}^?$ and $\mathit{solo}^? \lact{\p_i}_i \mathit{solo}^?$ for $i > 1$. Hence we have the phase transition $\mathit{spread} \act{\p} \mathit{spread}$.
\end{proof}
\begin{lemma}\label{lem:no-uni}
If there is a round without a $\mathtt{uni}$ instruction then the algorithm does not terminate. \end{lemma} \begin{proof}
Let $i$ be the round without a $\mathtt{uni}$ instruction. It is easy
to verify that for every predicate $\p$, we have $\mathit{solo} \lact{\p_j}_j \mathit{solo}$ for $j < i$, $\mathit{solo} \lact{\p_i}_i \mathit{solo}^?$ and $\mathit{solo}^? \lact{\p_j}_j \mathit{solo}^?$ for $j > i$. Hence we get the phase
transition $\mathit{solo} \act{\p} \mathit{solo}$.
\end{proof}
Before considering the remaining structural requirements we state some useful lemmas.
\begin{lemma}\label{lem:spread}
Suppose all $\mathtt{mult}$ instructions in the first round have $\mathrm{smor}$ as the
operation. Then for every predicate $\p$ we have $\set{a,b}\incl\operatorname{fire}_1(\mathit{spread},\p)$. \end{lemma} \begin{proof}
From $\mathit{spread}$, it is easy to see that we can construct a multiset $\mathsf{H}$ containing more $a$'s
than $b$'s such that the size of $\mathsf{H}$ is bigger than $\mathit{thr}_1(\p)$ and
$\thr_m^{1,1}$.
Similarly we can construct a multiset having more $b$'s than $a$'s.
This then implies that $\set{a,b} \incl \operatorname{fire}_1(\mathit{spread},\p)$.
\end{proof}
\begin{lemma}\label{lem:bias-preserving}
If a round $i$ is preserving w.r.t.\ $\p$ then
$\set{b,?}\incl\operatorname{fire}_i(\mathit{bias}(\th),\p)$ for all sufficiently big $\th$.
Similarly $\set{a,?} \incl \operatorname{fire}_i(\mathit{bias}(\th),\p)$ for all sufficiently small $\th$. \end{lemma} \begin{proof}
Let $\th > \max(\thr_u^i,\mathit{thr}_i(\p))$.
Because of the $\mathtt{uni}$ instruction, it is then clear that $b \in \operatorname{fire}_i(\mathit{bias}(\th),\p)$.
Since the round is preserving (and since $\mathtt{uni}$ instructions are present in every round),
either there is no $\mathtt{mult}$ instruction in round $i$ or $\mathit{thr}_i(\p)<\thr_u^i$, or
$\mathit{thr}_i(\p)<\thr_m^{i,k}$.
In the first case, let $\mathsf{H}$ be the entire tuple.
In the second case, let $\mathsf{H}$ be a multi-set consisting only of $b$'s but of size smaller than $\thr_u^i$ and bigger than $\mathit{thr}_i(\p)$.
In the third case, let $\mathsf{H}$ be a multi-set of size smaller
than $\thr_m^{i,k}$ (and bigger than $\mathit{thr}_i(\p)$) with at least one $a$, and one $b$. In all the cases, it is clear
that $? = \mathtt{update}_i(\mathsf{H})$ and so $? \in \operatorname{fire}_i(\mathit{bias}(\th),\p)$.
We can argue similarly for the other case as well. \end{proof}
\begin{lemma}\label{lem:ab-if-mult}
For every predicate $\p$, for every round $i$ with $\mathtt{mult}$ instruction, there is
a threshold $\th\geq 1/2$ such that $\set{a,b}\incl \operatorname{fire}_i(\mathit{bias}(\th),\p)$. \end{lemma} \begin{proof}
Let $I$ be the $\mathtt{mult}$ instruction in round $i$ with the biggest
threshold.
This threshold is called $\thr_m^{i,1}$ in our notation.
If the operation of $I$ is $\mathrm{smor}$ then we take $\th=1/2$ and argue
similar to the proof of Lemma~\ref{lem:spread}.
If the operation of $I$ is $\min$ then we take $\th>\max(\mathit{thr}_i(\p), \thr_u^i,1/2)$.
Because of the $\mathtt{uni}$ instruction, we can
get $b$ by sending a multi-set $\mathsf{H}$ consisting of all the $b$'s in $\mathit{bias}(\th)$.
Further because of the instruction $I$, if we send the entire tuple as a
multi-set, we get $a$. \end{proof}
\begin{lemma}\label{lem:a-if-min}
Suppose the first round has a $\mathtt{mult}$ instruction with
$\min$ as operation. Then $a\in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$ for
every $\th > 0$. \end{lemma} \begin{proof}
Let the $j^{th}$ $\mathtt{mult}$ instruction be the instruction
with the $\min$ operation. Let $\mathsf{H}$ be any multiset containing
at least one $a$ and one $b$ and is of size just above $\thr_m^{1,j}$.
By observation~(\ref{eq:syntactic-property}) we have
$\thr_m^{1,j} \ge \mathit{thr}_1(\overline{\p})$ and so we have that $\mathsf{H} \models \overline{\p}_1$ and
$a = \mathtt{update}_1(\mathsf{H})$. \end{proof}
The next sequence of lemmas tells us what can happen in a sequence of rounds.
\begin{lemma}\label{lem:any-qbias-from-qbias}
Suppose none of $\p_k,\dots,\p_l$ is an equalizer. If $\set{b,?}\incl
\operatorname{fire}_k(f,\p_k)$ then for every $\th'$ we have
$f\lact{\p_k}_k\dots\lact{\p_l}_l \mathit{bias}^{?}(\th')$. Similarly, for $b$ replaced
by $a$, and $\mathit{bias}^{?}(\th')$ replaced by $\mathit{bias}^{?}_a(\th')$. \end{lemma} \begin{proof}
The proof is by induction on $l-k$.
If $k=l$ then the lemma is clearly true since we can produce both $b$ and $?$
values, and $\p_k$ is not an equalizer.
For the induction step, consider the last round $l$, and let $\th''=\thr_u^l+\e$ for
some small $\e$.
We have $b\in\operatorname{fire}_l(\mathit{bias}^{?}(\th''),\p_l)$ because of the $\mathtt{uni}$ instruction.
We can also construct a multiset $\mathsf{H}$ from $\mathit{bias}^{?}(\th'')$
of size $1-\e' > \mathit{thr}(\p_l)$ for some small $\e' > \e$
containing $\th''-\e'$ fraction of $b$'s and $1-\th''$ fraction of
$?$.
This multiset shows that $?\in\operatorname{fire}_l(\mathit{bias}^{?}(\th''),\p_l)$.
So from $\mathit{bias}^{?}(\th'')$ in round $l$ we can get $\mathit{bias}^{?}(\th')$ for any $\th'$.
The induction assumption gives us $f\lact{\p_k}_k\dots\lact{\p_{l-1}}_{l-1}
\mathit{bias}^{?}(\th'')$, so we are done. \end{proof}
\begin{lemma}\label{lem:any-frequency}
Suppose none of $\p_k,\dots,\p_l$ is an equalizer, and all rounds $k\dots l$
have $\mathtt{mult}$ instructions.
If $\mathit{set}(f')\incl \operatorname{fire}_k(f,\p_k)$ and $? \notin \mathit{set}(f')$
then $f\lact{\p_k}_k\dots\lact{\p_l}_l f'$ is possible.
\end{lemma} \begin{proof}
We proceed by induction on $l-k$.
The lemma is clear when $k = l$. Suppose $k \neq l$.
Consider two cases:
Suppose $f'$ is $\mathit{solo}^a$ or $\mathit{solo}$. By induction hypothesis
we can reach $f'$ after round $l-1$. Since round $l$
has a $\mathtt{uni}$ instruction it is clear that $f' \lact{\p_l}_l f'$.
Suppose $a,b \in \mathit{set}(f')$.
Lemma~\ref{lem:ab-if-mult} says that there is $\th$ for which
$\set{a,b}\incl\operatorname{fire}_l(\mathit{bias}(\th),\p_l)$.
Hence $\mathit{bias}(\th)\lact{\p_l}_l f'$.
By induction hypothesis we can reach $\mathit{bias}(\th)$ after round $l-1$. \end{proof}
\begin{lemma}\label{lem:any-qbias-from-bias}
Suppose none of $\p_k,\dots,\p_l$ is an equalizer, and some round among $k,\dots,l$
does not have a $\mathtt{mult}$ instruction.
For every $\th$ and every $f$ such that $\set{a,b}\incl
\operatorname{fire}_k(f,\p_k)$ we have $f\lact{\p_k}_k\dots\lact{\p_l}_l \mathit{bias}^{?}(\th)$,
and $f\lact{\p_k}_k\dots\lact{\p_l}_l \mathit{bias}^{?}_a(\th)$. \end{lemma} \begin{proof}
Let $i$ be the first round without a $\mathtt{mult}$ instruction.
Using Lemma~\ref{lem:any-frequency}, from the tuple $f$ at round $k$, we can arrive at round $i$
with the tuple $\mathit{bias}(\th)$ for any $\th$.
We choose $\th$ according to Lemma~\ref{lem:bias-preserving} so that
$\set{b,?}\incl \operatorname{fire}_i(\mathit{bias}(\th),\p_i)$.
Then we can apply Lemma~\ref{lem:any-qbias-from-qbias} to prove the claim.
The reasoning for $\mathit{bias}^{?}_a$ is analogous. \end{proof}
\begin{lemma}\label{lem:no-mult-sequence}
If round $\mathbf{ir}+1$ contains a $\mathtt{mult}$ instruction then the algorithm
does not satisfy agreement, or the $\mathtt{mult}$ instruction can be removed without altering the
correctness of the algorithm. \end{lemma}
\begin{proof}
Suppose round $\mathbf{ir}+1$ contains a $\mathtt{mult}$ instruction.
The first case is when there does not exist any tuple $f$ and an execution
$f \lact{\overline{\p}_1}_1 f_1 \dots \lact{\overline{\p}_{\mathbf{ir}-1}}_{\mathbf{ir}-1} f_{\mathbf{ir}-1}$
such that $a,b \in \operatorname{fire}_\mathbf{ir}(f_{\mathbf{ir}-1},\overline{\p})$. It is then clear
that the $\mathtt{mult}$ instructions in round $\mathbf{ir}+1$ will never be fired
and so we can remove all these instructions in round $\mathbf{ir}+1$.
So it remains to examine the case when there exists a tuple $f$ with
$f \lact{\overline{\p}_1}_1 f_1\lact{\overline{\p}_2}_2 \cdots\lact{\overline{\p}_{\mathbf{ir}-1}}_{\mathbf{ir}-1} f_{\mathbf{ir}-1}$ such that
$a,b \in \operatorname{fire}_{\mathbf{ir}}(f_{\mathbf{ir}-1},\overline{\p})$.
In this case we get
$f_{\mathbf{ir}-1} \lact{\overline{\p}_{\mathbf{ir}}}_{\mathbf{ir}} \mathit{bias}(\th)$ for arbitrary $\th$.
Let $I$ be the $\mathtt{mult}$ instruction in round $\mathbf{ir}+1$ with the
highest threshold value.
Recall that, by the proviso from page~\pageref{proviso}, $\overline{\p}$ is not an equalizer.
We consider two sub-cases:
Suppose $I$ has $\mathrm{smor}$ as its operation.
Then we consider $f_{\mathbf{ir}-1} \lact{\overline{\p}_{\mathbf{ir}}}_{\mathbf{ir}} \mathit{spread}$.
As $I$ has $\mathrm{smor}$ as operation, from $\mathit{spread}$ we can construct a multiset $\mathsf{H}$ containing more $a$'s
than $b$'s such that the size of $\mathsf{H}$ is bigger than $\mathit{thr}_{\mathbf{ir}+1}(\overline{\p})$ and $\thr_m^{\mathbf{ir}+1,1}$.
Similarly we can construct a multiset having more $b$'s than $a$'s.
Hence we get $\mathit{spread} \lact{\overline{\p}_{\mathbf{ir}+1}}_{\mathbf{ir}+1} \mathit{bias}(\th')$
for arbitrary $\th'$.
If all rounds after $\mathbf{ir}+1$ have $\mathtt{mult}$ instructions, then we can
apply Lemma~\ref{lem:any-frequency} to conclude that we can
reach the tuple $\mathit{spread}$ after round $r$, thereby deciding
on both $a$ and $b$ and violating agreement.
Otherwise we can use Lemma~\ref{lem:any-qbias-from-bias} to conclude that we can
reach the tuple $\mathit{spread}^?$ after round $r$ and hence
make half the processes decide on $b$.
Notice that after this phase the state of the algorithm is $(\mathit{spread},\mathit{spread}^?)$.
We know, by Lemma~\ref{lem:no-mult} that the first round has a $\mathtt{mult}$ instruction.
This instruction has $\mathrm{smor}$ or $\min$ as its operation;
it is clear that in either case $a \in \operatorname{fire}_1(\mathit{spread},\overline{\p})$ and
so we can get $\mathit{spread} \lact{\overline{\p}_1}_1 \mathit{solo}^a$ and
$\mathit{solo}^a \lact{\overline{\p}_i}_i \mathit{solo}^a$ for $i > 1$, thereby making the
rest of the undecided processes decide on $a$. Hence
agreement is violated.
Suppose $I$ has $\min$ as its operation. Then we consider
$f_{\mathbf{ir}-1} \lact{\overline{\p}_{\mathbf{ir}}}_{\mathbf{ir}} \mathit{bias}(\th)$ where $\th > \max(\thr_u^{\mathbf{ir}+1},\thr_u^{1},\mathit{thr}_{\mathbf{ir}+1}(\overline{\p} ))$ is sufficiently big.
It is clear that $b \in \operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}(\th),\overline{\p})$.
Further if we take our multi-set $\mathsf{H}$ to be $\mathit{bias}(\th)$ itself,
then (because of the instruction $I$) we have $a \in \operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}(\th),\overline{\p})$.
Hence we get $\mathit{bias}(\th) \lact{\overline{\p}_{\mathbf{ir}+1}}_{\mathbf{ir}+1} \mathit{bias}(\th')$
for arbitrary $\th'$.
As in the previous case, either this immediately allows us to conclude that agreement is violated, or this allows us to make half the processes decide on $a$. In the latter case,
note that the state of the algorithm after this
phase will be $(\mathit{bias}(\th),\mathit{spread}^?_a)$.
Since $\th \ge \thr_u^1$ and since $\thr_u^1 \ge \mathit{thr}_1(\overline{\p})$ by observation~(\ref{eq:syntactic-property}), it follows
that $b \in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$.
Hence we can get $\mathit{solo}$ as the tuple after the first round and decide
on $b$, as in the previous case.
\end{proof}
\begin{lemma}\label{lem:no-min}
If the first round has a $\mathtt{mult}$ instruction with
$\min$ as the operation then the algorithm does not satisfy agreement. \end{lemma}
\begin{proof}
Suppose that indeed the first round does have a $\mathtt{mult}$ instruction with
$\min$ operation.
Thanks to our proviso, the global predicate does not have an equalizer, hence we
can freely apply Lemmas~\ref{lem:any-frequency} and~\ref{lem:any-qbias-from-bias}.
We use Lemma~\ref{lem:ab-if-mult} to find $\th$ with $\set{a,b}\incl
\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$.
We consider two cases.
If all the rounds $2,\dots,\mathbf{ir}$ have a $\mathtt{mult}$ instruction then
Lemma~\ref{lem:any-frequency} allows us to get $\mathit{bias}(\th')$, for arbitrary
$\th'$, after round $\mathbf{ir}$.
By Lemma~\ref{lem:no-mult-sequence} there is no $\mathtt{mult}$ instruction in round
$\mathbf{ir}+1$.
By Lemma~\ref{lem:bias-preserving} there is $\th'$ such that $\set{b,?}\incl
\operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}(\th'),\overline{\p})$.
Using Lemma~\ref{lem:any-qbias-from-qbias} we can make some process decide on $b$, while
keeping the other processes undecided.
Hence the state of the algorithm after this phase is $(\mathit{bias}(\th'),\mathit{spread}^?)$.
By Lemma~\ref{lem:a-if-min}, $a\in\operatorname{fire}_1(\mathit{bias}(\th'),\overline{\p})$, and so we can get
$\mathit{solo}^a$ as the tuple after the first round and make all the other processes decide on $a$.
The second case is when one of the rounds $2,\dots,\mathbf{ir}$ does not have a $\mathtt{mult}$
instruction.
For arbitrary $\th'$, Lemma~\ref{lem:any-qbias-from-bias} allows us to get
$\mathit{bias}^{?}(\th')$ after round $\mathbf{ir}$.
As in the case above, we use it to make some process decide on $b$, while
leaving the others undecided.
In the next phase we make other processes decide on $a$. \end{proof}
\begin{lemma}\label{lem:constants}
If the property of constants from Definition~\ref{def:structure} is not satisfied then the algorithm does not satisfy
agreement. \end{lemma}
\begin{proof}
We consider an execution of a phase under the
global predicate and so we can freely use Lemmas~\ref{lem:any-frequency}
and~\ref{lem:any-qbias-from-bias}.
We have seen in Lemma~\ref{lem:no-min} that in the first round all the
$\mathtt{mult}$ instructions must be $\mathrm{smor}$. We start with the state
$(\mathit{spread},\mathit{solo}^?)$.
We consider two cases.
\textbf{First case:} There are no preserving rounds before
round $\mathbf{ir}+1$. Hence every round before $\mathbf{ir}+1$ has a $\mathtt{mult}$ instruction. By Lemma~\ref{lem:any-frequency} from $\mathit{spread}$
we can get $\mathit{bias}(\th)$ (for any $\th$) as the tuple before
round $\mathbf{ir}+1$. Choose $\th = \thr_u^{\mathbf{ir}+1} + \e$ for some
small $\e$.
By Lemma~\ref{lem:no-mult-sequence} we know
that round $\mathbf{ir}+1$ does not have any $\mathtt{mult}$ instruction.
This implies that $? \in \operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}(\th),\overline{\p})$.
Further, by observation~(\ref{eq:syntactic-property}) we know
that $\thr_u^{\mathbf{ir}+1} \ge \mathit{thr}_{\mathbf{ir}+1}(\overline{\p})$.
Therefore, $b \in \operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}(\th),\overline{\p})$. Hence
$\set{b,?} \subseteq \operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}(\th),\overline{\p})$.
\textbf{Second case: } There is a round $j < \mathbf{ir}+1$ such
that round $j$ is preserving. Let $j$ be the first such round.
By Lemma~\ref{lem:any-frequency} from $\mathit{spread}$ we can get
$\mathit{bias}(\thr_u^j+\e')$ (for some small $\e'$) before round $j$.
Since round $j$ is preserving it follows that either round $j$ has no $\mathtt{mult}$ instruction or $\mathit{thr}_j(\overline{\p}) < \max(\thr_u^j,\thr_m^{j,k})$.
It is then clear that $? \in \operatorname{fire}_j(\mathit{bias}(\thr_u^j+\e'),\overline{\p})$.
It is also clear that $b \in \operatorname{fire}_j(\mathit{bias}(\thr_u^j+\e'),\overline{\p})$.
Notice that by Lemma~\ref{lem:any-qbias-from-bias} we can
get $\mathit{bias}^{?}(\th)$ (for any $\th$) as the tuple before round
$\mathbf{ir}+1$. Choose $\th = \thr_u^{\mathbf{ir}+1} + \e$ for some small
$\e$.
It is clear that we can construct a multi-set $\mathsf{H}$ of size $1-\e$
consisting of a $\thr_u^{\mathbf{ir}+1}$ fraction of $b$'s and
the remaining as $?$'s from the tuple $\mathit{bias}^{?}(\th)$. Notice that $\mathsf{H}$
does not satisfy any instruction and (for a small enough $\e$)
is bigger than $\mathit{thr}_{\mathbf{ir}+1}(\overline{\p})$.
Further by sending the entire tuple as a multi-set we get that
$b \in \operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}^{?}(\th),\overline{\p})$. Hence
$\set{b,?} \subseteq \operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}^{?}(\th),\overline{\p})$.\\
In both cases, we can then use Lemma~\ref{lem:any-qbias-from-bias}
to ensure that half the processes remain undecided and half the processes decide on $b$.
Further, in both cases, we can arrange the execution in such a way that the state after this phase is either $(\mathit{bias}(\th),\mathit{spread}^?)$ or $(\mathit{spread},\mathit{spread}^?)$.
If the state is $(\mathit{spread},\mathit{spread}^?)$ then by Lemma~\ref{lem:spread} $a \in \operatorname{fire}_1(\mathit{spread},\overline{\p})$
and so in the next phase we can get $\mathit{solo}^{a}$ as the tuple after the first
round and make the other processes decide on $a$.
In the remaining case we consider separately the two conditions on constants
that can be violated.
If $\thr_m^{1,k}/2<1-\thr_u^{\mathbf{ir}+1}$ then in the first round of the next phase
consider the $\mathsf{H}$ set containing all the $a$'s in $\mathit{bias}(\th)$ and the number of $b$'s
smaller by $\e$ than the number of $a$'s.
The size of this set is $(1-\th)+(1-\th-\e)=2(1-\thr_u^{\mathbf{ir}+1})-3\e$.
For a suitably small $\e$, this quantity is bigger than $\thr_m^{1,k}$
which by observation~(\ref{eq:syntactic-property}) is bigger
than $\mathit{thr}_1(\p)$.
So we can get $\mathit{solo}^a$ as the tuple after the first round and then use this to make the undecided processes decide on $a$.
If $\thr_u^1<1-\thr_u^{\mathbf{ir}+1}$, then just take the $\mathsf{H}$ set
consisting of all the $a$'s in $\mathit{bias}(\th)$.
Once again for a small enough $\e$, the size of this set is bigger than $\thr_u^1$ which by
observation~(\ref{eq:syntactic-property}) is bigger than $\mathit{thr}_1(\p)$.
Hence, we can get $\mathit{solo}^a$ as the tuple after the first round
and use this to make the undecided processes decide on $a$. \end{proof}
\begin{lemma}\label{lem:agreement}
If all the structural properties are satisfied then the algorithm satisfies agreement. \end{lemma}
\begin{proof}
It is clear that the algorithm satisfies agreement when the state of
the $\mathit{inp}$ variable
is either $\mathit{solo}$ or $\mathit{solo}^a$. Suppose
we have an execution $(\mathit{bias}(\theta),d) \act{\overline{\p}}\cdots\act{\overline{\p}}
(\mathit{bias}(\theta'),d')$
such that $(\mathit{bias}(\theta'),d')$ is the first state in this execution
with a process $p$ which has decided on a value.
We consider the case when $a$ is this value.
The other case is analogous.
Since
$\thr_m^{1,k}/2 \geq 1-\thr_u^{\mathbf{ir}+1}$ we have that $\thr_u^{\mathbf{ir}+1} \ge 1/2$. Further round $\mathbf{ir}+1$ does not have a $\mathtt{mult}$ instruction.
It then follows directly from the semantics that if $q$ is a process
then either $d'(q) = a$ or $d'(q) = ?$.
Further notice that since $a$ was decided by some process, it has to be
the case that more than a $\thr_u^{\mathbf{ir}+1}$ fraction of the processes have $a$ as their $\mathit{inp}$ value. Hence $\theta' < 1 - \thr_u^{\mathbf{ir}+1}$.
Since $\theta' < 1 - \thr_u^{\mathbf{ir}+1} \le \thr_u^1$, it follows that $b$ cannot be fired from $\mathit{bias}(\theta')$ using the $\mathtt{uni}$ instruction in the first round.
Since $\theta' < 1 - \thr_u^{\mathbf{ir}+1} \le \thr_m^{1,k}/2$ and since every $\mathtt{mult}$ instruction in the first round has $\mathrm{smor}$ as its operator, it follows that
$b$ cannot be fired from $\mathit{bias}(\theta')$ using the $\mathtt{mult}$ instruction as well.
Hence the number of $b$'s in the $\mathit{inp}$ tuple can only decrease from this point onwards and so it follows that no process from this point onwards can decide on $b$.
The same argument applies if there are more than two values. \end{proof}
\subsubsection*{Part 2: termination} We consider only two values $a,b$. It is direct from the arguments below that the termination proof also works if there are more values.
\begin{lemma}\label{lem:fire-one}
For the global predicate $\overline{\p}$: $a\in\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$ iff $\th<\bar{\thr}$.
(Similarly $b\in\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$ iff $1-\bar{\thr}<\th$). \end{lemma} \begin{proof}
In order for a multi-set $\mathsf{H}$ to be such that $a = \mathtt{update}_1(\mathsf{H})$
there are two possibilities: (i) it should be of size $>\thr_u^1$ and contain only $a$'s, or (ii) of size
$>\thr_m^{1,k}$ and contain at least as many $a$'s as $b$'s.
Recall that by observation~(\ref{eq:syntactic-property}) on page~\pageref{eq:syntactic-property} we have
$\thr_u^1 \ge \mathit{thr}_1(\overline{\p})$ and $\thr_m^{1,k} \ge \mathit{thr}_1(\overline{\p})$.
The number of $a$'s in $\mathit{bias}(\th)$ is $1-\th$.
So the first case is possible only if $1-\th>\thr_u^1$, i.e., when $\th<1-\thr_u^1$.
Further if $\th < 1-\thr_u^1$, then we can send a set $\mathsf{H}$ consisting only of
$a$'s, such that $|\mathsf{H}| > \thr_u^1 \ge \mathit{thr}_1(\overline{\p})$
and so $a \in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$.
The second case is possible only if $1-\th>\thr_m^{1,k}/2$, or equivalently, $\th<1-\thr_m^{1,k}/2$. Further if $\th < 1-\thr_m^{1,k}/2$,
then we can send a set $\mathsf{H}$ of size just above $\thr_m^{1,k}$ with at
least as many $a$'s as $b$'s, which will ensure that $a \in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$.
To sum up, $a \in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$ iff $\th<\bar{\thr}$.
The proof for $b \in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$ iff $1-\bar{\thr} < \th$ is similar. \end{proof}
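As a sanity check of this equivalence, the following standalone Python script brute-forces $a\in\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$ on a grid of $\th$'s for one hypothetical choice of round-one constants. The constants, the tuple size and the encoding are ours; by observation~(\ref{eq:syntactic-property}) the threshold of the global predicate is dominated by $\thr_u^1$ and $\thr_m^{1,k}$, so admissibility adds no extra constraint and is omitted here.
\begin{verbatim}
from fractions import Fraction as F

THR_U1 = F(2, 3)    # hypothetical uni threshold of round 1
THR_M1K = F(1, 2)   # hypothetical smallest mult (smor) threshold of round 1
N = 60              # tuple size

def a_in_fire1(theta):
    """Can some heard-of set taken from bias(theta) fire a?"""
    num_a = N - int(theta * N)          # number of a's in bias(theta)
    num_b = N - num_a
    for na in range(num_a + 1):
        for nb in range(num_b + 1):
            size = F(na + nb, N)
            if na > 0 and nb == 0 and size > THR_U1:
                return True             # uni instruction fires a
            if na >= nb and size > THR_M1K:
                return True             # smor instruction fires a
    return False

bar_thr = max(1 - THR_U1, 1 - THR_M1K / 2)   # = max(1/3, 3/4) = 3/4

for k in range(N + 1):
    theta = F(k, N)
    assert a_in_fire1(theta) == (theta < bar_thr)
print("a is obtainable exactly when theta <", bar_thr)
\end{verbatim}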
\begin{corollary}\label{cor:no-a-above-bthr}
For every predicate $\p$, if $\th\geq\bar{\thr}$ then $a\not\in\operatorname{fire}_1(\mathit{bias}(\th),\p)$.
Similarly if $\th\leq 1-\bar{\thr}$ then $b \not \in\operatorname{fire}_1(\mathit{bias}(\th),\p)$. \end{corollary} \begin{proof}
We have assumed that every predicate implies the global predicate, so every
$\mathsf{H}$ set that is admissible w.r.t.\ some predicate, is also admissible
w.r.t.\ the global predicate.
Lemma~\ref{lem:fire-one} says that $a$ cannot be obtained under the global predicate if $\th \geq \bar{\thr}$.
Similar proof holds for the other claim as well. \end{proof}
\begin{lemma}\label{lem:unifier}
Suppose $\p$ is a unifier and $\mathit{bias}(\th)\act{\p}f$.
If $\thr_u^1\leq \thr_m^{1,k}$ or $1-\bar{\thr}\le\th\le\bar{\thr}$
then $f=\mathit{solo}$ or $f=\mathit{solo}^a$. \end{lemma} \begin{proof}
We first show that the value $?$ cannot be produced in the first round.
Since $\p$ is a unifier we have $\mathit{thr}_1(\p) \geq \thr_m^{1,k}$.
If $\mathit{thr}_1(\p) \geq \thr_u^1$ then we are done.
Otherwise $\mathit{thr}_1(\p)< \thr_u^1$, implying $\mathit{thr}_1(\p)\geq \bar{\thr}$, by the definition
of unifier.
We consider $1-\bar{\thr}\le\th\le\bar{\thr}$, and the tuple $\mathit{bias}(\th)$.
In this case, every heard-of multiset $\mathsf{H}$ strictly bigger than the
threshold $\bar{\thr}$ (and hence bigger than $\mathit{thr}_1(\p)$)
must contain both $a$ and $b$.
Since there is a $\mathtt{mult}$ instruction in the first round
(and since $\mathit{thr}_1(\p) \ge \thr_m^{1,k}$), the first
round cannot produce $?$, i.e., after the first round
the value of the variable $x_1$ of each process is either $a$ or $b$.
Let $i$ be the round such that $\p_i$ is an equalizer
and rounds $2,\dots,i$ are non-preserving.
This round exists by the definition of a unifier.
Thanks to the above, we know that after the first round no process
has $?$ as its $x_1$ value. Since rounds $2,\dots,i$ are non-preserving, it follows that
till round $i$ we cannot produce $?$ under the predicate $\p$.
Because $\p_i$ has an equalizer, after round $i$ we either have the tuple $\mathit{solo}$ or $\mathit{solo}^a$.
This tuple stays till round $\mathbf{ir}$ as the rounds $i+1,\dots,\mathbf{ir}$ are solo-safe. \end{proof}
Observe that if rounds $\mathbf{ir}+1,\dots,r$ of a unifier $\p$ are solo-safe then $\p$ is also a decider and all processes decide. Otherwise some processes may not decide. So a unifier by itself is not sufficient to guarantee termination.
\begin{lemma}\label{lem:decider}
If $\p$ is a decider and $(\mathit{solo},\mathit{solo}^?)\act{\p}(f',d')$
then $(f',d') = (\mathit{solo},\mathit{solo})$.
Similarly, if $(\mathit{solo}^a,\mathit{solo}^?) \act{\p}(f',d')$ then $(f',d') =
(\mathit{solo}^a,\mathit{solo}^a)$.
In case $\thr_m^{1,k}\leq \thr_u^1$, for every $\th\geq\bar{\thr}$: if
$(\mathit{bias}(\th),\mathit{solo}^?)\act{\p}(f',d')$ then $(f',d')=(\mathit{solo},\mathit{solo})$
and for every $\th\leq 1-\bar{\thr}$: if
$(\mathit{bias}(\th),\mathit{solo}^?)\act{\p}(f',d')$ then $(f',d')=(\mathit{solo}^a,\mathit{solo}^a)$. \end{lemma} \begin{proof}
The first two statements are direct from the definition as all the rounds in a
decider are solo-safe.
We only prove the third statement, as the proof of the fourth statement is similar.
For the third statement, by Corollary~\ref{cor:no-a-above-bthr} after the first round we
cannot produce $a$'s under $\p_1$.
Because the first round is solo-safe, we get
$\thr_u^1\leq\mathit{thr}_1(\p)$; and since $\thr_m^{1,k}\leq \thr_u^1$, we get $\thr_m^{1,k}\leq \mathit{thr}_1(\p)$.
Hence, the first round cannot produce $?$ either.
This means that from $\mathit{bias}(\th)$ as the input tuple, we can
only get $\mathit{solo}$ as the tuple after the first round under the
predicate $\p_1$.
Since rounds $2,\dots,r$ are solo-safe it follows that
all the processes decide on $b$ in round $r$. \end{proof}
We are now ready to show one direction of Theorem~\ref{thm:core}.
\begin{lemma}\label{main-positive}
If an algorithm in the core language has the structural properties from
Definition~\ref{def:structure}, and satisfies condition T then it solves consensus. \end{lemma} \begin{proof}
Lemma~\ref{lem:agreement} says that the algorithm satisfies agreement.
If condition T holds, there is a unifier followed by a decider.
If $\thr_u^1\leq \thr_m^{1,k}$ then after a unifier the $\mathit{inp}$ tuple becomes
$\mathit{solo}$ or $\mathit{solo}^{a}$ thanks to Lemma~\ref{lem:unifier}.
After a decider all processes decide thanks to Lemma~\ref{lem:decider}.
Otherwise $\thr_m^{1,k}< \thr_u^1$.
If before the unifier the $\mathit{inp}$ tuple was $\mathit{bias}(\th)$ with $1-\bar{\thr}\le\th\le\bar{\thr}$ then after the
unifier $\mathit{inp}$ becomes $\mathit{solo}$ or $\mathit{solo}^{a}$ thanks to Lemma~\ref{lem:unifier}.
We once again conclude as above.
If $\th>\bar{\thr}$ (or $\th < 1-\bar{\thr}$) then by Corollary~\ref{cor:no-a-above-bthr}, the number of $b$'s (resp. number of $a$'s) can only
increase after this point. Hence till the decider,
the state of the $\mathit{inp}$ tuple remains as $\mathit{bias}(\th')$ with
$\th' > \bar{\thr}$ (resp. $\th' < 1-\bar{\thr}$).
After a decider all processes decide thanks to Lemma~\ref{lem:decider}. \end{proof}
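Condition T itself is a simple property of the sequence of sporadic predicates. As a minimal illustration, here is a sketch of the check in Python, assuming every predicate has already been classified as a unifier and/or a decider (the classification depends on the thresholds and equalizers as defined earlier); ``followed by'' is not necessarily strict, i.e., a predicate that is both a unifier and a decider suffices, consistently with the case analysis in the non-termination proof below.
\begin{verbatim}
def condition_T(predicates):
    """predicates: list of (is_unifier, is_decider) flags,
    one pair per sporadic predicate phi^1, ..., phi^k."""
    seen_unifier = False
    for is_unifier, is_decider in predicates:
        seen_unifier = seen_unifier or is_unifier
        if seen_unifier and is_decider:
            return True
    return False

# phi^1 unifier, phi^2 neither, phi^3 decider: condition T holds.
assert condition_T([(True, False), (False, False), (False, True)])
# The only decider precedes the only unifier: condition T fails.
assert not condition_T([(False, True), (True, False)])
\end{verbatim}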
\subsubsection*{Part 3: non-termination}
\begin{lemma}\label{lem:not-decider}
If $\p$ is not a decider then
$\mathit{solo}\act{\p}\mathit{solo}$ and $\mathit{solo}^a \act{\p} \mathit{solo}^a$; namely, no process may decide.
\end{lemma} \begin{proof}
If $\p$ is not a decider then there is a round, say $i$, that is not solo-safe.
By definition this means $\mathit{thr}_i(\p)< \thr_u^i$.
It is then easy to verify that for $j < i$, $\mathit{solo} \lact{\p_j}_j \mathit{solo}$,
$\mathit{solo} \lact{\p_i}_i \mathit{solo}^?$ and $\mathit{solo}^? \lact{\p_k}_k \mathit{solo}^?$ for $k > i$. Hence this
ensures that no process decides during this phase.
Similar proof holds when the $\mathit{inp}$ tuple is $\mathit{solo}^a$.
\end{proof}
\begin{lemma}\label{lem:global-th}
For the global predicate $\overline{\p}$: if $1/2\leq\th<\bar{\thr}$, then
$\mathit{bias}(\th)\act{\overline{\p}}\mathit{bias}(\th')$ for every $\th'\geq 1/2$. \end{lemma} \begin{proof}
We first observe that $a,b \in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$.
Indeed, by Lemma~\ref{lem:fire-one}, $a\in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$.
Further since $1/2 \le \th$ and every $\mathtt{mult}$ instruction
in the first round has $\mathrm{smor}$ as operator, it
follows that $b \in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$.
Recall that by our proviso, the global predicate is not an equalizer.
Suppose all of the rounds $2,\dots,\mathbf{ir}$ have $\mathtt{mult}$ instructions.
Then Lemma~\ref{lem:any-frequency} allows us to get
$\mathit{bias}(\th')$ as the tuple after round $\mathbf{ir}$.
Moreover, our proviso from page~\pageref{proviso} says that there is no
$\mathtt{mult}$ instruction in round $\mathbf{ir}+1$.
So we can get $\mathit{solo}^{?}$ as the tuple after round $\mathbf{ir}+1$ by sending the whole multiset. We can then propagate the tuple $\mathit{solo}^{?}$ all
the way till the last round.
This ensures that no process decides and we are done in this case.
Otherwise there is a round $j$ such that $2 \le j \le \mathbf{ir}$ and $j$ does not have any $\mathtt{mult}$ instruction.
By Lemma~\ref{lem:any-qbias-from-bias}
we can get $\mathit{bias}^{?}(\th'')$ as well as $\mathit{bias}^{?}_a(\th'')$ (for any $\th''$) after
round $\mathbf{ir}$.
There are two cases depending on whether $\th'\geq\th$ or not.
If $\th'\geq \th$, then we consider the tuple $\mathit{bias}^{?}(\th'')$ for some $1/2\le \th''<
\min(\thr_u^{\mathbf{ir}+1}-\e,\th)$ (where $\e$ is some small number).
Notice that by Lemma~\ref{lem:constants} we have $\thr_m^{1,k}/2 \geq 1-\thr_u^{\mathbf{ir}+1}$ and since
$\thr_m^{1,k} < 1$, this implies that $\thr_u^{\mathbf{ir}+1} > 1/2$, and so such a $\th''$ exists.
It is clear that $? \in \operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}^{?}(\th''),\overline{\p})$
and so we can get $\mathit{solo}^{?}$ as the tuple after round $\mathbf{ir}+1$ thereby
ensuring that no process decides.
To terminate, we need to arrange this execution so that the state of $\mathit{inp}$
becomes $\mathit{bias}(\th')$ after this phase.
Since $\th''\ge1/2$ we have enough $b$'s to change $\th'-\th$ fraction of $a$'s
to $b$'s.
We leave the other values unchanged.
This changes the state of $\mathit{inp}$ from $\mathit{bias}(\th)$
to $\mathit{bias}(\th')$.
Suppose $\th' < \th$.
By Lemma~\ref{lem:any-qbias-from-bias}, after round $\mathbf{ir}$ we can reach the
tuple
$\mathit{bias}^{?}_a(\th'')$ for $\th''=\th-\th'$.
Arguing as before, we can ensure that the
state of the $\mathit{inp}$ can be converted to $\mathit{bias}(\th')$. We just have
to show that all processes can choose to not decide in the last
round.
We observe that $\th'' \le \thr_u^{\mathbf{ir}+1}$.
Indeed since $\th < \bar{\thr} \le 1$ and $\th' \geq 1/2$, it
follows that $\th'' < 1/2 \le \thr_u^{\mathbf{ir}+1}$, where the last inequality
follows from the discussion in the previous paragraph.
Now, as $\th'' \le \thr_u^{\mathbf{ir}+1}$, if we send
the entire tuple $\mathit{bias}_a^?(\th'')$ to every process, we get $\mathit{solo}^{?}$ as the tuple after
round $\mathbf{ir}+1$, hence making the processes not decide on anything in the last
round.
\end{proof}
\begin{lemma}\label{lem:not-uni}
If $\p$ is not a unifier then
\begin{equation*}
\mathit{bias}(\th)\act{\p}\mathit{bias}(\th) \quad\text{for some $1/2\leq\th<\bar{\thr}$.}
\end{equation*} \end{lemma} \begin{proof}
We examine all the reasons why $\p$ may not be a unifier.
First let us look at conditions on constants.
If $\mathit{thr}_1(\p)<\thr_m^{1,k}$ then let $\th=1/2$. In the first round, we can then send to every process a multi-set
$\mathsf{H}$ with both $a$'s and $b$'s, and of size between $\mathit{thr}_1(\p)$ and $\thr_m^{1,k}$.
This allows us to get $\mathit{solo}^{?}$ as the tuple after the first round, and
ensures that neither the $\mathit{inp}$ tuple nor the $\mathit{dec}$ tuple gets
updated in this phase.
Suppose $\mathit{thr}_1(\p)<\thr_u^1$ and $\mathit{thr}_1(\p) < \bar{\thr}$.
Let $\e$ be such that $\mathit{thr}_1(\p) + \e < \min(\thr_u^1,\bar{\thr})$
and let $\th = \max(\mathit{thr}_1(\p)+\e,1/2)$.
In the first round, by sending to every process a multi-set consisting of a $(\mathit{thr}_1(\p)+\e)$ fraction of $b$'s from $\mathit{bias}(\th)$, we get $\mathit{solo}^{?}$ as the tuple after the first round
and that allows us to conclude as before.
The second reason is that there is no equalizer in $\p$ up to round $\mathbf{ir}$.
We take $\th=1/2$.
By Lemmas~\ref{lem:spread} and~\ref{lem:no-min}, we have
$a,b\in\operatorname{fire}_1(\mathit{spread},\p)$.
If all the rounds $1,\dots,\mathbf{ir}$ have a $\mathtt{mult}$ instruction then
Lemma~\ref{lem:any-frequency} allows us to get $\mathit{spread}$ as the tuple after round $\mathbf{ir}$.
Lemma~\ref{lem:no-mult-sequence} says that there cannot be a $\mathtt{mult}$ instruction
in round $\mathbf{ir}+1$, so by sending the whole multiset in this round, we get
$\mathit{solo}^{?}$ as the tuple after round $\mathbf{ir}+1$. This
ensures that no process decides in this phase.
The other case is when there is a round among $1,\dots,\mathbf{ir}$ without a $\mathtt{mult}$ instruction.
Lemma~\ref{lem:any-qbias-from-bias} allows us to get $\mathit{solo}^{?}$ after round $\mathbf{ir}$ and
so neither $\mathit{inp}$ nor $\mathit{dec}$ of any process gets updated.
The last reason is that there is a round before an equalizer that is
preserving, or a round after the equalizer that is not solo-safe.
In both cases we can get $\mathit{solo}^{?}$ as the tuple at round $\mathbf{ir}$ and conclude as before. \end{proof}
The next lemma gives the main non-termination argument. \begin{lemma}
If the structural conditions from Definition~\ref{def:structure} hold, but condition T
does not hold then the algorithm does not terminate. \end{lemma} \begin{proof}
We recall that the communication predicate is:
\begin{equation*}
(\lG\overline{\p})\land(\lF(\p^1\land\lF(\p^2\land\dots (\lF\p^k)\dots)))
\end{equation*}
and that we have assumed that the global predicate implies all sporadic predicates.
This means, for example, that if the global predicate is a decider then all
sporadic predicates are deciders.
We construct an execution
\begin{equation*}
f_1\act{\p^1}f'_1\act{\overline{\p}}f_2\act{\p^2}f'_2\act{\overline{\p}}\dots \act{\p^k}f'_k\act{\overline{\p}} f'_k
\end{equation*}
where every second arrow is a transition on the global predicate.
The last transition on the global predicate is a self-loop.
Recall that we write $f\act{\p}f'$ for $(f,\mathit{solo}^{?})\act{\p}(f',\mathit{solo}^{?})$, so
indeed the run as above is a witness to non-termination.
We examine several cases.
If none of $\p^1,\dots,\p^k,\overline{\p}$ is a decider, then we take $f_i=f'_i=\mathit{solo}$ for
all $i=1,\dots,k$.
By Lemma~\ref{lem:not-decider} we get the desired execution.
Suppose the last decider in the sequence $\p^1,\dots,\p^k$ is $\p^l$.
(Notice that if the global predicate $\overline{\p}$ is a decider then $l=k$.)
By our assumption, none of $\p^1,\dots,\p^l$ are unifiers.
By Lemma~\ref{lem:not-uni}, for every $\p^i$, $i=1,\dots,l$, there is
$1/2\leq\th_i<\bar{\thr}$ such that $\mathit{bias}(\th_i)\act{\p^i}\mathit{bias}(\th_i)$.
So we take $f_i=f'_i=\mathit{bias}(\th_i)$.
We can then use Lemma~\ref{lem:global-th} to get $f'_i \act{\overline{\p}} f_{i+1}$, for
all $i=1,\dots,l-1$.
This gives us an execution up to $f'_l$.
To complete the execution we consider two cases.
If $l = k$, then by Lemma~\ref{lem:global-th} we have
$f'_k\act{\overline{\p}}f'_k$ and so we are done.
Otherwise $l<k$, and we use Lemma~\ref{lem:global-th} to get
$f'_l\act{\overline{\p}}\mathit{solo}$.
We set $f_j=f'_j=\mathit{solo}$ for $j>l$.
Since $l < k$, neither the global predicate nor any of $\p^{l+1},\dots,\p^k$ is a
decider, and so by Lemma~\ref{lem:not-decider} we get
the desired execution. \end{proof}
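To make the construction concrete, here is a hypothetical instance (the choice of predicates is illustrative, not taken from any particular algorithm): let $k=2$, let $\p^1$ be a decider that is not a unifier, and let neither $\p^2$ nor $\overline{\p}$ be deciders, so $l=1$. Lemma~\ref{lem:not-uni} yields some $1/2\leq\th_1<\bar{\thr}$ with $\mathit{bias}(\th_1)\act{\p^1}\mathit{bias}(\th_1)$, and the witness run becomes
\begin{equation*}
\mathit{bias}(\th_1)\act{\p^1}\mathit{bias}(\th_1)\act{\overline{\p}}\mathit{solo}\act{\p^2}\mathit{solo}\act{\overline{\p}}\mathit{solo},
\end{equation*}
where the second transition uses Lemma~\ref{lem:global-th} and the last two use Lemma~\ref{lem:not-decider}.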
We prove the characterization from Theorem~\ref{thm:ts}. Recall that in this extension we add timestamps to the $\mathit{inp}$ variable, i.e., timestamps are sent along with $\mathit{inp}$ and are updated whenever $\mathit{inp}$ is updated. The semantics of rounds differs only in the first round, where we have $(f_0,t) \lact{}_1 f_1$ instead of the $f_0 \lact{}_1 f_1$ of the core language. Further, whenever the $\mathit{inp}$ of a process is updated, the timestamp is updated as well. (In particular, if the value of $\mathit{inp}$ of a process was $a$ and later it was updated to $a$ once again, then the value of $\mathit{inp}$ does not change but the timestamp is updated.)
\begin{definition}
We introduce some abbreviations for tuples of values with timestamps.
For every tuple of values $f$, and every $i \in \mathbb{N}$ define
$(f,i)$ to be an $\mathit{inp}$-timestamp tuple where the value of $\mathit{inp}$ for process
$p$ is $f(p)$ and the value of the timestamp is $i$. So, for example,
$(\mathit{spread},0)$ denotes the tuple where the value of $\mathit{inp}$ for half of the
processes is $a$, for the other half it is $b$, and the
timestamp for every process is $0$. \end{definition}
Similarly to the core case, we give the proof of Theorem~\ref{thm:ts} in three parts: we deal first with structural properties, then with termination, and finally with non-termination.
\subsubsection*{Part 1: Structural properties for timestamps} The structure of the argument is similar to the core case.
\begin{lemma}\label{lem:ts-no-mult}
If there is no $\mathtt{mult}$ instruction in the first round then the algorithm does not have
the termination property. \end{lemma} \begin{proof}
It is easy to verify that $(\mathit{spread},0) \act{\p} (\mathit{spread},0)$ is a phase transition, for
every communication predicate $\p$. \end{proof}
\begin{lemma}\label{lem:ts-no-uni}
If there is a round without a $\mathtt{uni}$ instruction then the algorithm does not have
the termination property. \end{lemma} \begin{proof}
Let $l$ be the first round without a $\mathtt{uni}$ instruction. If $l \le \mathbf{ir}$,
we have $(\mathit{solo}^a,0)\act{\p}(\mathit{solo}^a,0)$ for every communication predicate.
Otherwise we get $(\mathit{solo}^a,i) \act{\p} (\mathit{solo}^a,i+1)$ for every communication predicate. \end{proof}
The next lemma points out a crucial difference with the case without timestamps (cf.~Lemma~\ref{lem:fire-one}).
\begin{lemma}\label{lem:ts-bias-fire}
For every
$\p$, we have $\set{a,b}\incl\operatorname{fire}_1((\mathit{bias}(\th),i),\p)$ for every $i$
and sufficiently big $\th$. \end{lemma}
\begin{proof}
We let $\th = \max(\thr_u^1,\mathit{thr}_1(\p)) + \e$ for small enough $\e$.
So $b \in \operatorname{fire}_1((\mathit{bias}(\th),i),\p)$, when we take $\mathsf{H}$ to contain all
the $b$'s.
Since all the $\mathtt{mult}$ instructions have $\mathrm{maxts}$ as their operator,
it follows that if we take a multi-set $\mathsf{H}$ consisting of all the values in
the tuple then $a = \mathtt{update}_1(\mathsf{H})$. \end{proof}
Since the semantics of the rounds remains the same except for the first one, Lemmas~\ref{lem:any-qbias-from-qbias},~\ref{lem:any-frequency} and~\ref{lem:any-qbias-from-bias} apply to timestamp algorithms for rounds $k\geq2$. For the first round, we get the following reformulations.
\begin{lemma}\label{lem:ts-any-frequency}
Suppose rounds $1\dots l$ all have $\mathtt{mult}$
instructions
and none of $\p_1,\dots,\p_l$ is an equalizer.
If $\mathit{set}(f')\incl \operatorname{fire}_1((f,t),\p_1)$ and $? \notin \mathit{set}(f')$
then
$(f,t)\lact{\p_1}_1\dots\lact{\p_l}_l f'$ is possible. \end{lemma} \begin{proof}
Same as that of Lemma~\ref{lem:any-frequency}. \end{proof}
\begin{lemma}\label{lem:ts-any-bias}
Suppose none of $\p_1,\dots,\p_l$ is an equalizer, and some round $1,\dots,l$
does not have a $\mathtt{mult}$ instruction.
For every $\th$ and every $(f,t)$ such that $\set{a,b}\incl
\operatorname{fire}_1((f,t),\p_1)$ or $\set{b,?}\incl\operatorname{fire}_1((f,t),\p_1)$ we have
$(f,t)\lact{\p_1}_1\dots\lact{\p_l}_l \mathit{bias}^{?}(\th)$. \end{lemma} \begin{proof}
The same argument as for Lemmas~\ref{lem:any-qbias-from-qbias}
and~\ref{lem:any-qbias-from-bias}, replacing Lemma~\ref{lem:any-frequency} with Lemma~\ref{lem:ts-any-frequency}.
\end{proof}
We can now deal with the case when there is a $\mathtt{mult}$ instruction in round $\mathbf{ir}$. This is an analog of Lemma~\ref{lem:no-mult-sequence}.
\begin{lemma} \label{lem:ts-min}
Suppose round $\mathbf{ir}$ either contains a $\mathtt{mult}$ instruction or $\thr_u^{\mathbf{ir}} < 1/2$.
Then either the algorithm violates consensus or we can remove the
$\mathtt{mult}$ instruction and make $\thr_u^{\mathbf{ir}} = 1/2$ without
affecting the semantics of the algorithm. \end{lemma}
\begin{proof}
Suppose round $\mathbf{ir}$ either contains a $\mathtt{mult}$ instruction or
$\thr_u^{\mathbf{ir}} < 1/2$. We consider two cases:
The first case is when there does not exist any tuple $(f,t)$ with
$(f,t) \lact{\overline{\p}_1}_1 f_1 \dots \lact{\overline{\p}_{\mathbf{ir}-2}}_{\mathbf{ir}-2} f_{\mathbf{ir}-2}$
such that $a,b \in \operatorname{fire}_{\mathbf{ir}-1}(f_{\mathbf{ir}-2},\overline{\p})$. It is then clear
that the $\mathtt{mult}$ instructions in round $\mathbf{ir}$ will never be fired
and so we can remove all these instructions in round $\mathbf{ir}$.
Further it is also clear that setting $\thr_u^{\mathbf{ir}} = 1/2$
does not affect the semantics of the algorithm in this case.
So it remains to examine the case when there exists a tuple $(f,t)$ with
$(f,t) \lact{\overline{\p}_1}_1 f_1\lact{\overline{\p}_2} \cdots\lact{\overline{\p}_{\mathbf{ir}-2}} f_{\mathbf{ir}-2}$ such that
$a,b \in \operatorname{fire}_{\mathbf{ir}-1}(f_{\mathbf{ir}-2},\overline{\p})$.
It is clear that in this case we also have
$(\mathit{bias}(\thr_u^1+\e),0) \lact{\overline{\p}_1}_1 f_1\lact{\overline{\p}_2} \cdots\lact{\overline{\p}_{\mathbf{ir}-2}} f_{\mathbf{ir}-2}$. Also we get
$f_{\mathbf{ir}-2} \lact{\overline{\p}_{\mathbf{ir}-1}}_{\mathbf{ir}-1} \mathit{bias}(\th)$ for arbitrary $\th$.
In this case, we will show the following:
Depending on the structure of rounds $\mathbf{ir}$ and $\mathbf{ir}+1$ we will define
two tuples $f_{\mathbf{ir}-1}$ and $f_{\mathbf{ir}}$ with the following properties:
\begin{itemize}
\item $f_{\mathbf{ir}-2} \lact{\overline{\p}_{\mathbf{ir}-1}}_{\mathbf{ir}-1} f_{\mathbf{ir}-1} \lact{\overline{\p}_{\mathbf{ir}}}_{\mathbf{ir}} f_{\mathbf{ir}}$,
\item $f_{\mathbf{ir}}$ contains no $?$ and at least one $a$,
\item either $a,b \in \operatorname{fire}_{\mathbf{ir}+1}(f_{\mathbf{ir}},\overline{\p}_{\mathbf{ir}+1})$
or $b,? \in \operatorname{fire}_{\mathbf{ir}+1}(f_{\mathbf{ir}},\overline{\p}_{\mathbf{ir}+1})$.
\end{itemize}
Notice that if $a,b \in \operatorname{fire}_{\mathbf{ir}+1}(f_{\mathbf{ir}},\overline{\p}_{\mathbf{ir}+1})$ and all rounds after
round $\mathbf{ir}$ have a $\mathtt{mult}$ instruction, then we can use Lemma~\ref{lem:any-frequency}
to conclude that we can decide on both $a$ and $b$.
In the other case, i.e., if some round after round $\mathbf{ir}$ does not have a $\mathtt{mult}$
instruction, or $b,? \in \operatorname{fire}_{\mathbf{ir}+1}(f_{\mathbf{ir}},\overline{\p}_{\mathbf{ir}+1})$
we use Lemmas~\ref{lem:any-qbias-from-bias} and~\ref{lem:any-qbias-from-qbias}
to show that we can make half the processes decide on $b$
and the other half undecided. Now the state of
the algorithm after this phase will be $(f_{\mathbf{ir}},1,\mathit{spread}^?)$
where $f_{\mathbf{ir}}$ contains at least one $a$.
Since all the $\mathtt{mult}$ instructions have $\mathrm{maxts}$ as operator
it follows that we can then get $\mathit{solo}^a$ after the first
round and decide on $a$.
So it remains to come up with $f_{\mathbf{ir}-1}$ and $f_{\mathbf{ir}}$ with the required properties.
We will do a case analysis, and for each case provide both these tuples.
In each of these cases, it can be easily verified that the provided tuples
satisfy the required properties.
\begin{itemize}
\item $\thr_u^{\mathbf{ir}} < 1/2$ or the $\mathtt{mult}$ instruction with the highest threshold
in round $\mathbf{ir}$ has $\mathrm{smor}$ as operator.
\begin{itemize}
\item The $\mathtt{mult}$ instruction with the highest threshold in round $\mathbf{ir}+1$
has $\mathrm{smor}$ as operator: Take $f_{\mathbf{ir}-1} = f_{\mathbf{ir}} = \mathit{spread}$.
\item Otherwise: Take $f_{\mathbf{ir}-1} = \mathit{spread}, f_{\mathbf{ir}} = \mathit{bias}(\max(\thr_u^{\mathbf{ir}+1},\mathit{thr}_{\mathbf{ir}+1}(\overline{\p}))+\e)$.
\end{itemize}
\item The $\mathtt{mult}$ instruction with the highest threshold in round $\mathbf{ir}$ has $\min$
as operator.
\begin{itemize}
\item The $\mathtt{mult}$ instruction with the highest threshold in round $\mathbf{ir}+1$
has $\mathrm{smor}$ as operator: Take $f_{\mathbf{ir}-1} = \mathit{bias}(\max(\thr_u^{\mathbf{ir}},\mathit{thr}_{\mathbf{ir}}(\overline{\p}))+\e), f_{\mathbf{ir}} = \mathit{spread}$.
\item Otherwise: Take $f_{\mathbf{ir}-1} = \mathit{bias}(\max(\thr_u^{\mathbf{ir}},\mathit{thr}_{\mathbf{ir}}(\overline{\p}))+\e)$,\\
$f_{\mathbf{ir}} =
\mathit{bias}(\max(\thr_u^{\mathbf{ir}+1},\mathit{thr}_{\mathbf{ir}+1}(\overline{\p}))+\e)$.
\end{itemize}
\end{itemize}
\end{proof}
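To illustrate the case analysis, consider a hypothetical instantiation of the last bullet (the numeric constants below are illustrative only, not taken from any concrete algorithm): the highest-threshold $\mathtt{mult}$ instructions of both rounds $\mathbf{ir}$ and $\mathbf{ir}+1$ have $\min$ as operator, with $\thr_u^{\mathbf{ir}}=0.6$, $\mathit{thr}_{\mathbf{ir}}(\overline{\p})=0.5$, $\thr_u^{\mathbf{ir}+1}=0.7$ and $\mathit{thr}_{\mathbf{ir}+1}(\overline{\p})=0.5$. The recipe then gives
\begin{equation*}
f_{\mathbf{ir}-1}=\mathit{bias}(0.6+\e),\qquad f_{\mathbf{ir}}=\mathit{bias}(0.7+\e),
\end{equation*}
and one can check the required properties: $f_{\mathbf{ir}}$ contains no $?$ and at least one $a$, the $b$'s alone fire $b$ in round $\mathbf{ir}+1$ through the $\mathtt{uni}$ instruction, and sending the whole tuple to the $\min$ instruction fires $a$.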
\begin{corollary} \label{cor:ts-min}
If round $\mathbf{ir}+1$ has a $\mathtt{mult}$ instruction or $\thr_u^{\mathbf{ir}+1} < 1/2$,
then the $\mathtt{mult}$ instruction can be removed and $\thr_u^{\mathbf{ir}+1}$ can be
made $1/2$ without altering the semantics of the algorithm. \end{corollary}
\begin{proof}
By the previous lemma, round $\mathbf{ir}$ does not have any $\mathtt{mult}$
instruction and $\thr_u^{\mathbf{ir}} \ge 1/2$.
It then follows that if $f \lact{\f}_{\mathbf{ir}} f'$ for an arbitrary predicate $\f$
then there cannot be both $a$ and $b$ in $f'$.
Hence the $\mathtt{mult}$ instruction in
round $\mathbf{ir}+1$ will never be fired.
Consequently, it can
be removed without affecting the correctness of the algorithm.
It is also clear that we can raise the value of $\thr_u^{\mathbf{ir}+1}$ to $1/2$
without affecting the semantics of the algorithm. \end{proof}
\begin{lemma} \label{lem:ts-constants}
If the property of constants from Definition~\ref{def:ts-structure}
is not satisfied, then agreement is violated.
\end{lemma}
\begin{proof}
The proof starts similarly to the one of Lemma~\ref{lem:constants}.
We consider an execution under the global predicate $\overline{\p}$, and employ
Lemmas~\ref{lem:ts-any-frequency} and~\ref{lem:ts-any-bias}.
We start from configuration $(\mathit{bias}(\th_1),0)$ where
$\th_1>\thr_u^1$ big enough so that by Lemma~\ref{lem:ts-bias-fire} we get
$\set{a,b} \incl\operatorname{fire}_1(\mathit{bias}(\th_1),\overline{\p})$.
We consider also $\th=\thr_u^{\mathbf{ir}+1}+\e$.
By Lemma~\ref{lem:ts-min} there is a preserving round before
round $\mathbf{ir}+1$.
We proceed differently depending on whether
$\thr_m^{1,k}<1-\thr_u^{\mathbf{ir}+1}$ or not.
By the same argument as in Lemma~\ref{lem:constants}, we can get
$\mathit{bias}^{?}(\th)$ or $\mathit{bias}^{?}_a(\th)$ after round $\mathbf{ir}$.
If $\thr_m^{1,k}<1-\thr_u^{\mathbf{ir}+1}$ then we choose to get $\mathit{bias}^{?}(\th)$.
We use Lemma~\ref{lem:any-qbias-from-bias} to make some processes decide on
$b$.
After this phase there are $1-\th$ processes with timestamp $0$.
We can ensure that among them there is at least one with value $a$ and one
with value $b$.
Since there is a $\mathtt{mult}$ instruction in the first round, in the next phase we
send all the values with timestamp $0$.
This way we get $\mathit{solo}^{a}$ after the first round, and make some process decide
on $a$.
The remaining case is when $\thr_m^{1,k}\geq 1-\thr_u^{\mathbf{ir}+1}$.
So we have $\thr_u^1<1-\thr_u^{\mathbf{ir}+1}$, since we have assumed that the
property of constants from Definition~\ref{def:ts-structure} does not hold.
This time we choose to get $\mathit{bias}^{?}_a(\th)$ after round $\mathbf{ir}$, and make some process
decide on $a$.
Since we have started with $\mathit{bias}(\th_1)$ we can arrange updates so that we
have at least $\min(\th_1,1-\th)$ processes who have value $b$ with timestamp
$0$.
But $\min(\th_1,1-\th)>\thr_u^1$, so by sending an $\mathsf{H}$ set consisting of these
$b$'s we reach $\mathit{solo}$ after the first round and make some processes decide on
$b$. \end{proof}
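For a sense of the arithmetic (with illustrative, hypothetical constants, and reading the property of constants as requiring $\thr_u^1\geq 1-\thr_u^{\mathbf{ir}+1}$ and $\thr_m^{1,k}\geq 1-\thr_u^{\mathbf{ir}+1}$, as these two inequalities are what the proof of Lemma~\ref{lem:ts-structure} below uses): take $\thr_u^{\mathbf{ir}+1}=0.6$, so the property asks for $\thr_u^1\geq0.4$ and $\thr_m^{1,k}\geq0.4$. Then
\begin{equation*}
\thr_m^{1,k}=0.3<1-\thr_u^{\mathbf{ir}+1}=0.4
\end{equation*}
puts us in the first branch of the proof above, while $\thr_m^{1,k}=0.5$ together with $\thr_u^1=0.3<0.4$ puts us in the second branch.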
Now we can state the sufficiency proof, similar to Lemma~\ref{lem:agreement}.
\begin{lemma}\label{lem:ts-structure}
If all the structural properties from Definition~\ref{def:ts-structure} are
satisfied then the algorithm satisfies agreement. \end{lemma} \begin{proof}
It is clear that the algorithm satisfies agreement when the initial frequency
is either $\mathit{solo}$ or $\mathit{solo}^a$. Suppose $(\mathit{bias}(\theta),t,d) \act{\p^*}
(\mathit{bias}(\theta'),t',d')$ such that $(\mathit{bias}(\theta'),t',d')$ is the first state in this execution
with a process $p$ which has decided on a value.
Without loss of generality let $b$ be the value that $p$ has decided on.
By Lemma~\ref{lem:ts-min} round $\mathbf{ir}$ does not have any
$\mathtt{mult}$ instructions and $\thr_u^{\mathbf{ir}} \ge 1/2$ and so it follows
that every other process could only decide
on $b$ or not decide at all. For the same reason it follows
that every process either updated its $\mathit{inp}$ value to $b$
or did not update its $\mathit{inp}$ value at all.
Further notice that since $b$ was decided by some
process, it has to be the case that more than $\thr_u^{\mathbf{ir}+1}$ processes have $b$ as
their $\mathit{inp}$ value, with the most recent phase as their timestamp.
This means that the number of $a$'s in the configuration is less than $1-\thr_u^{\mathbf{ir}+1}$. Moreover since every process either updated its
$\mathit{inp}$ to $b$ or did not update it at all, no process with $a$ has the latest
timestamp.
Since $\thr_u^1 \geq 1 - \thr_u^{\mathbf{ir}+1}$, it follows that $a$ cannot be fired
from $(\mathit{bias}(\theta'),t')$ using the $\mathtt{uni}$ instruction in the first round. Further
since $\thr_m^{1,k}\geq 1 - \thr_u^{\mathbf{ir}+1}$ it follows that any $\mathsf{H}$ set bigger
than $\thr_m^{1,k}$ has to contain a value with the latest timestamp.
Since the only value with the latest timestamp is the value $b$,
it follows that
$a$ cannot be fired from $(\mathit{bias}(\theta'),t')$ using the $\mathtt{mult}$ instruction as
well. In consequence, the number of $a$'s can only decrease from this point onwards and so it
follows that no process from this point onwards can decide on $a$.
\end{proof}
The proof of termination is simpler than the proof of termination for the core language. This is in part due to the use of $\mathrm{maxts}$ rather than $\mathrm{smor}$ as the operator in the first round.
\subsubsection*{Part 2: termination for timestamps}
\begin{lemma}\label{lem:ts-dec}
If $\p$ is a decider and $(\mathit{solo},t,\mathit{solo}^{?})\act{\p}(f',t',d')$
then $(f',d') = (\mathit{solo},\mathit{solo})$ for every timestamp tuple $t$. Similarly if
$(\mathit{solo}^a,t,\mathit{solo}^{?}) \act{\p}(f',t',d')$ then $(f',d') = (\mathit{solo}^a,\mathit{solo}^a)$. \end{lemma} \begin{proof}
Immediate. \end{proof}
\begin{lemma}\label{lem:strong-unifier}
Suppose $\p$ is a strong unifier. If $(\mathit{bias}(\th),t)\act{\p}(f,t')$
then $f=\mathit{solo}$ or $f=\mathit{solo}^a$ (for every timestamp tuple $t$). \end{lemma} \begin{proof}
We first observe that the value $?$ cannot be produced in the first round.
Since $\p$ is a strong unifier we have $\thr_m^{1,k}\leq \mathit{thr}_1(\p)$
and $\thr_u^1\leq \mathit{thr}_1(\p)$, so every $\mathsf{H}$ set above the threshold will
satisfy an instruction of the first round.
Let $i$ be the round such that $\p_i$ is an equalizer
and rounds $2,\dots,i$ are non-preserving.
This round exists by the definition of a unifier.
Thanks to the above, we know that till round $i$ we cannot produce $?$ under the predicate $\p$.
Because $\p_i$ is an equalizer, after round $i$ we either have $\mathit{solo}$ or $\mathit{solo}^a$.
This tuple stays till round $\mathbf{ir}$ as the rounds $i+1,\dots,\mathbf{ir}$ are solo-safe. \end{proof}
\begin{proof}\textbf{Main positive}
Suppose there is a strong unifier followed by a decider.
After the strong unifier we have $\mathit{solo}$ or $\mathit{solo}^{a}$ thanks to
Lemma~\ref{lem:strong-unifier}.
After the decider all processes decide thanks to Lemma~\ref{lem:ts-dec}.
\end{proof}
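Schematically, the two lemmas compose as follows. If $\p^l$ is the strong unifier and, say, $\p^{l+1}$ is the decider, then one possible terminating run (assuming no process decides already during the unifier phase) is
\begin{equation*}
(\mathit{bias}(\th),t,\mathit{solo}^{?})\act{\p^l}(\mathit{solo},t',\mathit{solo}^{?})\act{\p^{l+1}}(\mathit{solo},t'',\mathit{solo}),
\end{equation*}
or the analogous run through $\mathit{solo}^a$, after which every process has decided.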
\subsubsection*{Part 3: Non-termination for timestamps}
\begin{lemma}\label{lem:ts-not-decider}
If $\p$ is not a decider then $(\mathit{solo},t)\act{\p}(\mathit{solo},t)$ and $(\mathit{solo}^a,t)\act{\p}(\mathit{solo}^a,t)$ for any timestamp $t$. \end{lemma} \begin{proof}
If $\p$ is not a decider then there is a round (say $i$) that is not solo-safe.
So from both $\mathit{solo}$ and $\mathit{solo}^{a}$
we can reach the tuple $\mathit{solo}^{?}$ after round $i$.
From $\mathit{solo}^{?}$ no process can decide. \end{proof}
\begin{lemma} \label{lem:ts-not-str-uni}
If $\p$ is not a strong unifier then $(\mathit{bias}(\th),i) \act{\p} (\mathit{bias}(\th),j)$
is possible (for large enough $\th$, arbitrary $i$, and some $j$). \end{lemma}
\begin{proof}
Let $\th = \max(\thr_u^1,\mathit{thr}_1(\p))+\e$ for small enough $\e$, so that $a,b \in \operatorname{fire}_1((\mathit{bias}(\th),i),\p)$ by Lemma~\ref{lem:ts-bias-fire}. Suppose $\p$ is not a strong unifier. We do a
case analysis.
Suppose $\mathit{thr}_1(\p) < \thr_m^{1,k}$ or $\mathit{thr}_1(\p) < \thr_u^1$.
Clearly we can get $\mathit{solo}^?$ as the tuple after the first round and then
use this to not decide on anything and retain the input tuple.
Suppose $\p$ does not have an equalizer. We can then
apply Lemmas~\ref{lem:ts-any-bias}
and~\ref{lem:ts-min}
to get $\mathit{solo}^{?}$ after round $\mathbf{ir}$
and so we are done, because nothing is changed in the phase.
Suppose the $k$-th component of $\p$ is an equalizer and suppose
there is a preserving round before round $k$ (it can be round 1 as well).
Let the first preserving round before round $k$ be round $l$.
Since no round before round $l$ is preserving, it follows that
all these rounds have $\mathtt{mult}$ instructions.
Hence by Lemma~\ref{lem:ts-any-frequency} we can get to $\mathit{bias}(\th')$
(where $\th' > \max(\thr_u^l,\mathit{thr}_l(\p))$) before round $l$
(Notice that if $l = 1$ then we
need to reach $\mathit{bias}(\th')$ with $\th' > \max(\thr_u^1,\mathit{thr}_1(\p))$, which is where we start).
It is clear that $\mathit{bias}(\th') \lact{\p_l}_l \mathit{solo}^{?}$.
We can then propagate
$\mathit{solo}^{?}$ all the way down to get the phase transition $(\mathit{bias}(\th),i) \act{\p}
(\mathit{bias}(\th),i)$.
Suppose the $k$-th component of $\p$ is an equalizer and suppose
there is a non solo-safe round $l$ after $k$. It is
clear that we can reach $\mathit{solo}$ after round $k$ and using this get $\mathit{solo}^{?}$
after round $l$. Hence we once again get the phase transition $(\mathit{bias}(\th),i)
\act{\p} (\mathit{bias}(\th),i)$. \end{proof}
\begin{proof}
\textbf{Main non-termination}
We show that if there is no strong unifier followed by a decider, then the
algorithm may not terminate. We start with $(\mathit{bias}(\th),0)$ where $\th$
is large enough. If $\p$ is not a strong unifier then by
Lemma~\ref{lem:ts-not-str-uni} $(\mathit{bias}(\th),i) \act{\p} (\mathit{bias}(\th),j)$
is possible for arbitrary $i$, and some $j$.
Hence if there is no strong unifier the algorithm will not terminate.
Otherwise let $\p^l$ be the first strong unifier. Notice that $\p^l$ is not
the global predicate as we have assumed the global predicate does not have
equalizers.
Till $\p^l$ we can maintain $\mathit{bias}(\th)$ thanks to
Lemma~\ref{lem:ts-not-str-uni}.
Suppose $\p^l$ is not a decider. By Lemma~\ref{lem:strong-unifier} the
state of $\mathit{inp}$
after this phase will become $\mathit{solo}$ or $\mathit{solo}^a$. However, since $\p^l$ is not a
decider, we can choose to not decide on any value. Hence we get the transition
$(\mathit{bias}(\th),i) \act{\p^l} (\mathit{solo},i+1)$ (the case of $\mathit{solo}^a$ is analogous). Now, since none of the
$\p^{l+1},\dots,\p^k$ and neither the global predicate $\p$ are deciders,
by Lemma~\ref{lem:ts-not-decider} we can have a transition where no decision
happens.
Hence the algorithm does not terminate if there is no decider after a strong unifier. \end{proof}
We give a proof of Theorem~\ref{thm:coordinators}. The structure of the proof is quite similar to the previous cases.
\subsubsection*{Part 1: Structural properties for coordinators}
\begin{lemma}\label{lem:c-no-uni}
If there is a round without a $\mathtt{uni}$ instruction then the algorithm does not terminate. \end{lemma} \begin{proof}
We get $\mathit{solo}^{a}\act{}\mathit{solo}^{a}$ for every communication predicate. \end{proof}
Compared to the core language, it is not easy to see that the first round of an algorithm with coordinators should have a $\mathtt{mult}$ instruction. However, this is indeed the case as we prove later. For the moment we make an observation.
\begin{lemma} \label{lem:co-weak-mult}
If the first round is not of type $\mathtt{ls}$ then the first round should have a $\mathtt{mult}$ instruction. \end{lemma}
\begin{proof}
Otherwise we have $\mathit{spread} \act{\p} \mathit{spread}$ for arbitrary communication
predicate $\p$. \end{proof}
Before considering the remaining structural requirements we state some useful lemmas.
\begin{lemma}\label{lem:co-no-mult-fire}
If round $k$ is not of
type $\mathtt{ls}$ and does not have a $\mathtt{mult}$ instruction then for all sufficiently big
$\th$ we have $\set{b,?}\incl \operatorname{fire}_k(\mathit{bias}(\th),\f)$, for arbitrary predicate
$\f$. \end{lemma} \begin{proof}
Take $\th >\max(\thr_u^k,\mathit{thr}_k(\f))$.
We have $b\in\operatorname{fire}_k(\mathit{bias}(\th),\f)$ because of the $\mathtt{uni}$ instruction.
We have $?\in\operatorname{fire}_k(\mathit{bias}(\th),\f)$ because there is no $\mathtt{mult}$ instruction. \end{proof}
\begin{lemma}\label{lem:co-bias-fire}
Suppose the first round has a $\mathtt{mult}$ instruction with
$\min$ as operation or is of type $\mathtt{ls}$. Then for the global predicate $\overline{\p}$, we have $\set{a,b}\incl\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$ for
sufficiently big $\th$. \end{lemma}
\begin{proof}
The claim is clear when the first round is of type $\mathtt{ls}$. Suppose the first round has a $\mathtt{mult}$ instruction with $\min$ as operation.
Let $I$ be that instruction and let $\mathit{thr}^I$ be the threshold
value appearing in instruction $I$.
Let $\th > \thr_u^1$. Notice that $b \in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$
because of the $\mathtt{uni}$ instruction in the first round.
Further notice that, from $\mathit{bias}(\th)$ we can construct
a multi-set $\mathsf{H}$ having at least one $a$ and of size just above $\mathit{thr}^I$.
Since $\overline{\p}$ is the global predicate, we know that this multi-set satisfies $\overline{\p}$
because of assumption~\eqref{eq:syntactic-property}.
Further it is clear that $a = \mathtt{update}_1(\mathsf{H})$ and so
we have $a \in \operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$. \end{proof}
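As a numeric illustration of the $\mathsf{H}$-set construction (the constants are hypothetical): let $\mathit{thr}^I=0.3$ and $\thr_u^1=0.6$, and take $\th=0.6+\e$. Then $\mathit{bias}(\th)$ contains $\th$ many $b$'s and $1-\th$ many $a$'s, so we can form an $\mathsf{H}$ set of size just above $0.3$ consisting of one $a$ and otherwise $b$'s; it satisfies the instruction $I$ under the global predicate, and
\begin{equation*}
\mathtt{update}_1(\mathsf{H})=\min(\mathsf{H})=a ,
\end{equation*}
so $a\in\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$; the multiset of all $\th$ many $b$'s is above $\thr_u^1$ and fires $b$.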
\begin{lemma}\label{lem:co-spread}
Suppose in the first round all $\mathtt{mult}$ instructions have $\mathrm{smor}$ as
operation.
Then for every predicate $\p$ we have
$\set{a,b}\incl\operatorname{fire}_1(\mathit{spread},\p)$.
\end{lemma} \begin{proof}
Same proof as Lemma~\ref{lem:spread}. \end{proof}
\begin{lemma}\label{lem:co-bias-propagation}
Suppose none of $\p_k,\dots,\p_l$ is a c-equalizer.
Suppose round $l$ is not of type $\mathtt{lr}$.
Then there is $\th$ with $\mathit{bias}^{?}(\th)\lact{\p_k}_k\dots\lact{\p_l}_l\mathit{bias}^{?}(\th')$ for arbitrary $\th'$. \end{lemma}
\begin{proof}
If the $k^{th}$ round is a $\mathtt{ls}$ round, consider arbitrary $\th$. By
definition of transitions, we get
$\mathit{bias}^{?}(\th)\lact{\p_k}_k\mathit{bias}^{?}(\th')$ for arbitrary $\th'$.
If the $k^{th}$ round is a $\mathtt{lr}$ round, take $\th=\thr_u^k+\e$ for small $\e$.
We can get $b$ from $\mathit{bias}^{?}(\th)$ because of the $\mathtt{uni}$ instruction.
Since this is an $\mathtt{lr}$ round we have $\mathit{bias}^{?}(\th) \lact{\p_k}_k \mathit{one}_b$.
This round must be followed by an $\mathtt{ls}$ round, so the argument from the previous
paragraph applies, and we can get arbitrary $\mathit{bias}^{?}(\th')$ after round $k+1$.
Otherwise $k^{th}$ round is neither $\mathtt{ls}$ nor $\mathtt{lr}$.
By Lemma~\ref{lem:any-qbias-from-qbias}, we can get arbitrary $\mathit{bias}^{?}(\th')$
after round $k$.
We can repeat this argument till round $l$. \end{proof}
\begin{lemma}\label{lem:co-any-frequency}
Suppose none of $\p_k,\dots,\p_l$ is a c-equalizer, and all rounds $k\dots l$
have $\mathtt{mult}$ instructions. Suppose round $l$ is not of type $\mathtt{lr}$.
For every $f$ and every $f'$ without $?$ such that $\mathit{set}(f')\incl
\operatorname{fire}_k(f,\p_k)$ we have $f\lact{\p_k}_k\dots\lact{\p_l}_l f'$. \end{lemma} \begin{proof}
Notice that since all the considered rounds have $\mathtt{mult}$ instructions,
none of these rounds are of type $\mathtt{ls}$ by assumption on
page~\pageref{assumption-ls-lr}.
Further, since every $\mathtt{lr}$ round is followed by a $\mathtt{ls}$ round, it follows that
we have only two cases:
either all rounds $k,\dots,l$ are of type $\mathtt{every}$, or
rounds $k,\dots,l-1$ are of type $\mathtt{every}$ and round $l$ is of type $\mathtt{lr}$.
Since the second case is excluded by assumption, we only have the
first case which holds by Lemma~\ref{lem:any-frequency}. \end{proof}
\begin{lemma}\label{lem:co-any-bias}
Suppose none of $\p_k,\dots,\p_l$ is a c-equalizer, and some round $k,\dots,l$
does not have a $\mathtt{mult}$ instruction.
Suppose round $l$ is not of type $\mathtt{lr}$.
For every $\th$ and every $f$ such that $\set{a,b}\incl
\operatorname{fire}_k(f,\p_k)$ we have $f\lact{\p_k}_k\dots\lact{\p_l}_l \mathit{bias}^{?}(\th)$,
and $f\lact{\p_k}_k\dots\lact{\p_l}_l \mathit{bias}^{?}_a(\th)$. \end{lemma} \begin{proof}
Let $i$ be the first round without a $\mathtt{mult}$ instruction.
There are two cases.
Suppose rounds $k,\dots,i-1$ are all of type $\mathtt{every}$.
In this case we use Lemma~\ref{lem:co-any-frequency} to reach any
$\mathit{bias}(\theta')$ before round $i$.
If round $i$ is of type $\mathtt{every}$ then we can use
Lemma~\ref{lem:co-no-mult-fire}
to get arbitrary $\mathit{bias}^{?}(\th'')$ after round $i$.
If round $i$ is of type $\mathtt{lr}$, then round $i+1$ is of type $\mathtt{ls}$ and
so we can use Lemma~\ref{lem:co-no-mult-fire} to get
$\mathit{one}_b$ after round $i$ and (since $\p_{i+1}$ is not a c-equalizer)
then use that to get arbitrary $\mathit{bias}^{?}(\th)$
after round $i+1$.
If round $i$ is of type $\mathtt{ls}$, then since $\p_i$ is not a c-equalizer
we can get arbitrary $\mathit{bias}^{?}(\th)$ after round $i$.
We can then use Lemma~\ref{lem:co-bias-propagation} to finish
the proof.
In the remaining case, by the same reasoning as in the previous lemma we see
that all rounds $k,\dots,i-2$ must be of type $\mathtt{every}$, and round $i-1$ must be
of type $\mathtt{lr}$.
We can use Lemma~\ref{lem:co-any-frequency} to reach $\mathit{bias}(\max(\thr_u^{i-1},\mathit{thr}(\p_{i-1}))+\e)$
before round $i-1$ and then using that reach $\mathit{one}_b$
before round $i$. Since $i-1$ is of type $\mathtt{lr}$, $i$ is of type
$\mathtt{ls}$ and since there are no c-equalizers we can get
arbitrary $\mathit{bias}^{?}(\th)$ after round $i$.
We can then use Lemma~\ref{lem:co-bias-propagation} to finish
the proof.
\end{proof}
\begin{lemma}\label{lem:co-no-mult-fo}
If round $\mathbf{ir}+1$ contains a $\mathtt{mult}$ instruction then the algorithm
does not satisfy agreement, or it can be removed without altering the
semantics of the algorithm. \end{lemma}
\begin{proof}
Suppose round $\mathbf{ir}+1$ contains a $\mathtt{mult}$ instruction.
Recall that this implies that round ${\mathbf{ir}+1}$ is not of type $\mathtt{ls}$ (cf.\
assumption on page~\pageref{assumption-ls-lr}).
Recall that $\overline{\p}$ denotes the global predicate.
The first case is when there does not exist any tuple $f$ having an execution
$f \lact{\overline{\p}_1}_1 f_1 \dots \lact{\overline{\p}_{\mathbf{ir}-1}}_{\mathbf{ir}-1} f_{\mathbf{ir}-1}$
with $a,b \in \operatorname{fire}_\mathbf{ir}(f_{\mathbf{ir}-1},\overline{\p})$. It is then clear
that the $\mathtt{mult}$ instructions in round $\mathbf{ir}+1$ will never be fired
and so we can remove all these instructions in round $\mathbf{ir}+1$.
So it remains to examine the case when there exists a tuple $f$ with
$f \lact{\overline{\p}_1}_1 f_1\lact{\overline{\p}_2} \cdots\lact{\overline{\p}_{\mathbf{ir}-1}} f_{\mathbf{ir}-1}$ such that
$a,b \in \operatorname{fire}_{\mathbf{ir}}(f_{\mathbf{ir}-1},\overline{\p}_{\mathbf{ir}})$.
Notice that in this case, the first round cannot be of type $\mathtt{ls}$.
Since round $\mathbf{ir}$ cannot be of type $\mathtt{lr}$ (cf. assumption on
page~\pageref{assumption-ls-lr}) we can get
$f_{\mathbf{ir}-1} \lact{\overline{\p}_{\mathbf{ir}}}_{\mathbf{ir}} \mathit{bias}(\th)$ for arbitrary $\th$.
We now consider two cases: Suppose round $\mathbf{ir}+1$ is of type $\mathtt{every}$.
Then we can proceed exactly as the proof of Lemma~\ref{lem:no-mult-sequence}
and show that agreement is not satisfied.
Suppose round $\mathbf{ir}+1$ is of type $\mathtt{lr}$. Hence round $\mathbf{ir}+2$ is
of type $\mathtt{ls}$.
Let $I$ be the $\mathtt{mult}$ instruction in round $\mathbf{ir}+1$ with the
highest threshold value.
Suppose $I$ has $\mathrm{smor}$ as its operation. Then we consider
$f_{\mathbf{ir}-1} \lact{\overline{\p}_{\mathbf{ir}}}_{\mathbf{ir}} \mathit{spread}$.
Since $I$ has $\mathrm{smor}$ as operation, it is easy to see that
$\mathit{spread} \lact{\overline{\p}_{\mathbf{ir}+1}}_{\mathbf{ir}+1} \mathit{one}_b$.
Since $\overline{\p}$ is the global predicate, $\overline{\p}_{\mathbf{ir}+2}$ is not a c-equalizer,
and so we get $\mathit{one}_b \lact{\overline{\p}_{\mathbf{ir}+2}}_{\mathbf{ir}+2} \mathit{bias}^{?}(\th')$
for arbitrary $\th'$. We can
then use Lemma~\ref{lem:co-bias-propagation} to conclude that
we can make one process decide on $b$ and leave the rest
undecided.
In the next phase, the state of $\mathit{inp}$ is $\mathit{spread}$.
We know, by Lemma~\ref{lem:co-weak-mult} that the first round has a $\mathtt{mult}$
instruction (since as observed above, the first round
is not $\mathtt{ls}$ in this case).
This instruction has $\mathrm{smor}$ or $\min$ as its operation;
in either case it is clear that $a \in \operatorname{fire}_1(\mathit{spread},\overline{\p})$, and
so we can get $\mathit{solo}^a$ after the first round and decide on $a$.
Suppose $I$ has $\min$ as its operation. Then we consider
$f_{\mathbf{ir}-1} \lact{\overline{\p}_{\mathbf{ir}}}_{\mathbf{ir}} \mathit{bias}(\th)$ where $\th > \thr_u^1$
is sufficiently big.
If we send the entire tuple as a HO set, we can fire $a$.
Hence we get $\mathit{bias}(\th) \lact{\overline{\p}_{\mathbf{ir}+1}}_{\mathbf{ir}+1} \mathit{one}_a$.
As in the previous case this allows us to make one process decide on $a$.
Note that the state of $\mathit{inp}$ will be $\mathit{bias}(\th)$
after the end of the phase.
Since the first round has a $\mathtt{uni}$ instruction (Lemma~\ref{lem:c-no-uni}), and
since $\th > \thr_u^1$ (and $\thr_u^1 \ge \mathit{thr}_1(\overline{\p})$ by equation~\ref{eq:c-syntactic-property}), we can get $\mathit{solo}$ after the first round
and decide on $b$. \end{proof}
\begin{lemma} \label{lem:co-ls}
If the first round is of type $\mathtt{ls}$ or has a $\mathtt{mult}$ instruction with $\min$ as operation, then the algorithm does not solve agreement. \end{lemma}
\begin{proof}
Suppose that indeed in the first round we have a $\mathtt{mult}$ instruction with
$\min$ operation or the first round is of type $\mathtt{ls}$.
We execute the phase under the global predicate $\overline{\p}$.
By Lemma~\ref{lem:co-bias-fire} we have $\set{a,b}\incl
\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$, for some sufficiently big $\th$.
Consider $\th_{\mathbf{ir}+1}=\max(\thr_u^{\mathbf{ir}+1},\mathit{thr}_{\mathbf{ir}+1}(\overline{\p}))+\e$
for some small $\e$.
Thanks to our proviso, the global predicate does not have a c-equalizer,
hence we can freely apply Lemmas~\ref{lem:co-any-frequency}
and~\ref{lem:co-any-bias} to get $\mathit{bias}(\th_{\mathbf{ir}+1})$ or
$\mathit{bias}^{?}(\th_{\mathbf{ir}+1})$ after round $\mathbf{ir}$.
By Lemma~\ref{lem:co-no-mult-fo}, there is no $\mathtt{mult}$ instruction in round
${\mathbf{ir}+1}$.
Hence $\set{b,?}\incl\operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}(\th_{\mathbf{ir}+1}),\overline{\p})$.
We can apply Lemma~\ref{lem:co-bias-propagation} to set $\mathit{dec}$ of
one process to $b$ in this phase and leave the other processes undecided.
Moreover, in round $\mathbf{ir}$ the variable $\mathit{inp}$ is set to $\mathit{bias}(\max(\th,\th_{\mathbf{ir}+1}))$.
In the next phase, Lemma~\ref{lem:co-bias-fire} says that $\set{a,b}\incl
\operatorname{fire}_1(\mathit{bias}(\max(\th,\th_{\mathbf{ir}+1})),\overline{\p})$.
We can get $\mathit{solo}^a$ as the tuple after the first round under the global predicate, hence we can set some $\mathit{dec}$ to $a$. \end{proof}
\begin{lemma}
If the first round does not have a $\mathtt{mult}$ instruction then the algorithm does not terminate. \end{lemma}
\begin{proof}
Since the first round does not have type $\mathtt{ls}$, if there are no $\mathtt{mult}$
instructions in the first round, then we have $\mathit{spread} \lact{\f}_1
\mathit{solo}^{?}$ for any predicate $\f$. \end{proof}
\begin{lemma} \label{lem:co-ls-fo}
If round $\mathbf{ir}+1$ is of type $\mathtt{ls}$, then the algorithm does not solve consensus. \end{lemma}
\begin{proof}
Suppose round $\mathbf{ir}+1$ is of type $\mathtt{ls}$.
We consider an execution of a phase under the
global predicate and so we can freely use Lemmas~\ref{lem:co-any-frequency}
and~\ref{lem:co-any-bias}.
We have seen in Lemma~\ref{lem:co-ls} that in the first round all the
$\mathtt{mult}$ instructions must be $\mathrm{smor}$. We start with $\mathit{spread}$.
We can then use Lemmas~\ref{lem:co-any-frequency}
and~\ref{lem:co-any-bias}
to get $\mathit{spread}$ or $\mathit{spread}^?$ after round $\mathbf{ir}$.
In either case, because round $\mathbf{ir}+1$ is of type $\mathtt{ls}$
and the global predicate does not have c-equalizers,
it follows that we can get $\mathit{bias}^{?}(\th)$ for arbitrary $\th$
after round $\mathbf{ir}+1$. Applying Lemma~\ref{lem:co-bias-propagation}
we can make one process decide on $b$ and prevent the
other processes from deciding.
Notice that the state of $\mathit{inp}$ in the next phase is still $\mathit{spread}$.
By Lemma~\ref{lem:co-spread} we have that $a \in \operatorname{fire}_1(\mathit{spread},\p)$.
Hence we can get $\mathit{solo}^a$ after the first round and use this
to make the undecided processes decide on $a$. \end{proof}
\begin{lemma} \label{lem:co-constants}
If the property of the constants is not satisfied, then the algorithm does not solve consensus. \end{lemma}
\begin{proof}
We consider an execution of a phase under the
global predicate and so we can freely use Lemmas~\ref{lem:co-any-frequency}
and~\ref{lem:co-any-bias}.
We have seen in Lemma~\ref{lem:co-ls} that in the first round all the
$\mathtt{mult}$ instructions must be $\mathrm{smor}$. We start with $\mathit{spread}$.
We have two cases, that resemble those of Lemma~\ref{lem:constants}.
The first case is when all the rounds $1,\dots,\mathbf{ir}$ are non-c-preserving.
Since we consider the global predicate, there are no c-equalizers, so none of
these rounds is an $\mathtt{ls}$ round.
This implies that all these rounds have a $\mathtt{mult}$ instruction.
We take $\th_{\mathbf{ir}+1}=\thr_u^{\mathbf{ir}+1}+\e$.
From Lemma~\ref{lem:co-any-frequency} we can get $\mathit{bias}(\th_{\mathbf{ir}+1})$ as a tuple
before the round $\mathbf{ir}+1$.
By observation~\eqref{eq:syntactic-property}, we have
$\thr_u^{\mathbf{ir}+1}\geq\mathit{thr}_{\mathbf{ir}+1}(\overline{\p})$, so $b\in\operatorname{fire}_{{\mathbf{ir}+1}}(\mathit{bias}(\th_{\mathbf{ir}+1}),\overline{\p})$.
We also have $?\in\operatorname{fire}_{{\mathbf{ir}+1}}(\mathit{bias}(\th_{\mathbf{ir}+1}),\overline{\p})$, because
round $\mathbf{ir}+1$ does not have any $\mathtt{mult}$ instructions.
Then we can apply Lemma~\ref{lem:co-bias-propagation} to set $\mathit{dec}$ of
one process to $b$ in this round and leave the other processes undecided.
The second case is when there is a c-preserving round among $1,\dots,\mathbf{ir}$.
Let $j\leq \mathbf{ir}$ be the first such round.
Since all rounds before $j$ are non-c-preserving, by
Lemma~\ref{lem:co-any-frequency}, we can get $\mathit{bias}(\thr_u^j+\e)$ before
round $j$.
Since $j$ is c-preserving, it is either of type $\mathtt{ls}$, or has
no $\mathtt{mult}$ instructions or
$\mathit{thr}_j(\overline{\p})<\max(\thr_u^j,\mathit{thr}^{j,k}_m)$.
In the first case, since the global predicate does not have c-equalizers, we
have $\set{b,?}\incl\operatorname{fire}_j(\mathit{bias}(\thr_u^j+\e),\overline{\p})$.
In the other cases the type of round $j$ can be $\mathtt{every}$ or $\mathtt{lr}$.
For type $\mathtt{every}$ we also get $\set{b,?}\incl\operatorname{fire}_j(\mathit{bias}(\thr_u^j+\e),\overline{\p})$.
For type $\mathtt{lr}$, we have that the round $j+1$ is $\mathtt{ls}$.
Since we have assumed that $\overline{\p}$ does not have a c-equalizer, $\overline{\p}_{j+1}$ does
not have $\f_\mathtt{ls}$.
So we can get $\mathit{bias}^{?}(\th)$ for arbitrary $\th$ after round $j+1$.
After all these cases we can use Lemma~\ref{lem:co-bias-propagation} to get
$\mathit{bias}^{?}(\th_{\mathbf{ir}+1})$ before round ${\mathbf{ir}+1}$; where as before $\th_{\mathbf{ir}+1}=\thr_u^{\mathbf{ir}+1}+\e$.
As in the first case, we employ Lemma~\ref{lem:co-bias-propagation} to make
some process decide on $b$ and leave other processes undecided.
In both cases we can arrange the execution so that the state of
$\mathit{inp}$ after this phase is $(\mathit{bias}(\th_{\mathbf{ir}+1}),\mathit{spread}^{?})$ or
$(\mathit{spread},\mathit{spread}^{?})$.
The same argument as in Lemma~\ref{lem:constants} shows that some process can
decide on $a$ in the next phase. \end{proof}
\begin{lemma} \label{lem:co-suff}
If all the structural properties are satisfied then the algorithm satisfies agreement. \end{lemma}
\begin{proof}
It is clear that the algorithm satisfies agreement when the initial frequency is either $\mathit{solo}$ or $\mathit{solo}^a$. Suppose $(\mathit{bias}(\theta),d) \act{\p^*} (\mathit{bias}(\theta'),d')$ such that for the first time in this transition sequence, some process has decided (say the process has decided on $a$). Since $\thr_m^{1,k}/2 \geq 1-\thr_u^{\mathbf{ir}+1}$ and $\thr_m^{1,k}\leq 1$, we have that $\thr_u^{\mathbf{ir}+1} \ge 1/2$.
Further, since round $\mathbf{ir}+1$ does not have any $\mathtt{mult}$ instructions (Lemma~\ref{lem:co-no-mult-fo}), it follows that every other process could only decide on $a$ or not decide at all.
Further notice that since $a$ was decided by some process and since
round $\mathbf{ir}+1$ is not of type $\mathtt{ls}$ (Lemma~\ref{lem:co-ls-fo}),
it has to be the case that at least $\thr_u^{\mathbf{ir}+1}$ processes have $a$ as their $\mathit{inp}$ value. Hence $\theta' < 1 - \thr_u^{\mathbf{ir}+1}$.
Recall that the first round is not of type $\mathtt{ls}$ (Lemma~\ref{lem:co-ls}).
Since $\theta' < 1 - \thr_u^{\mathbf{ir}+1} \le \thr_u^1$, it follows that $b$ cannot be fired from $\mathit{bias}(\theta')$ using the $\mathtt{uni}$ instruction in the first round.
Since $\theta' < 1 - \thr_u^{\mathbf{ir}+1} \le \thr_m^{1,k}/2$ and since every $\mathtt{mult}$ instruction in the first round has $\mathrm{smor}$ as its operator, it follows that
$b$ cannot be fired from $\mathit{bias}(\theta')$ using the $\mathtt{mult}$ instruction as well.
Hence the number of $b$'s can only decrease from this point onwards, and so it follows that no process from this point onwards can decide on $b$. \end{proof}
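As a numeric check of the threshold arithmetic (with hypothetical constants): take $\thr_u^{\mathbf{ir}+1}=0.7$, $\thr_m^{1,k}=0.8$ and $\thr_u^1=0.5$. Then
\begin{equation*}
\theta' < 1-\thr_u^{\mathbf{ir}+1} = 0.3 \leq \thr_m^{1,k}/2 = 0.4 ,
\end{equation*}
so the $b$'s are too few for the $\mathtt{uni}$ instruction ($0.3\leq 0.5=\thr_u^1$), and too few to form a majority in any $\mathsf{H}$ set of size above $\thr_m^{1,k}=0.8$, which would require more than $0.4$ many $b$'s.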
\subsubsection*{Part 2: termination for coordinators}
\begin{lemma}\label{lem:co-fire-one}
For the global predicate $\overline{\p}$: $a\in\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$ iff $\th<\bar{\thr}$.
(Similarly $b\in\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$ iff $1-\bar{\thr}<\th$). \end{lemma} \begin{proof}
Since the first round cannot be a $\mathtt{ls}$ round (Lemma~\ref{lem:co-ls}), the
proof of this lemma
is the same as that of Lemma~\ref{lem:fire-one}. \end{proof}
\begin{corollary}\label{cor:co-no-a-above-bthr}
For every predicate $\p$, if $\th\geq\bar{\thr}$ then $a\not\in\operatorname{fire}_1(\mathit{bias}(\th),\p)$.
Similarly if $\th\leq 1-\bar{\thr}$ then $b \not \in\operatorname{fire}_1(\mathit{bias}(\th),\p)$. \end{corollary}
\begin{lemma}\label{lem:co-unifier}
Suppose $\p$ is a unifier and $\mathit{bias}(\th)\act{\p}f$.
If $\thr_u^1\leq \thr_m^{1,k}$ or $1-\bar{\thr}\le\th\le\bar{\thr}$
then $f=\mathit{solo}$ or $f=\mathit{solo}^a$. \end{lemma} \begin{proof}
The argument is the same as in Lemma~\ref{lem:unifier}, as the first round
cannot be of type $\mathtt{ls}$. \end{proof}
\begin{lemma}\label{lem:co-decider}
If $\p$ is a decider and $(\mathit{solo},\mathit{solo}^?)\act{\p}(f',d')$
then $(f',d') = (\mathit{solo},\mathit{solo})$.
Similarly, if $(\mathit{solo}^a,\mathit{solo}^?) \act{\p}(f',d')$ then $(f',d') =
(\mathit{solo}^a,\mathit{solo}^a)$.
In case $\thr_m^{1,k}\leq \thr_u^1$, for every $\th\geq\bar{\thr}$: if
$(\mathit{bias}(\th),\mathit{solo}^?)\act{\p}(f',d')$ then $(f',d')=(\mathit{solo},\mathit{solo})$
and for every $\th\leq 1-\bar{\thr}$: if
$(\mathit{bias}(\th),\mathit{solo}^?)\act{\p}(f',d')$ then $(f',d')=(\mathit{solo}^a,\mathit{solo}^a)$. \end{lemma} \begin{proof}
The same as in the case of the core language. \end{proof}
\begin{lemma}\label{lem:co-main-positive}
If an algorithm in the coordinator language has the structural properties from
Definition~\ref{def:structure}, and satisfies condition cT1, then it solves consensus. \end{lemma} \begin{proof}
The same as for the core language. \end{proof}
\subsubsection*{Part 3: non-termination for coordinators}
\begin{lemma}\label{lem:co-not-decider}
If $\p$ is not a c-decider then
$\mathit{solo}\act{\p}\mathit{solo}$ and $\mathit{solo}^a \act{\p} \mathit{solo}^a$; namely, no process may decide. \end{lemma} \begin{proof}
If $\p$ is not a c-decider then there is a round, say $i$, that is not c-solo-safe.
By definition this means that either round $i$ is of type $\mathtt{ls}$ with $\p_i$
not containing $\f_\mathtt{ls}$ or it has one of the two other types and $\mathit{thr}_i(\p)< \thr_u^i$.
It is then easy to verify that for $j < i$, $\mathit{solo} \lact{\p_j}_j \mathit{solo}$,
$\mathit{solo} \lact{\p_i}_i \mathit{solo}^?$ and $\mathit{solo}^? \lact{\p_k}_k \mathit{solo}^?$ for $k > i$. This
ensures that no process decides during this phase.
A similar proof holds when the $\mathit{inp}$ tuple is $\mathit{solo}^a$. \end{proof}
\begin{lemma}\label{lem:co-global-th}
For the global predicate $\overline{\p}$: if $1/2\leq \th\leq \bar{\thr}$ then
$\mathit{bias}(\th)\act{\overline{\p}}\mathit{bias}(\th')$ for every $\th'\geq 1/2$. \end{lemma} \begin{proof}
The proof follows the same argument as in Lemma~\ref{lem:global-th}.
There are some complications due to new types of rounds.
As in Lemma~\ref{lem:global-th} we start by observing that $\set{a,b}\incl
\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$.
This follows, as we have observed that the first round cannot be of type
$\mathtt{ls}$.
If there are $\mathtt{mult}$ instructions in rounds $2,\dots,\mathbf{ir}$ then
Lemma~\ref{lem:co-any-frequency} ensures that for arbitrary $\th'$ we can get
$\mathit{bias}(\th')$ after round $\mathbf{ir}$ (we have observed that round $\mathbf{ir}$ cannot be
of type $\mathtt{lr}$).
Since there is no $\mathtt{mult}$ instruction in round ${\mathbf{ir}+1}$ (Proviso~\ref{proviso}
and Lemma~\ref{lem:co-no-mult-fo}) we can get $\mathit{solo}^{?}$ after round ${\mathbf{ir}+1}$ and
decide on nothing.
Hence we are done in this case.
If some round $2,\dots,\mathbf{ir}$ does not have a $\mathtt{mult}$ instruction then we can
use Lemma~\ref{lem:co-any-bias} to get $\mathit{bias}^{?}(\th'')$ as well as
$\mathit{bias}^{?}_a(\th'')$, for arbitrary $\th''$, after round $\mathbf{ir}$.
There are two cases depending on $\th'\geq \th$ or not.
If $\th'\geq \th$ then we take $\th''=\min(\th,\thr_u^{\mathbf{ir}+1}-\e)$.
For the same reasons as in Lemma~\ref{lem:global-th} we have $\th''\geq 1/2$, so we can get
$\mathit{bias}(\th')$ as a state of $\mathit{inp}$ after round $\mathbf{ir}$.
We show that $\mathit{bias}(\th')\lact{\overline{\p}}_{\mathbf{ir}+1}\mathit{solo}^{?}$.
If round ${\mathbf{ir}+1}$ is of type $\mathtt{ls}$ then this is direct from definition since
round $\mathbf{ir}$ is not of type $\mathtt{lr}$.
Otherwise, we can just send the whole multiset of values to every process, and
there are not enough $b$'s to pass the $\thr_u^{\mathbf{ir}+1}$ threshold.
So we are done in this case.
The remaining case is when $\th'<\th$.
As in Lemma~\ref{lem:global-th} we reach $\mathit{bias}^{?}_a(\th'')$ for
$\th''=\th-\th'$.
This gives us the $a$'s that we need to convert $\mathit{bias}(\th)$ to
$\mathit{bias}(\th')$.
As in the previous case we argue that we can get $\mathit{solo}^{?}$ after round ${\mathbf{ir}+1}$.
So we are done in this case too. \end{proof}
\begin{lemma}
If $\p$ is not a c-unifier then
\begin{equation*}
\mathit{bias}(\th)\act{\p}\mathit{bias}(\th)\qquad \text{for some $1/2\leq \th <\bar{\thr}$}
\end{equation*} \end{lemma} \begin{proof}
As in the proof of an analogous lemma for the core language,
Lemma~\ref{lem:not-uni}, we examine all the reasons for $\p$ not to be a
c-unifier.
The case of the conditions on constants is the same as in Lemma~\ref{lem:not-uni},
as the first round cannot be of type $\mathtt{ls}$.
If there is no c-equalizer in $\p$ up to round $\mathbf{ir}$ then the reasoning is the
same but now using Lemmas~\ref{lem:co-any-frequency} and~\ref{lem:co-any-bias}. \end{proof}
We can conclude the non-termination case. The proof is the same as for the core language, but now using Lemmas~\ref{lem:co-not-decider} and~\ref{lem:co-global-th}.
\begin{lemma}
If the structural properties from Definition~\ref{def:structure} hold, but the
condition cT1 does not hold, then the algorithm does not terminate. \end{lemma}
We give a proof of Theorem~\ref{thm:ts-coordinators}. The structure of the proof is the same as in the other cases.
\subsubsection*{Part 1: Structural properties for coordinators with timestamps}
\begin{lemma}
If there is a round without a $\mathtt{uni}$ instruction then the algorithm does not terminate. \end{lemma} \begin{proof}
Let $l$ be the first round without a $\mathtt{uni}$ instruction and let $\p$
be any predicate. If $l < \mathbf{ir}$,
we get $(\mathit{solo}^a,0)\act{\p}(\mathit{solo}^a,0)$ for every communication predicate.
Otherwise we get $(\mathit{solo}^a,i) \act{\p} (\mathit{solo}^a,i+1)$ for every communication predicate. \end{proof}
\begin{lemma} \label{lem:co-ts-weak-mult}
If the first round is not of type $\mathtt{ls}$ then the first round should have a $\mathtt{mult}$ instruction. \end{lemma}
\begin{proof}
Let $\p$ be any predicate. If the first round is not of type $\mathtt{ls}$
and does not have a $\mathtt{mult}$ instruction then we will have
$(\mathit{spread},0) \act{\p} (\mathit{spread},0)$. \end{proof}
The following lemma is an adaptation of Lemma~\ref{lem:ts-min} to the extension with coordinators.
\begin{lemma}\label{lem:co-ts-no-mult-fo}
If round $\mathbf{ir}$ is not a $\mathtt{ls}$ round
and either contains a $\mathtt{mult}$ instruction or
$\thr_u^{\mathbf{ir}} < 1/2$, then the algorithm
does not satisfy agreement, or we can remove the $\mathtt{mult}$ instruction and make $\thr_u^{\mathbf{ir}} = 1/2$ without altering the
semantics of the algorithm. \end{lemma} \begin{proof}
Suppose round $\mathbf{ir}$ is not of type $\mathtt{ls}$ and
either contains a $\mathtt{mult}$ instruction or
$\thr_u^{\mathbf{ir}} < 1/2$. We consider two cases:
The first case is when there does not exist any tuple $(f,t)$ with
$(f,t) \lact{\overline{\p}_1}_1 f_1 \dots \lact{\overline{\p}_{\mathbf{ir}-2}}_{\mathbf{ir}-2} f_{\mathbf{ir}-2}$
such that $a,b \in \operatorname{fire}_{\mathbf{ir}-1}(f_{\mathbf{ir}-2},\overline{\p})$.
Notice that this happens in particular when some round before round $\mathbf{ir}$ is
of type $\mathtt{ls}$.
It is then clear that the $\mathtt{mult}$ instructions in round $\mathbf{ir}$ will never be
fired and so we can remove all these instructions in round $\mathbf{ir}$.
Further it is also clear that setting $\thr_u^{\mathbf{ir}} = 1/2$
does not affect the semantics of the algorithm in this case.
So it remains to examine the case when there exists a tuple
$(f,t)$ with
$(f,t) \lact{\overline{\p}_1}_1 f_1\lact{\overline{\p}_2} \cdots\lact{\overline{\p}_{\mathbf{ir}-2}} f_{\mathbf{ir}-2}$ such that
$a,b \in \operatorname{fire}_{\mathbf{ir}-1}(f_{\mathbf{ir}-2},\overline{\p})$.
By the above observation, none of the rounds before round $\mathbf{ir}$
are of type $\mathtt{ls}$.
Further, we have assumed that round $\mathbf{ir}$ is itself not of type $\mathtt{ls}$.
By our proviso, it also follows that round $\mathbf{ir}$ is not of type $\mathtt{lr}$.
Since every $\mathtt{lr}$ round should be followed by a $\mathtt{ls}$ round,
it follows that in this case all the rounds up to and including
round $\mathbf{ir}$ are of type $\mathtt{every}$. Hence, the proof of this case is the
same as the proof of Lemma~\ref{lem:ts-min}. \end{proof}
The proof of the following corollary is similar to that of Corollary~\ref{cor:ts-min}.
\begin{corollary} \label{cor:co-ts-no-mult-fo}
If round $\mathbf{ir}+1$ is not of type $\mathtt{ls}$ and either has a $\mathtt{mult}$ instruction or $\thr_u^{\mathbf{ir}+1} < 1/2$,
then the $\mathtt{mult}$ instruction can be removed and $\thr_u^{\mathbf{ir}+1}$ can be
made $1/2$ without altering the semantics of the algorithm. \end{corollary}
\begin{lemma} \label{lem:co-ts-no-ls}
The first round cannot be of type $\mathtt{ls}$. \end{lemma} \begin{proof}
Suppose the first round is of type $\mathtt{ls}$.
We execute the phase under the global predicate $\overline{\p}$.
By semantics we have $\set{a,b}\incl
\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$, for arbitrary $\th$.
Consider $\th_{\mathbf{ir}+1}=\max(\thr_u^{\mathbf{ir}+1},\mathit{thr}_{\mathbf{ir}+1}(\overline{\p}))+\e$
for some small $\e$.
Thanks to our proviso, the global predicate does not have a c-equalizer,
hence we can freely apply Lemmas~\ref{lem:co-any-frequency}
and~\ref{lem:co-any-bias} to get $\mathit{bias}(\th_{\mathbf{ir}+1})$ or
$\mathit{bias}^{?}(\th_{\mathbf{ir}+1})$ after round $\mathbf{ir}$.
By Corollary~\ref{cor:co-ts-no-mult-fo}, there is no $\mathtt{mult}$ instruction in round
${\mathbf{ir}+1}$.
Hence $\set{b,?}\incl\operatorname{fire}_{\mathbf{ir}+1}(\mathit{bias}(\th_{\mathbf{ir}+1}),\overline{\p})$.
We can apply Lemma~\ref{lem:co-bias-propagation} to set $\mathit{dec}$ of
one process to $b$ in this phase and leave the other processes undecided.
Moreover, in round $\mathbf{ir}$ the variable $\mathit{inp}$ is set to $\mathit{bias}(\max(\th,\th_{\mathbf{ir}+1}))$.
In the next phase, once again we have $\set{a,b}\incl
\operatorname{fire}_1(\mathit{bias}(\max(\th,\th_{\mathbf{ir}+1})),\overline{\p})$.
We can get $\mathit{solo}^a$ after the first round under the global predicate, hence we can set some $\mathit{dec}$ to $a$. \end{proof}
\begin{lemma}\label{lem:co-ts-mult-in-the-first-round}
The first round should have a $\mathtt{mult}$ instruction. \end{lemma}
\begin{proof}
Follows from Lemmas~\ref{lem:co-ts-no-ls} and \ref{lem:co-ts-weak-mult}. \end{proof}
\begin{lemma}\label{lem:co-ts-bias-fire}
For the global predicate $\overline{\p}$, we have
$\set{a,b}\incl\operatorname{fire}_1((\mathit{bias}(\th),i),\overline{\p})$ for
sufficiently big $\th$ and every $i$. \end{lemma}
\begin{proof}
Similar to that of Lemma~\ref{lem:ts-bias-fire}. \end{proof}
\begin{lemma} \label{lem:co-ts-ls-fo}
If round $\mathbf{ir}+1$ is of type $\mathtt{ls}$, then the algorithm does not solve consensus. \end{lemma} \begin{proof}
Suppose round $\mathbf{ir}+1$ is of type $\mathtt{ls}$.
We consider an execution of a phase under the
global predicate and so we can freely use Lemmas~\ref{lem:co-any-frequency}
and~\ref{lem:co-any-bias}.
We have seen that the first round cannot be of type $\mathtt{ls}$.
We can take $\th$ big enough to have $\set{a,b}\incl\operatorname{fire}_1(\mathit{bias}(\th),\overline{\p})$.
We can then use Lemmas~\ref{lem:co-any-frequency}
and~\ref{lem:co-any-bias}
to get $\mathit{bias}(\th')$ or $\mathit{bias}^{?}(\th')$ after round $\mathbf{ir}$; for arbitrary
$\th'$.
In either case, because round $\mathbf{ir}+1$ is of type $\mathtt{ls}$
and the global predicate does not have c-equalizers,
it follows that we can get $\mathit{bias}^{?}(\th'')$ for arbitrary $\th''$
after round ${\mathbf{ir}+1}$. Applying Lemma~\ref{lem:co-bias-propagation}
we can make one process decide on $b$ and prevent the
other processes from deciding.
Notice that the state of $\mathit{inp}$ in the next phase will have $\th'$ processes
with value $b$ and timestamp $1$.
Till now we have put no constraints on $\th'$, so we can take it sufficiently
small so that $1-\th' > \thr_u^1$. This enables us to get $\mathit{solo}^a$ after the first round.
We use this to make the undecided processes decide on $a$. \end{proof}
\begin{lemma} \label{lem:co-ts-constants}
If the property of constants from Definition~\ref{def:ts-structure}
is not satisfied, then agreement is violated. \end{lemma}
\begin{proof}
The proof is similar to the one of Lemma~\ref{lem:ts-constants}.
We consider an execution under the global predicate $\overline{\p}$, and employ
Lemmas~\ref{lem:co-any-frequency} and~\ref{lem:co-ts-bias-fire}.
We start from configuration $(\mathit{bias}(\th_1),0)$ where
$\th_1>\thr_u^1$ big enough so that by Lemma~\ref{lem:co-ts-bias-fire} we get
$\set{a,b} \incl\operatorname{fire}_1(\mathit{bias}(\th_1),\overline{\p})$.
Observe that the first round
cannot be of type $\mathtt{ls}$ by Lemma~\ref{lem:co-ts-no-ls} so we can get arbitrary
bias after the first round.
Due to Lemma~\ref{lem:co-ts-no-mult-fo} we know that there
is a c-preserving round before round ${\mathbf{ir}+1}$.
Let $j\leq \mathbf{ir}$ be the first c-preserving round.
Since all rounds before $j$ are non-c-preserving, by
Lemma~\ref{lem:co-any-frequency} we can get $\mathit{bias}(\thr_u^j+\e)$, as well as
$\mathit{bias}(1-(\thr_u^j+\e))$ before round $j$
(intuitively, we can get bias with many $b$'s or many $a$'s).
Since $j$ is c-preserving, it is either of type $\mathtt{ls}$ or $\mathit{thr}_j(\overline{\p})<
\max(\thr_u^j,\thr_m^{j,k})$.
If it is of type $\mathtt{ls}$ then we get $\set{a,b,?}\in
\operatorname{fire}_j(\mathit{bias}((\mathit{thr}^j_u+\e)),\overline{\p})$ since $\overline{\p}\!\!\downharpoonright_j$ is not $c$-equalizer .
In the other cases we use $\mathit{bias}(\mathit{thr}^j_u+\e)$ if we want to get $\set{b,?}$
and $\mathit{bias}(1-(\mathit{thr}^j_u+\e))$ if we want to get $\set{a,?}$.
If $j$ is of type $\mathtt{every}$ we get it at round $j$.
If $j$ is of type $\mathtt{lr}$ then we get it at round $j+1$, since round $j+1$ must necessarily be of
type $\mathtt{ls}$.
We then use Lemma~\ref{lem:co-bias-propagation} to reach $\mathit{bias}^{?}(\th_{\mathbf{ir}+1})$ or
$\mathit{bias}^{?}(1-\th_{\mathbf{ir}+1})$ before round ${\mathbf{ir}+1}$; where as before
$\th_{\mathbf{ir}+1}=\thr_u^{\mathbf{ir}+1}+\e$.
We have two cases depending on whether $\thr_m^{1,k}< 1-\thr_u^{\mathbf{ir}+1}$ or not.
If $\thr_m^{1,k}< 1-\thr_u^{\mathbf{ir}+1}$ then we reach $\mathit{bias}^{?}(\th_{\mathbf{ir}+1})$ before round
${\mathbf{ir}+1}$, and then make some processes decide on $b$.
After this phase there are $1-\th_{\mathbf{ir}+1}$ processes with timestamp $0$.
We can ensure that among them there is at least one with value $a$ and one
with value $b$.
Since there is a $\mathtt{mult}$ instruction in the first round (Lemma~\ref{lem:co-ts-mult-in-the-first-round}), in the next phase we
send all the values with timestamp $0$.
This way we get $\mathit{solo}^{a}$ after the first round, and make some process decide
on $a$.
The remaining case is when $\thr_m^{1,k}\geq 1-\thr_u^{\mathbf{ir}+1}$.
So we have $\thr_u^1<1-\thr_u^{\mathbf{ir}+1}$, since we have assumed that the
property of constants from Definition~\ref{def:ts-structure} does not hold.
This time we choose to get $\mathit{bias}^{?}_a(\th_{\mathbf{ir}+1})$ after round $\mathbf{ir}$, and make some process
decide on $a$.
Since we have started with $\mathit{bias}(\th_1)$ we can arrange updates so that at
the beginning of the next phase we have at least $\min(\th_1,1-\th_{\mathbf{ir}+1})$ processes who
have value $b$ with timestamp $0$.
But $\thr_u^1<\min(\th_1,1-\th_{\mathbf{ir}+1})$, so by sending an $\mathsf{H}$ set consisting of these
$b$'s we reach $\mathit{solo}$ after the first round and make some processes decide on
$b$. \end{proof}
\begin{lemma}
If all the structural properties are satisfied then the algorithm satisfies agreement. \end{lemma}
\begin{proof}
It is clear that the algorithm satisfies agreement when the initial frequency
is either $\mathit{solo}$ or $\mathit{solo}^a$. Suppose $(\mathit{bias}(\theta),t,d) \act{\p^*}
(\mathit{bias}(\theta'),t',d')$ such that for the first time in this transition sequence,
some process has decided (say the process has decided on $b$).
Since there exists an $\mathtt{ls}$ round in the algorithm, it follows
that every other process could only decide
on $b$ or not decide at all.
Further notice that since $b$ was decided by some
process, it has to be the case that more than $\thr_u^{\mathbf{ir}+1}$ processes have $b$ as
their $\mathit{inp}$ value and the maximum timestamp.
This means that the number of $a$'s in the configuration is less than $1-\thr_u^{\mathbf{ir}+1}$.
Also notice that since round $\mathbf{ir}$ has no $\mathtt{mult}$ instructions
and $\thr_u^{\mathbf{ir}} \ge 1/2$, it follows that no process with
value $a$ has the latest timestamp.
Since $\thr_u^1 \geq 1 - \thr_u^{\mathbf{ir}+1}$, it follows that $a$ cannot be fired
from $\mathit{bias}(\theta')$ using the $\mathtt{uni}$ instruction in the first round. Further
since $\thr_m^{1,k}\geq 1 - \thr_u^{\mathbf{ir}+1}$ it follows that any HO set bigger
than $\thr_m^{1,k}$ has to contain a value with the latest timestamp.
As no $a$ has the latest timestamp, $a$ cannot be fired from $\mathit{bias}(\theta')$
using the $\mathtt{mult}$ instruction as
well. In consequence, the number of $b$'s can only increase from this point onwards, and so
no process can later decide on $a$. A similar
argument applies if the first value decided was $a$. \end{proof}
\subsubsection*{Part 2: termination for coordinators with timestamps.}
The proof for termination is very similar to the case of timestamps.
\begin{lemma}\label{lem:co-ts-dec}
If $\f$ is a c-decider and $(\mathit{solo},t,\mathit{solo}^?)\act{\f}(f,t',d)$
then $(f,d) = (\mathit{solo},\mathit{solo})$ for any ts-tuple $t$. Similarly if $(\mathit{solo}^a,t,\mathit{solo}^?) \act{\f}(f,t',d)$ then $(f,d) = (\mathit{solo}^a,\mathit{solo}^a)$. \end{lemma} \begin{proof}
Immediate. \end{proof}
\begin{lemma}\label{lem:co-ts-str-uni}
Suppose $\f$ is a strong c-unifier and $(\mathit{bias}(\th),t)\act{\f}(f,t')$ then
$f=\mathit{solo}$ or $f=\mathit{solo}^a$ (for every tuple of timestamps $t$). \end{lemma} \begin{proof}
Let $i$ be the round with a c-equalizer.
Till round $i$ we cannot produce $?$.
After round $i$ we have $\mathit{solo}$ or $\mathit{solo}^{a}$.
This stays till round $\mathbf{ir}$ as the rounds after $i$ are c-solo-safe. \end{proof}
\begin{proof}\textbf{Main positive}
Suppose there is a strong c-unifier followed by a c-decider.
After the strong c-unifier we have $\mathit{solo}$ or $\mathit{solo}^{a}$ thanks to Lemma~\ref{lem:co-ts-str-uni}.
After c-decider all processes decide thanks to Lemma~\ref{lem:co-ts-dec}. \end{proof}
\subsubsection*{Part 3: Non-termination for coordinators with timestamps}
\begin{lemma}\label{lem:co-ts-not-decider}
If $\p$ is not a c-decider then $(\mathit{solo},t)\act{\p}(\mathit{solo},t)$ and
$(\mathit{solo}^a,t)\act{\p}(\mathit{solo}^a,t)$ for every tuple of timestamps $t$. \end{lemma} \begin{proof}
If $\p$ is not a c-decider then there is a round that is not c-solo-safe.
So we can go to $\mathit{solo}^{?}$ both from $\mathit{solo}$ and from $\mathit{solo}^{a}$.
From $\mathit{solo}^{?}$ no process can decide. \end{proof}
\begin{lemma} \label{lem:co-ts-not-str-uni}
If $\p$ is not a strong c-unifier
then $(\mathit{bias}(\th),i) \act{\p} (\mathit{bias}(\th),j)$
is possible (for large enough $\th$, arbitrary $i$, and some $j$). \end{lemma}
\begin{proof}
Let $\th > \max(\thr_u^1,\mathit{thr}_1(\p)) + \e$. Suppose $\p$ is not a strong c-unifier.
We do a case analysis.
Suppose $\mathit{thr}_1(\p) < \thr_m^{1,k}$ or $\mathit{thr}_1(\p) < \thr_u^1$.
We can get $\mathit{solo}^?$ after the first round and then
use this to not decide on anything and retain the input tuple.
Suppose $\p$ does not have a c-equalizer. In this case we can
apply Lemmas~\ref{lem:co-ts-bias-fire},~\ref{lem:co-ts-no-ls},~\ref{lem:co-any-bias} and~\ref{lem:co-ts-no-mult-fo} to conclude that
we can reach $\mathit{solo}^{?}$ before round ${\mathbf{ir}+1}$
and so we are done, because nothing is changed after the phase.
The next possible situation is that $i$ is the first component of $\p$ that is a c-equalizer,
and there is a c-preserving round, call it $j$ before round $i$
(it can be round $1$ as well).
Every round before round $j$ is non-c-preserving, so it cannot be of type $\mathtt{ls}$.
This is because a non-c-preserving round of type $\mathtt{ls}$ is necessarily a c-equalizer,
and the first c-equalizer is $i$.
So every round up to round $j-1$ has to be of type $\mathtt{every}$, and round $j$ can be
either of type $\mathtt{every}$ or of type $\mathtt{lr}$ (because an $\mathtt{lr}$ round must be followed by an $\mathtt{ls}$ round
thanks to the assumption on page~\pageref{assumption-ls-lr}).
In both cases, by Lemma~\ref{lem:co-any-frequency} we can get to $\mathit{bias}(\th')$
(where $\th' > \max(\thr_u^j,\mathit{thr}_j(\p))$) before round $j$
(Notice that if $j = 1$ then we
need to reach $\mathit{bias}(\th')$ with $\th' > \max(\thr_u^1,\mathit{thr}_1(\p))$, which is
where we start.)
If round $j$ is of type $\mathtt{every}$, then since it is preserving it is easy to
see that $\mathit{bias}(\th') \lact{\p_j}_j \mathit{solo}^{?}$.
The remaining possibility is that round $j$ is of type $\mathtt{lr}$.
We can get $\mathit{one}^{b}$ after round $j$, and because round $j+1$ is
necessarily of type $\mathtt{ls}$ and is not a c-equalizer, we can get $\mathit{solo}^{?}$ after
round $j+1$.
In both cases, as $j<\mathbf{ir}$ no process changes $\mathit{inp}$ value, or decides in this phase.
The remaining possibility for $\p$ not to be a strong c-unifier is that there is
an $i$-th round that is a c-equalizer followed by a non-c-solo-safe round $j\leq
\mathbf{ir}$.
It is clear that we can reach $\mathit{solo}$ after round $i$ and using this get $\mathit{solo}^{?}$
after round $j$.
Hence nothing will change in the phase giving a transition $(\mathit{bias}(\th),i)
\act{\p} (\mathit{bias}(\th),i)$. \end{proof}
\begin{proof}
\textbf{Main non-termination}
We show that if there is no strong c-unifier followed by a c-decider, then the
algorithm will not terminate. We start with $(\mathit{bias}(\th),0)$ where $\th$
is large enough. If $\p$ is not a strong c-unifier then by
Lemma~\ref{lem:co-ts-not-str-uni}, for every $i$ transition $(\mathit{bias}(\th),i)
\act{\p} (\mathit{bias}(\th),j)$ is possible for some $j$.
Hence if there is no strong c-unifier in the communication predicate then the algorithm will not terminate.
Otherwise let $\p^l$ be the first strong c-unifier.
Notice that $\p^l$ is
not the global predicate. Till $\p^l$ we can maintain
$(\mathit{bias}(\th),i)$ for some $i$.
Suppose $\p^l$ is not a c-decider.
By Lemma~\ref{lem:co-ts-str-uni} the state
after this phase will become $\mathit{solo}$ or $\mathit{solo}^a$. However since $\p^l$ is not a
c-decider, we can choose to not decide on any value. Hence we get the transition
$(\mathit{bias}(\th),i) \act{\p} (\mathit{solo},i+1)$.
Now, since none of the subsequent predicates is a c-decider, by Lemma~\ref{lem:co-ts-not-decider} we can always take a
transition in which no decision happens.
Hence the algorithm does not terminate if there is no c-decider after a strong c-unifier. \end{proof}
\label{sec:app-examples}
In this section we present some other known algorithms that fit into our fragment of the Heard-Of model. We also show their variants that are correct by our characterization. Finally, we show a new algorithm written in the core language, i.e., without timestamps and coordinators.
\begin{algorithm}[H]\label{alg:one-third-more-parameters}
\Send{$(\mathit{inp})$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > \thr_u^1 \cdot |\Pi|$}{$x_1:=\mathit{inp}:=\mathrm{smor}(\mathsf{H})$}
\lIf{$\mathtt{mult}(\mathsf{H}) \land |\mathsf{H}| > \thr_m^1 \cdot |\Pi|$}{$x_1:=\mathit{inp}:=\mathrm{smor}(\mathsf{H})$}
}
\Send{$x_1$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > \thr_u^2 \cdot |\Pi|$}{$\mathit{dec}:=\mathrm{smor}(\mathsf{H})$}
}
\BlankLine
\Cp{$\lF(\p^1 \land \lF\p^2)$}
\caption{Parametrized OneThird algorithm~\cite{charron-heard-distributed09}, $\thr_u^1$, $\thr_m^1, \thr_u^2$ are constants from $(0,1)$} \end{algorithm}
Let us list all the constraints on the constants that would make this algorithm solve consensus. Observe that if we want an algorithm with 2 rounds, then by the structural constraints from Definition~\ref{def:structure} there must be a $\mathtt{mult}$ instruction in the first round and there cannot be a $\mathtt{mult}$ instruction in the second round. The operation in the $\mathtt{mult}$ instruction must be $\mathrm{smor}$. Both rounds need to have a $\mathtt{uni}$ instruction.
The structural constraints from Definition~\ref{def:structure} say \begin{equation*}
$\thr_m^1/2\geq 1-\thr_u^2 \quad\text{and}\quad \thr_u^1\geq 1-\thr_u^2 \end{equation*} Recall that the formula for the border threshold is \begin{equation*}
\bar{\thr}= \max(1-\thr_u^1,1-\thr_m^{1,k}/2) \end{equation*}
We will consider only the case when the global predicate is $(\true,\true)$, so there are no constraints coming from the proviso. Let us see what $\p^1$ and $\p^2$ can be so that we have a unifier and a decider.
The decider is the simpler one. We need to have $\p^2:=(\f_{\thr_u^1},\f_{\thr_u^2})$ or one with bigger thresholds.
For a unifier we need $\p^1:=(\f_{=}\land\f_{\th},\true)$, but we may have a choice for $\th$ with respect to the constants $\thr_u^1, \thr_m^1, \thr_u^2$. \begin{itemize}
\item Suppose $\thr_u^1\leq \thr_m^1$. Then the constraints on unifier reduce to $\th\geq\thr_m^1$.
\item Suppose $\thr_u^1>\thr_m^1$ and all the constraints for the algorithm to
solve consensus are satisfied. Then we can
decrease $\thr_u^1$ to $\thr_m^1$, and they will still be satisfied. Actually,
one can show that one can decrease $\thr_u^1$ to $\thr_m^1/2$. \end{itemize}
To sum up, the constraints are $\th\geq\thr_m^1$ and $\thr_u^1=\thr_m^1/2\geq 1 -\thr_u^2$. If we want to keep the constants as small as possible, we take $\th=\thr_m^1$. We get the best constraints as a function of $\thr_m^1$: \begin{equation*}
\p^1=(\f_{=}\land\f_{\thr_m^1},\true)\qquad \p^2=(\f_{\thr_m^1/2},\ \f_{1-\thr_m^1/2}) \end{equation*}
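For illustration, here is a numerical instantiation of our own (not one taken from the literature): setting $\thr_m^1=2/3$ and choosing the smallest admissible constants gives $\th=2/3$, $\thr_u^1=1/3$ and $\thr_u^2=2/3$, so that \begin{equation*}
\p^1=(\f_{=}\land\f_{2/3},\ \true)\qquad \p^2=(\f_{1/3},\ \f_{2/3}). \end{equation*}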
\begin{algorithm}[H]\label{alg:new-timestamp}
\Send{$(\mathit{inp},ts)$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_1:=\mathrm{maxts}(\mathsf{H})$}
\lIf{$\mathtt{mult}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_1:=\mathrm{maxts}(\mathsf{H})$}
}
\Send{$x_1$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_2:=\mathit{inp}:=\mathrm{smor}(\mathsf{H})$}
}
\Send{$x_2$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$\mathit{dec}:=\mathrm{smor}(\mathsf{H})$}
}
\BlankLine
\Cp{$\lF(\p^1)$ where $\p^1 := (\f_=\land\f_{1/2},\ \f_{1/2},\ \f_{1/2})$}
\caption{A timestamp algorithm from~\cite{Mar:17}} \end{algorithm} This algorithm is correct by Theorem~\ref{thm:ts}. The theorem also says that the communication predicate can be weakened to $\lF(\p^1\land\lF\p^2)$ where $\p^1=(\f_=\land\f_{1/2},\ \f_{1/2},\ \true)$ and $\p^2=(\f_{1/2},\ \f_{1/2},\ \f_{1/2})$.
If we do not want to have the $\f_=$ requirement on the first line, where we check timestamps, we can consider the following modification of the above.
\begin{algorithm}[H]\label{alg:mod-new-timestamp}
\Send{$(\mathit{inp},ts)$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_1:=\mathrm{maxts}(\mathsf{H})$}
\lIf{$\mathtt{mult}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_1:=\mathrm{maxts}(\mathsf{H})$}
}
\Send{$x_1$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_2:=\mathrm{smor}(\mathsf{H})$}
\lIf{$\mathtt{mult}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_2:=\mathrm{smor}(\mathsf{H})$}
}
\Send{$x_2$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_3:=\mathit{inp}:=\mathrm{smor}(\mathsf{H})$}
}
\Send{$x_3$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$\mathit{dec}:=\mathrm{smor}(\mathsf{H})$}
}
\BlankLine
\Cp{$\lF(\p^1\land\lF\p^2)$}
\Where{$\p^1=(\f_{1/2},\ \f_=\land\f_{1/2}, \ \f_{1/2}, \ \true)$
and $\p^2=(\f_{1/2},\ \f_{1/2},\ \f_{1/2}, \ \f_{1/2})$}
\caption{A modification of a timestamp algorithm from~\cite{Mar:17}} \end{algorithm} Note that by Theorem~\ref{thm:ts}, when we move the equalizer to the second round, there necessarily has to be a $\mathtt{mult}$ instruction in the second round.
The next example is a three round version of the Paxos algorithm.
\begin{algorithm}[H]\label{alg:paxos-three}
\Send{$(\mathit{inp},ts)$ $\mathtt{lr}$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_1:=\mathrm{maxts}(\mathsf{H})$}
\lIf{$\mathtt{mult}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$x_1:=\mathrm{maxts}(\mathsf{H})$}
}
\Send{$x_1$ $\mathtt{ls}$}{
\lIf{$\mathtt{uni}(\mathsf{H})$}{$x_2:=\mathit{inp}:=\mathrm{smor}(\mathsf{H})$}
}
\Send{$x_2$ $\mathtt{every}$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/2 \cdot |\Pi|$}{$\mathit{dec}:=\mathrm{smor}(\mathsf{H})$}
}
\BlankLine
\Cp{$\lF(\p^1)$ where $\p^1 := (\f_{1/2},\ \f_{\mathtt{ls}},\ \f_{1/2})$}
\caption{Three round Paxos algorithm} \end{algorithm} The algorithm is correct by Theorem~\ref{thm:ts-coordinators}. Once again it is possible to change constants in the first round to $1/3$ and in the last round to $2/3$ (both in the algorithm and in the communication predicate).
One can ask whether it is possible to have an algorithm with coordinators but without timestamps. Here is a possibility that resembles the three round Paxos:
\begin{algorithm}[H]\label{alg:paxos-coordinators}
\Send{$(\mathit{inp})$ $\mathtt{lr}$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 2/3 \cdot |\Pi|$}{$x_1:=\mathrm{smor}(\mathsf{H})$}
\lIf{$\mathtt{mult}(\mathsf{H}) \land |\mathsf{H}| > 2/3 \cdot |\Pi|$}{$x_1:=\mathrm{smor}(\mathsf{H})$}
}
\Send{$x_1$ $\mathtt{ls}$}{
\lIf{$\mathtt{uni}(\mathsf{H})$}{$x_2:=\mathit{inp}:=\mathrm{smor}(\mathsf{H})$}
}
\Send{$x_2$ $\mathtt{every}$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 2/3 \cdot |\Pi|$}{$\mathit{dec}:=\mathrm{smor}(\mathsf{H})$}
}
\BlankLine
\Cp{$\lF(\p)$\quad where $\p := (\f_{2/3},\f_{\mathtt{ls}},\f_{2/3})$}
\caption{Three round coordinator algorithm} \end{algorithm} The algorithm solves consensus by Theorem~\ref{thm:coordinators}. The constants are bigger than in Paxos because we do not have timestamps: the constraints on constants come from Definition~\ref{def:structure}, and not from Definition~\ref{def:ts-structure}. We can parametrize this algorithm in the same way as we did for Algorithm~\ref{alg:one-third}. \\
The Chandra-Toueg algorithm in the Heard-Of model is actually syntactically the same as the four round Paxos~\cite{Mar:17}. Its communication predicate is even stronger, so it clearly satisfies our constraints.\\
The notion of a weak unifier from Theorem~\ref{thm:core} seems to be new. Here is an algorithm that has a weak unifier, but does not have a strong unifier. The algorithm is probably not the most efficient one, but it is not the same as the others.
\begin{algorithm}[H]\label{alg:weak-unifier}
\Send{$\mathit{inp}$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 9/10 \cdot |\Pi|$}{$x_1:=\mathrm{smor}(\mathsf{H})$}
\lIf{$\mathtt{mult}(\mathsf{H}) \land |\mathsf{H}| > 4/5 \cdot |\Pi|$}{$x_1:=\mathrm{smor}(\mathsf{H})$}
}
\Send{$x_1$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/100 \cdot |\Pi|$}{$x_2:=\mathit{inp}:=\min(\mathsf{H})$}
\lIf{$\mathtt{mult}(\mathsf{H}) \land |\mathsf{H}| > 1/100 \cdot |\Pi|$}{$x_2:=\mathit{inp}:=\min(\mathsf{H})$}
}
\Send{$x_2$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 3/5 \cdot |\Pi|$}{$x_3:=\min(\mathsf{H})$}
}
\Send{$x_3$}{
\lIf{$\mathtt{uni}(\mathsf{H}) \land |\mathsf{H}| > 1/100 \cdot |\Pi|$}{$\mathit{dec}:=\mathrm{smor}(\mathsf{H})$}
}
\BlankLine
\Cp{$\lF(\p^1\land\lF\p^2)$}
\Where{$\p^1=(\f_=\land\f_{4/5},\ \f_{1/100},\ \true,\ \true)$ and
$\p^2 = (\f_{9/10},\ \f_{1/100}, \ \f_{3/5},\ \f_{1/100})$}
\caption{An algorithm with a weak unifier} \end{algorithm} Clearly $\p^1$ is a weak-unifier and $\p^2$ is both a finalizer and a decider.
\end{document} | arXiv |
\begin{document}
\title{Generalized simultaneous component analysis of binary and quantitative data}
\begin{abstract} In the current era of systems biological research there is a need for the integrative analysis of binary and quantitative genomics data sets measured on the same objects. One standard tool for exploring the underlying dependence structure present in multiple quantitative data sets is the simultaneous component analysis (SCA) model. However, it does not have any provisions when part of the data is binary. To this end, we propose the generalized SCA (GSCA) model, which takes into account the distinct mathematical properties of binary and quantitative measurements in the maximum likelihood framework. Like in the SCA model, a common low dimensional subspace is assumed to represent the shared information between these two distinct types of measurements. However, the GSCA model can easily be overfitted when a rank larger than one is used, causing some of the estimated parameters to become very large. To achieve a low rank solution and combat overfitting, we propose to use a concave variant of the nuclear norm penalty. An efficient majorization algorithm is developed to fit this model with different concave penalties. Realistic simulations (low signal-to-noise ratio and highly imbalanced binary data) are used to evaluate the performance of the proposed model in recovering the underlying structure. Also, a missing value based cross validation procedure is implemented for model selection. We illustrate the usefulness of the GSCA model for exploratory data analysis of quantitative gene expression and binary copy number aberration (CNA) measurements obtained from the GDSC1000 data sets. \end{abstract}
\keywords{Data integration, SCA, binary data, low rank matrix approximation, concave penalty, majorization.}
\section{Introduction} In biological research it becomes increasingly common to have measurements of different aspects of information on the same objects to study complex biological systems. The resulting coupled data sets should be analyzed simultaneously to explore the dependency between variables in different data sets and to reach a global understanding of the underlying biological system. The Simultaneous Component Analysis (SCA) model is one of the standard methods for the integrative analysis of such coupled data sets in different areas, from psychology to chemistry and biology \cite{van2009structured}. SCA discovers the common low dimensional column subspace of the coupled quantitative data sets, and this subspace represents the shared information between them.\\
Next to the quantitative measurements (such as gene expression data), it is common in biological research to have additional binary measurements, in which distinct categories differ in quality rather than in quantity (such as mutation data). Typical examples include the measurements of point mutations, which reflect the mutation status of the DNA sequence, the binary measurements of copy number aberrations (CNA), in which ``1'' indicates aberrations (gains or losses of segments in chromosomal regions) that occurred and ``0'' indicates the normal wild-type status, and binarized DNA methylation measurements, in which ``1'' indicates a high level of methylation and ``0'' means a low level \cite{iorio2016landscape}. Compared to a quantitative measurement, a binary measurement only has two mutually exclusive outcomes, such as presence vs absence (or true vs false), which are usually labeled as ``1'' and ``0''. However, ``1'' and ``0'' indicate abstract representations of two categories rather than quantitative values 1 and 0. As such, the special mathematical properties of binary data should be taken into account in the data analysis. In most biological data sets, the number of ``0''s is significantly larger than the number of ``1''s for most binary variables, making the data imbalanced. Therefore, an additional requirement of the data analysis method is that it should be able to handle imbalanced data.\\
There is a need for statistical methods appropriate for doing an integrative analysis of coupled binary and quantitative data sets in biology research. The standard SCA models \cite{van2009structured, van2009integrating} that use column centering processing steps and least-squares loss criteria are not appropriate for binary data sets. Recently, iClusterPlus \cite{mo2013pattern} was proposed as a factor analysis framework to model discrete and quantitative data sets simultaneously by exploiting the properties of exponential family distributions. In this framework, the special properties of binary, categorical, and count variables are taken into account in a similar way as in generalized linear models. The common low dimensional latent variables and data set specific coefficients are used to fit the discrete and quantitative data sets. For the binary data set, the Bernoulli distribution is assumed and the canonical logit link function is used. The sum of the log likelihood is then used as the objective function. Furthermore, the approach allows the use of a lasso type penalty for feature selection. The Monte Carlo Newton–Raphson algorithm for this general framework, however, involves a very slow Markov Chain Monte Carlo simulation process. Both the high complexity of the model and the algorithmic inefficiency limit its use for large data sets and exploring its properties through simulations.\\
In this paper, we generalize the SCA model to binary and quantitative data from a probabilistic perspective similar to that in Collins \cite{collins2002generalization} and Mo \cite{mo2013pattern}. However, the generalized SCA model can easily lead to overfitting when a rank larger than $1$ is used, causing some of the parameters to become very large. Therefore, a penalty on the singular values of the matrix containing the parameters is used to simultaneously induce the low rank structure in a soft manner and to control the scale of estimated parameters. A natural choice is the convex nuclear norm penalty, which is widely used in low rank approximation problems \cite{koltchinskii2011nuclear, groenen2016multinomial, wu2015fast}. However, the nuclear norm penalty shrinks all the singular values (latent factors) to the same degree, leading to biased estimates of the important latent factors. Hence, we would like to reduce the shrinkage for the most important latent factors while increasing the shrinkage for the unimportant latent factors. This nonlinear shrinkage strategy has shown its superiority in recent work on low rank matrix approximation problems in the presence of Gaussian noise \cite{gavish2017optimal, josse2016adaptive}. Therefore, we will explore the nonlinear shrinkage of the latent factors through concave penalties in our GSCA model. The fitting of the resulting GSCA model is a penalized maximum likelihood estimation problem. We derive a Majorization-Minimization (MM) \cite{de1994block,hunter2004tutorial} based algorithm to solve it. Simple closed form updates for all the parameters are derived in each iteration. A missing value based cross validation procedure is also implemented to do model selection. Our algorithm is easy to implement and guaranteed to decrease the loss function monotonically in each iteration.\\
In the next sections, we will generalize the SCA model for binary and quantitative data, introduce the concave penalties and describe the majorization algorithm to estimate the model parameters. Section 4 introduces the simulations using low signal-to-noise ratios and highly imbalanced binary data, the performance of the GSCA model in recovering the underlying structure and the cross validation procedure. Section 5 introduces the GDSC data \cite{iorio2016landscape}, and the results of the analysis of this data.\\
\section{The GSCA model} Before the GSCA model is introduced, consider the standard SCA model. The quantitative measurements on the same $I$ objects from two different platforms result in two data sets $\mathbf{X}_1$($I\times J_1$) and $\mathbf{X}_2$($I\times J_2$), in which $J_1$ and $J_2$ are the number of variables. Assume both $\mathbf{X}_1$ and $\mathbf{X}_2$ are column centered. The standard SCA model can be expressed as \begin{equation}\label{eq1} \begin{aligned} \mathbf{X}_1 &= \mathbf{AB}_1^{\text{T}} + \mathbf{E}_1\\ \mathbf{X}_2 &= \mathbf{AB}_2^{\text{T}} + \mathbf{E}_2,\\ \end{aligned} \end{equation} where $\mathbf{A}$($I\times R$) denotes the common component scores (or latent variables), which span the common column subspace of $\mathbf{X}_1$ and $\mathbf{X}_2$, $\mathbf{B}_1$($J_1\times R$) and $\mathbf{B}_2$($J_2\times R$) are the data set specific loading matrices for $\mathbf{X}_1$ and $\mathbf{X}_2$ respectively, $\mathbf{E}_1$($I\times J_1$) and $\mathbf{E}_2$($I\times J_2$) are residuals, $R$, $R \ll \min(I,J_1,J_2)$, is an unknown low rank. Orthogonality is imposed on $\mathbf{A}$ as $\mathbf{A}^{\text{T}}\mathbf{A}=\mathbf{I}_{R}$, where $\mathbf{I}_{R}$ indicates the $R\times R$ identity matrix, to have a unique solution. $\mathbf{A}$, $\mathbf{B}_1$ and $\mathbf{B}_2$ are estimated by minimizing the sum of the squared residuals $\mathbf{E}_1$ and $\mathbf{E}_2$.\\
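As a side remark for readers who prefer code, the least-squares SCA solution can be obtained from a truncated SVD of the concatenated column-centered data. The following Python/NumPy sketch is an illustration of ours (all function and variable names are our own choices), not part of any published implementation:
\begin{verbatim}
import numpy as np

def sca(X1, X2, R):
    """Least-squares SCA via a truncated SVD of the concatenated,
    column-centered data; returns scores A (A'A = I_R) and loadings."""
    X = np.hstack([X1, X2])                    # I x (J1 + J2)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    A = U[:, :R]                               # common component scores
    B = (s[:R, None] * Vt[:R, :]).T            # loadings absorb singular values
    return A, B[:X1.shape[1], :], B[X1.shape[1]:, :]

# toy usage with simulated centered data
rng = np.random.default_rng(0)
X1 = rng.standard_normal((20, 8));  X1 -= X1.mean(axis=0)
X2 = rng.standard_normal((20, 5));  X2 -= X2.mean(axis=0)
A, B1, B2 = sca(X1, X2, R=2)
\end{verbatim}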
\subsection{The GSCA model of binary and quantitative data sets} Following the probabilistic interpretation of the PCA model \cite{tipping1999probabilistic}, the high dimensional quantitative data set $\mathbf{X}_2$ can be assumed to be a noisy observation from a deterministic low dimensional structure $\mathbf{\Theta}_2$($I\times J_2$) with independent and identically distributed measurement noise, $\mathbf{X}_2 = \mathbf{\Theta}_2 + \mathbf{E}_2$. Elements in $\mathbf{E}_2$($I\times J_2$) follow a normal distribution with mean 0 and variance $\sigma^2$, $\epsilon_{2ij} \sim \text{N}(0,\sigma^2$). In the same way, following the interpretation of the exponential family PCA on binary data \cite{collins2002generalization}, we assume there is a deterministic low dimensional structure $\mathbf{\Theta}_1$($I\times J_1$) underlying the high dimensional binary observation $\mathbf{X}_1$. Elements in $\mathbf{X}_1$ follow the Bernoulli distribution with parameters $\phi(\mathbf{\Theta}_1)$, $x_{1ij} \sim \text{Ber}(\phi(\theta_{1ij}))$. Here $\phi()$ is the element wise inverse link function in the generalized linear model for binary data; $x_{1ij}$ and $\theta_{1ij}$ are the $ij$-th element of $\mathbf{X}_1$ and $\mathbf{\Theta}_1$ respectively. If the logit link is used, $\phi(\theta) = (1+\exp(-\theta))^{-1}$, while if the probit link is used, $\phi(\theta) = \Phi(\theta)$, where $\Phi$ is the cumulative density function of the standard normal distribution. Although in our paper, we only use the logit link in deriving the algorithm and in setting up the simulations, the option for the probit link is included in our implementation. The two link functions are similar, but their interpretations can be quite different \cite{agresti2013categorical}.\\
In the same way as in the standard SCA model, $\mathbf{\Theta}_1$ and $\mathbf{\Theta}_2$ are assumed to lie in the same low dimensional subspace, which represents the shared information between the coupled matrices $\mathbf{X}_1$ and $\mathbf{X}_2$. The commonly used column centering is not appropriate for the binary data set as the centered binary data will not be ``1'' and ``0'' anymore. Therefore, we include column offset terms $\bm{\mu}_1$($J_1 \times 1$) and $\bm{\mu}_2$($J_2 \times 1$) for a model based centering. The above ideas are modeled as \begin{equation}\label{eq2} \begin{aligned} \mathbf{\Theta}_1 &= \mathbf{1}\bm{\mu}_1^{\text{T}} + \mathbf{AB}_1^{\text{T}}\\ \mathbf{\Theta}_2 &= \mathbf{1}\bm{\mu}_2^{\text{T}} + \mathbf{AB}_2^{\text{T}},\\ \end{aligned} \end{equation} where $\mathbf{1}$($I\times 1$) is an $I$-dimensional vector of ones; the parameters $\mathbf{A}$, $\mathbf{B}_1$ and $\mathbf{B}_2$ have the same meaning as in the standard SCA model. Constraints $\mathbf{A}^{\text{T}}\mathbf{A}=I\mathbf{I}_R$ and $\mathbf{1}^{\text{T}}\mathbf{A} = \mathbf{0}$ are imposed to have a unique solution.\\
For the generalization to quantitative and binary coupled data, we follow the maximum likelihood estimation framework. The negative log likelihood for fitting coupled binary $\mathbf{X}_1$ and quantitative $\mathbf{X}_2$ is used as the objective function. In order to implement a missing value based cross validation procedure \cite{bro2008cross}, we introduce two weighting matrices $\mathbf{Q}_1$($I\times J_1$) and $\mathbf{Q}_2$($I\times J_2$) to handle the missing elements. The $ij$-th element of $\mathbf{Q}_1$, $q_{1ij}$, equals 0 if the $ij$-th element in $\mathbf{X}_1$ is missing, and equals 1 otherwise. The same rules apply to $\mathbf{Q}_2$ and $\mathbf{X}_2$. The loss functions $f_1(\mathbf{\Theta}_1)$ for fitting $\mathbf{X}_1$ and $f_2(\mathbf{\Theta}_2,\sigma^2)$ for fitting $\mathbf{X}_2$ are defined as follows: \begin{equation}\label{eq3} \begin{aligned}
f_1(\mathbf{\Theta}_1) &= -\sum_{i}^{I}\sum_{j}^{J_1} q_{1ij} \left[x_{1ij}\log(\phi(\theta_{1ij})) + (1-x_{1ij})\log(1-\phi(\theta_{1ij}))\right] \\ f_2(\mathbf{\Theta}_2,\sigma^2) &= \frac{1}{2\sigma^2}
||\mathbf{Q}_2 \odot (\mathbf{X}_2-\mathbf{\Theta}_2)||_F^2 + \frac{1}{2} ||\mathbf{Q}_2||_0 \log(2\pi \sigma^2),\\
\end{aligned} \end{equation}
where $\odot$ indicates element-wise multiplication; $||\cdot||_F$ is the Frobenius norm of a matrix; $||\cdot||_0$ is the pseudo $L_0$ norm of a matrix, which equals the number of nonzero elements.\\
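As an illustration, the two loss terms of equation (\ref{eq3}) can be evaluated as in the following sketch (ours), which assumes the logit link, NumPy as the numerical backend, and that missing entries of the data matrices are coded as $0$ together with a $0$ weight:
\begin{verbatim}
import numpy as np

def f1_bernoulli(X1, Theta1, Q1):
    # Masked Bernoulli negative log-likelihood with the logit link.
    P = 1.0 / (1.0 + np.exp(-Theta1))          # phi(Theta1), elementwise
    eps = 1e-12                                # guards against log(0)
    ll = X1 * np.log(P + eps) + (1.0 - X1) * np.log(1.0 - P + eps)
    return -np.sum(Q1 * ll)

def f2_gaussian(X2, Theta2, Q2, sigma2):
    # Masked Gaussian negative log-likelihood; Q2.sum() plays ||Q2||_0.
    rss = np.sum((Q2 * (X2 - Theta2)) ** 2)
    return rss / (2.0 * sigma2) + 0.5 * Q2.sum() * np.log(2.0 * np.pi * sigma2)
\end{verbatim}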
The shared information between $\mathbf{X}_1$ and $\mathbf{X}_2$ is assumed to be fully represented by the low dimensional subspace spanned by the common component score matrix $\mathbf{A}$. Thus, $\mathbf{X}_1$ and $\mathbf{X}_2$ are conditionally independent given that the low dimensional structures $\mathbf{\Theta}_1$ and $\mathbf{\Theta}_2$ lie in the same low dimensional subspace. Therefore, the joint loss function is the direct sum of the negative log likelihood functions for fitting $\mathbf{X}_1$ and $\mathbf{X}_2$. \begin{equation}\label{eq4} \begin{aligned}
f(\mathbf{\Theta}_1,\mathbf{\Theta}_2,\sigma^2) &= -\log(p(\mathbf{X}_1,\mathbf{X}_2|\mathbf{\Theta}_1,\mathbf{\Theta}_2,\sigma^2))\\
& = -\log(p(\mathbf{X}_1|\mathbf{\Theta}_1) p(\mathbf{X}_2|\mathbf{\Theta}_2,\sigma^2))\\
&= -\log(p(\mathbf{X}_1|\mathbf{\Theta}_1))-\log(p(\mathbf{X}_2|\mathbf{\Theta}_2,\sigma^2))\\
&= f_1(\mathbf{\Theta}_1) + f_2(\mathbf{\Theta}_2,\sigma^2).\\ \end{aligned} \end{equation}
\subsection{Concave penalties as surrogates for low rank constraint} To arrive at meaningful solutions for the GSCA model, it is necessary to introduce penalties on the estimated parameters. If we take $\mathbf{\Theta} = [\mathbf{\Theta}_1 ~ \mathbf{\Theta}_2]$, $\bm{\mu} = [\bm{\mu}_1^{\text{T}} \bm{\mu}_2^{\text{T}}]^{\text{T}}$, and $\mathbf{B} = [\mathbf{B}_1 ~ \mathbf{B}_2]$, equation (\ref{eq2}) in the GSCA model can be expressed as $\mathbf{\Theta} = \mathbf{1}\bm{\mu}^{\text{T}} + \mathbf{AB}^{\text{T}}$. In the above interpretation of the GSCA model, the low rank constraint on the column centered $\mathbf{\Theta}$ is expressed as the multiplication of two rank $R$ matrices $\mathbf{A}$, $\mathbf{B}$, $\mathbf{Z} = \mathbf{\Theta} - \mathbf{1}\bm{\mu}^{\text{T}} = \mathbf{AB}^{\text{T}}$. However, using an exact low rank constraint in the GSCA model has some issues. First, the maximum likelihood estimation of this model easily leads to overfitting. Given the constraint that $\mathbf{A}^{\text{T}}\mathbf{A}=I\mathbf{I}$, overfitting manifests itself in that some elements in $\mathbf{B}_1$ tend to diverge to plus or minus infinity. In addition, the exact low rank $R$ in the GSCA model is commonly unknown and its selection is not straightforward.\\
In this paper, we take a penalty based approach to control the scale of estimated parameters and to induce a low rank structure simultaneously. The low rank constraint on $\mathbf{Z}$ is obtained by a penalty function $g(\mathbf{Z})$, which shrinks the singular values of $\mathbf{Z}$ to achieve a low rank structure. The most widely used convex surrogate of a low rank constraint is the nuclear norm penalty, which is simply the sum of singular values, $g(\mathbf{Z}) = \sum_{r} \xi_r(\mathbf{Z})$ \cite{koltchinskii2011nuclear}, where $\xi_r(\mathbf{Z})$ represents the $r$-th singular value of $\mathbf{Z}$. The nuclear norm penalty was also used in a related work \cite{wu2015fast}. Although the convex nuclear norm penalty is easy to optimize, the same amount of shrinkage is applied to all the singular values, leading to biased estimates of the large singular values. Recent work \cite{gavish2017optimal, lu2015generalized} already showed the superiority of concave surrogates of a low rank constraint under Gaussian noise compared to the nuclear norm penalty. We take $g(\mathbf{Z}) = \sum_{r} g(\xi_r(\mathbf{Z}))$ as our concave surrogate of a low rank constraint on $\mathbf{Z}$, where $g(\xi_r)$ is a concave penalty function of $\xi_r$. After replacing the low rank constraint in equation (\ref{eq4}) by $g(\mathbf{Z})$, our model becomes, \begin{equation}\label{eq5} \begin{aligned}
\min_{\bm{\mu},\mathbf{Z},\sigma^2} \quad & f_1(\mathbf{\Theta}_1) + f_2(\mathbf{\Theta}_2,\sigma^2) + \lambda g(\mathbf{Z}) \\
\text{s.t.~} \mathbf{\Theta} &= \mathbf{1}\bm{\mu}^{\text{T}} + \mathbf{Z} \\
\mathbf{\Theta} &= [\mathbf{\Theta}_1 ~ \mathbf{\Theta}_2] \\
\mathbf{1}^{\text{T}}\mathbf{Z} &= \mathbf{0}. \end{aligned} \end{equation}
The most commonly used non-convex surrogates of a low rank constraint are concave functions, including $L_{q:0 < q < 1}$ (bridge penalty) \cite{fu1998penalized,liu2007support}, smoothly clipped absolute deviation (SCAD) \cite{fan2001variable}, a frequentist version of the generalized double Pareto (GDP) shrinkage \cite{armagan2013generalized} and others \cite{lu2015generalized}. We include the first three concave penalties in our algorithm. Their formulas and supergradients (the counterpart concept of subgradient in convex analysis, which will be used in the derivation of the algorithm) are shown in Table \ref{tab1}, and their thresholding properties are shown in Fig.~1. Since the nuclear norm penalty is a linear function of the singular values, it is both convex and concave. Also, it is a special case of $L_q$ penalty when setting $q=1$. Thus, the algorithm developed in this paper also applies to nuclear norm penalty.\\
\begin{table}[htbp] \centering \caption{\label{tab1} Some commonly used concave penalty functions. $\eta$ is taken as the singular value and $q$, $\lambda$ and $\gamma$ are tuning parameters.} \begin{tabular}{lll}
\toprule Penalty & Formula & Supergradient \\
\midrule
Nuclear norm & $ \lambda \eta $ & $\lambda$ \\
$L_{q}$ & $ \lambda \eta^q $ & $\left\{ \begin{array}{ll} +\infty &\textrm{$\eta=0$}\\
\lambda q \eta^{q-1} &\textrm{$\eta>0$}\\ \end{array} \right.$ \\
SCAD & $\left\{ \begin{array}{ll} \lambda \eta &\textrm{$\eta \leq \lambda$}\\
\frac{-\eta^2+2\gamma \lambda \eta - \lambda^2}{2(\gamma-1)} &\textrm{$\lambda < \eta \leq \gamma \lambda$}\\
\frac{\lambda^2(\gamma+1)}{2} &\textrm{$\eta > \gamma \lambda$}\\ \end{array} \right.$ &
$\left\{ \begin{array}{ll} \lambda &\textrm{$\eta \leq \lambda$}\\
\frac{\gamma \lambda - \eta}{\gamma-1} &\textrm{$\lambda < \eta \leq \gamma \lambda$}\\
0 &\textrm{$\eta > \gamma \lambda$}\\ \end{array} \right.$ \\
GDP & $ \lambda \log(1+\frac{\eta}{\gamma}) $ & $\frac{\lambda}{\gamma + \eta}$ \\
\bottomrule \end{tabular} \end{table}
\begin{figure}\label{Fig:1}
\caption{Thresholding properties of the nuclear norm, $L_q$, SCAD and GDP penalty functions from Table \ref{tab1}.}
\end{figure}
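The supergradients in Table \ref{tab1} are simple elementwise formulas. The following sketch (ours; the function name and interface are our own choices) evaluates them at a vector of singular values:
\begin{verbatim}
import numpy as np

def supergradient(eta, penalty, lam, gamma=None, q=None):
    # Supergradients from Table 1, evaluated at singular values eta >= 0.
    eta = np.asarray(eta, dtype=float)
    if penalty == "nuclear":
        return np.full_like(eta, lam)
    if penalty == "lq":                        # L_q with 0 < q < 1
        with np.errstate(divide="ignore"):
            return np.where(eta > 0, lam * q * eta ** (q - 1.0), np.inf)
    if penalty == "scad":
        return np.where(eta <= lam, lam,
               np.where(eta <= gamma * lam,
                        (gamma * lam - eta) / (gamma - 1.0), 0.0))
    if penalty == "gdp":
        return lam / (gamma + eta)
    raise ValueError("unknown penalty: " + penalty)

print(supergradient([2.0, 0.5, 0.0], "gdp", lam=1.0, gamma=1.0))
\end{verbatim}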
\section{Algorithm} Parameters $\bm{\mu}$, $\mathbf{Z}$, and $\sigma^2$ of the joint loss function in equation (\ref{eq5}) are updated alternatingly while fixing other parameters until reaching predefined stopping criteria. If the updating sequence follows the same order, the joint loss function is guaranteed to decrease monotonically. However, even when fixing parameters $\bm{\mu}$ and $\sigma^2$, the minimization of equation (\ref{eq5}) with respect to $\mathbf{Z}$ is still a non-smooth and non-convex problem. We solve this problem using the MM principle \cite{de1994block,hunter2004tutorial}.\\
\subsection{The majorization of $f_1(\mathbf{\Theta}_1) + f_2(\mathbf{\Theta}_2,\sigma^2) + \lambda g(\mathbf{Z})$} When fixing $\sigma^2$, we can majorize $f(\mathbf{\Theta}) = f_1(\mathbf{\Theta}_1) + f_2(\mathbf{\Theta}_2)$ to a quadratic function of the parameter $\mathbf{\Theta}$. In addition, the concave penalty function $g(\mathbf{Z})$ can be majorized to a linear function of the singular values by exploiting the concavity. The resulting majorized problem can be analytically solved by weighted singular value thresholding \cite{lu2015generalized}. In the following derivation, the symbol $c$ represents a constant that does not depend on any unknown parameters, rather than a specific value.\\
\subsubsection*{The majorization of $f(\mathbf{\Theta})$} Both $f_1(\mathbf{\Theta}_1)$ and $f_2(\mathbf{\Theta}_2)$ can be expressed as $f_1(\mathbf{\Theta}_1) = \sum_{i}^{I}\sum_{j}^{J_1} q_{1ij} f_{1ij}(\theta_{1ij})$ and $f_2(\mathbf{\Theta}_2) = \sum_{i}^{I}\sum_{j}^{J_2} q_{2ij} f_{2ij}(\theta_{2ij})$, in which $f_{1ij}(\theta_{1ij}) = -\left[x_{1ij}\log(\phi(\theta_{1ij})) + (1-x_{1ij})\log(1-\phi(\theta_{1ij}))\right]$ and $f_{2ij}(\theta_{2ij}) = \frac{1}{2\sigma^2} (x_{2ij}-\theta_{2ij})^2 + c$. When the logit link is used, the following results can easily be derived: $\nabla f_{1ij}(\theta_{1ij}) = \phi(\theta_{1ij}) - x_{1ij}$, $\nabla^2 f_{1ij}(\theta_{1ij}) = \phi(\theta_{1ij})(1-\phi(\theta_{1ij}))$, $\nabla f_{2ij}(\theta_{2ij}) = \frac{1}{\sigma^2} (\theta_{2ij} - x_{2ij})$, $\nabla^2 f_{2ij}(\theta_{2ij}) = \frac{1}{\sigma^2}$. Assume that both $\nabla^2 f_{1ij}(\theta_{1ij})$ and $\nabla^2 f_{2ij}(\theta_{2ij})$ are upper bounded by a constant $L$. Since $\nabla^2 f_{1ij}(\theta_{1ij}) \leq 0.25$ when the logit link is used \cite{de2006principal}, we can set $L=\text{max}(0.25, 1/\sigma^2)$. Take $f(\theta)$ as the general representation of $f_{1ij}(\theta_{1ij})$ and $f_{2ij}(\theta_{2ij})$. According to Taylor's theorem and the assumption that $\nabla^2 f(\theta) \leq L$ for $\theta \in \text{domain}f$, we have the following inequality, \begin{equation}\label{eq6} \begin{aligned} f(\theta) &= f(\theta^k) + <\nabla f(\theta^k), \theta-\theta^k> + \frac{1}{2}(\theta-\theta^k)^{\text{T}} \nabla^{2}f(\theta^k + t(\theta-\theta^k))(\theta-\theta^k) \\
&\leq f(\theta^k) + <\nabla f(\theta^k), \theta-\theta^k> + \frac{L}{2}(\theta-\theta^k)^2 \\
&= \frac{L}{2}(\theta-\theta^k + \frac{1}{L}\nabla f(\theta^k))^2 + c,\\ \end{aligned} \end{equation} where $\theta^k$ is the $k$-th approximation of $\theta$, $t$ is an unknown constant and $t\in[0,1]$. Therefore, we have the following inequalities about $f_{1ij}(\theta_{1ij})$ and $f_{2ij}(\theta_{2ij})$, $f_{1ij}(\theta_{1ij}) \leq \frac{L}{2}(\theta_{1ij} - \theta_{1ij}^k + \frac{1}{L}\nabla f_{1ij}(\theta_{1ij}^k ))^2 + c$ and $f_{2ij}(\theta_{2ij}) \leq \frac{L}{2}(\theta_{2ij} - \theta_{2ij}^k + \frac{1}{L} \nabla f_{2ij}(\theta_{2ij}^k))^2 + c$.\\
Assume $\nabla f_1(\mathbf{\Theta}_1^k)$ and $\nabla f_2(\mathbf{\Theta}_2^k)$ are the matrix forms of $\nabla f_{1ij}(\theta_{1ij}^k )$ and $\nabla f_{2ij}(\theta_{2ij}^k )$ respectively. The inequality of $f_1(\mathbf{\Theta}_1)$ can be derived as $f_1(\mathbf{\Theta}_1)\leq \frac{L}{2} \sum_{i}^{I}\sum_{j}^{J_1} q_{1ij}[(\theta_{1ij} - \theta_{1ij}^k + \frac{1}{L}\nabla f_{1ij}(\theta_{1ij}^k ))^2] + c = \frac{L}{2} ||\mathbf{Q}_1 \odot (\mathbf{\Theta}_1 - \mathbf{\Theta}_1^k + \frac{1}{L} \nabla f_1(\mathbf{\Theta}_1^k))||_F^2 + c$. In the same way, the inequality of $f_2(\mathbf{\Theta}_2)$ is $f_2(\mathbf{\Theta}_2)\leq \frac{L}{2} ||\mathbf{Q}_2 \odot (\mathbf{\Theta}_2 - \mathbf{\Theta}_2^k + \frac{1}{L} \nabla f_2(\mathbf{\Theta}_2^k))||_F^2 + c$. Based on these two inequalities, we can derive the upper bound of $f(\mathbf{\Theta})$ at the $k$-th approximated parameter $\mathbf{\Theta}^k = [\mathbf{\Theta}_1^k ~ \mathbf{\Theta}_2^k]$ as follows, \begin{equation}\label{eq7} \begin{aligned} f(\mathbf{\Theta}) &= f_1(\mathbf{\Theta}_1) + f_2(\mathbf{\Theta}_2)\\
&\leq \frac{L}{2} ||\mathbf{Q}_1 \odot (\mathbf{\Theta}_1 - \mathbf{\Theta}_1^k + \frac{1}{L} \nabla f_1(\mathbf{\Theta}_1^k))||_F^2 + \frac{L}{2} ||\mathbf{Q}_2 \odot (\mathbf{\Theta}_2 - \mathbf{\Theta}_2^k + \frac{1}{L} \nabla f_2(\mathbf{\Theta}_2^k))||_F^2 + c\\
& = \frac{L}{2} ||\mathbf{Q} \odot(\mathbf{\Theta} - \mathbf{\Theta}^k + \frac{1}{L} \nabla f(\mathbf{\Theta}^k))||_F^2 + c, \end{aligned} \end{equation} where $\nabla f(\mathbf{\Theta}^k) = [\nabla f_1(\mathbf{\Theta}_1^k) ~ \nabla f_2(\mathbf{\Theta}_2^k)]$, $\nabla f_1(\mathbf{\Theta}_1^k) = \phi(\mathbf{\Theta}_1^k) - \mathbf{X}_1$ and $\nabla f_2(\mathbf{\Theta}_2^k) = \frac{1}{\sigma^2} (\mathbf{\Theta}_2^k - \mathbf{X}_2)$. Following \cite{kiers1997weighted}, we further majorize the weighted least-squares in equation (\ref{eq7}) into a quadratic function of $\mathbf{\Theta}$ as \begin{equation}\label{eq8} \begin{aligned}
&\frac{L}{2} ||\mathbf{Q} \odot(\mathbf{\Theta} - \mathbf{\Theta}^k + \frac{1}{L} \nabla f(\mathbf{\Theta}^k))||_F^2 \\
&\leq \frac{L}{2} ||\mathbf{\Theta}-\mathbf{H}^k||_F^2 + c, \\ \end{aligned} \end{equation} where $\mathbf{H}^k = \mathbf{Q} \odot(\mathbf{\Theta}^k - \frac{1}{L} \nabla f(\mathbf{\Theta}^k)) + (\mathbf{1}\mathbf{1}^{\text{T}}-\mathbf{Q})\odot \mathbf{\Theta}^k = \mathbf{\Theta}^k - \frac{1}{L}(\mathbf{Q} \odot \nabla f(\mathbf{\Theta}^k))$.\\
\subsubsection*{The majorization of $g(\mathbf{Z})$} Let $g(\xi_r)$ be a concave function. From the definition of concavity \cite{boyd2004convex}, we have $g(\xi_r) \leq g(\xi_r^k) + \omega_r^k(\xi_r - \xi_r^k) = \omega_r^k \xi_r + c$, in which $\xi_r^k = \xi_r(\mathbf{Z}^k)$ is the $r$-th singular value of the $k$-th approximation $\mathbf{Z}^k$ and $c$ is a constant that does not depend on any unknown parameter. Also, $\omega_r^k \in \partial g(\xi_r^k)$ and $\partial g(\xi_r^k)$ is the set of supergradients of the function $g()$ at $\xi_r^k$. For all the concave penalties used in our paper, the supergradient is unique, thus $\omega_r^k = \partial g(\xi_r^k)$. Therefore, $g(\mathbf{Z})= \sum_{r}g(\xi_r(\mathbf{Z}))$ can be majorized as follows: \begin{equation}\label{eq9} \begin{aligned} g(\mathbf{Z}) &= \sum_{r}g(\xi_r(\mathbf{Z}))\\
&\leq \sum_{r}\omega_{r}^k \xi_r(\mathbf{Z}) + c\\
\omega_r^k &= \partial g(\xi_r(\mathbf{Z}^k)). \end{aligned} \end{equation}
\subsubsection*{The majorization of $f(\mathbf{\Theta}) + \lambda g(\mathbf{Z})$} To summarize the above results, $f(\mathbf{\Theta}) + \lambda g(\mathbf{Z})$ has been majorized to the following function. \begin{equation}\label{eq10} \begin{aligned}
& \frac{L}{2} ||\mathbf{\Theta} - \mathbf{H}^k||_F^2 + \lambda \sum_{r}\omega_{r}^k \xi_r(\mathbf{Z}) + c\\
\mathbf{\Theta} &= \mathbf{1}\bm{\mu}^{\text{T}} + \mathbf{Z} \\
\mathbf{H}^k &= \mathbf{\Theta}^k - \frac{1}{L}(\mathbf{Q} \odot \nabla f(\mathbf{\Theta}^k)) \\
\omega_r^k &= \partial g(\xi_r(\mathbf{Z}^k))\\
\mathbf{1}^{\text{T}}\mathbf{Z} &= \mathbf{0}. \end{aligned} \end{equation}
\subsection{Block coordinate descent} We optimize $\bm{\mu}$, $\mathbf{Z}$ and $\sigma^2$ alternatingly while fixing the other parameters. However, the updates of $\bm{\mu}$ and $\mathbf{Z}$ depend on solving the majorized problem in equation (\ref{eq10}) rather than solving the original problem in equation (\ref{eq5}). Because of the MM principle, this step will also monotonically decrease the original loss function in equation (\ref{eq5}).\\
\subsubsection*{Updating $\bm{\mu}$} The analytical solution of $\bm{\mu}$ in equation (\ref{eq10}) is simply the column mean of $\mathbf{H}^k$, $\bm{\mu} = \frac{1}{I} (\mathbf{H}^k)^{\text{T}} \mathbf{1}$.\\
\subsubsection*{Updating $\mathbf{Z}$}
After deflating the offset term $\bm{\mu}$, the loss function in equation (\ref{eq10}) becomes $\frac{L}{2} ||\mathbf{Z} - \mathbf{J} \mathbf{H}^k||_F^2 + \lambda \sum_{r}\omega_{r}^k \xi_r$, in which $\mathbf{J} = \mathbf{I} - \frac{1}{I} \mathbf{1} \mathbf{1}^{\text{T}}$ is the column centering matrix. The solution of the resulting problem is equivalent to the proximal operator of the weighted sum of singular values, which has an analytical form solution \cite{lu2015generalized}. Suppose $\mathbf{USV}^{\text{T}} = \mathbf{J} \mathbf{H}^k$ is the SVD of $\mathbf{J} \mathbf{H}^k$; then the analytical form solution of $\mathbf{Z}$ is $\mathbf{Z} = \mathbf{US}_{\omega \lambda /L}\mathbf{V}^{\text{T}}$, in which $\mathbf{S}_{\omega \lambda /L} = \text{Diag}\{(s_{rr}-\lambda \omega_r /L)_{+}\}$ and $s_{rr}$ is the $r$-th diagonal element in $\mathbf{S}$.\\
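A minimal sketch (ours, in Python/NumPy) of this update: column-center $\mathbf{H}^k$, compute its SVD, and shrink the singular values by $\lambda\omega_r/L$ with truncation at zero. It relies on the fact that, for the concave penalties considered here, the weights $\omega_r$ are non-decreasing when the singular values are sorted in decreasing order, which is required for the analytical solution of \cite{lu2015generalized}:
\begin{verbatim}
import numpy as np

def update_Z(Hk, omega, lam, L):
    # Weighted singular value thresholding for the majorized subproblem:
    # min_Z (L/2)||Z - J Hk||_F^2 + lam * sum_r omega_r xi_r(Z).
    JHk = Hk - Hk.mean(axis=0, keepdims=True)  # column centering, J Hk
    U, s, Vt = np.linalg.svd(JHk, full_matrices=False)
    s_new = np.maximum(s - lam * np.asarray(omega) / L, 0.0)
    return (U * s_new) @ Vt                    # U Diag(s_new) V'
\end{verbatim}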
\subsubsection*{Updating $\sigma^2$}
By setting the gradient of $f(\mathbf{\Theta},\sigma^2)$ in equation (\ref{eq5}) with respect to $\sigma^2$ to be 0, we have the following analytical solution of $\sigma^2$, $\sigma^2= \frac{1}{||\mathbf{Q}_2||_0} ||\mathbf{Q}_2 \odot (\mathbf{X}_2 - \mathbf{\Theta}_2)||_F^2$. When no low rank estimation of $\mathbf{Z}$ can be achieved, the constructed model is close to a saturated model and the estimated $\hat{\sigma}^2$ is close to 0. In that case, when $\hat{\sigma}^2<0.05$, the algorithm stops and gives a warning that a low rank estimation has not been achieved.\\
\subsubsection*{Initialization and stopping criteria} Random initialization is used. All the elements in $\mathbf{Z}^0$ are sampled from the standard uniform distribution, $\bm{\mu}^0$ is set to 0 and $(\sigma^2)^0$ is set to 1. The relative change of the objective value is used as the stopping criteria. Pseudocode of the algorithm described above is shown in Algorithm \ref{alg1}. $\epsilon_f$ is the tolerance of relative change of the loss function.\\
\begin{algorithm}[htb]
\caption{A MM algorithm for fitting the GSCA model with concave penalties.}
\label{alg1}
\begin{algorithmic}[1]
\Require
$\mathbf{X}_1$, $\mathbf{X}_2$, penalty, $\lambda$, $\gamma$;
\Ensure
$\hat{\bm{\mu}}$, $\hat{\mathbf{Z}}$, $\hat{\sigma}^2$;
\State Compute $\mathbf{Q}_1$, $\mathbf{Q}_2$ for missing values in $\mathbf{X}_1$ and $\mathbf{X}_2$, and $\mathbf{Q} = [\mathbf{Q}_1 ~ \mathbf{Q}_2]$;
\State Initialize $\bm{\mu}^0$, $\mathbf{Z}^0$, $(\sigma^2)^0$;
\State $k = 0$;
\While{$(f^{k-1}-f^{k})/f^{k-1}>\epsilon_f$}
\State $\nabla f_1(\mathbf{\Theta}_1^k) = \phi(\mathbf{\Theta}_1^k) - \mathbf{X}_1$; $\nabla f_2(\mathbf{\Theta}_2^k) = \frac{1}{(\sigma^2)^k} (\mathbf{\Theta}_2^k - \mathbf{X}_2)$;
\State $\nabla f(\mathbf{\Theta}^k) = [\nabla f_1(\mathbf{\Theta}_1^k) ~ \nabla f_2(\mathbf{\Theta}_2^k)]$;
\State $L_k=\text{max}(0.25,1/(\sigma^2)^k)$;
\State $\mathbf{H}^{k} = \mathbf{\Theta}^{k}- \frac{1}{L_{k}} (\mathbf{Q} \odot \nabla f(\mathbf{\Theta}^{k}))$;
\State $\omega_r^k = \partial g(\xi_r(\mathbf{Z}^k))$;
\State $\bm{\mu}^{k+1} = \frac{1}{I} (\mathbf{H}^{k})^{\text{T}} \mathbf{1}$;
\State $\mathbf{USV}^{\text{T}} = \mathbf{J}\mathbf{H}^{k}$;
\State $\mathbf{S}_{\lambda \omega /L_{k}} = \text{Diag}\{ (s_{rr} - \lambda \omega_r^k /L_{k})_{+}\}$;
\State $\mathbf{Z}^{k+1} = \mathbf{US}_{\lambda \omega/L_k}\mathbf{V}^{\text{T}}$;
\State $\mathbf{\Theta}^{k+1} = \mathbf{1}(\bm{\mu}^{k+1})^{\text{T}} + \mathbf{Z}^{k+1}$;
\State $[\mathbf{\Theta}_1^{k+1} ~ \mathbf{\Theta}_2^{k+1}] = \mathbf{\Theta}^{k+1}$;
\State $(\sigma^2)^{k+1} = \frac{1}{||\mathbf{Q}_2||_0} ||\mathbf{Q}_2 \odot (\mathbf{X}_2 - \mathbf{\Theta}_2^{k+1})||_F^2$;
\State $k=k+1$;
\EndWhile
\end{algorithmic} \end{algorithm}
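To make the pseudocode concrete, the following compact Python/NumPy sketch (ours; it is not the MATLAB implementation used for the experiments) instantiates Algorithm \ref{alg1} for the GDP penalty with $g(\eta)=\log(1+\eta/\gamma)$, so that the tuning parameter $\lambda$ enters only through $\lambda g(\mathbf{Z})$ as in equation (\ref{eq5}), and assumes complete data ($\mathbf{Q}$ all ones):
\begin{verbatim}
import numpy as np

def phi(T):                                     # inverse logit link
    return 1.0 / (1.0 + np.exp(-T))

def gsca_gdp(X1, X2, lam, gamma=1.0, tol=1e-8, max_iter=10000, seed=0):
    I, J1 = X1.shape
    X = np.hstack([X1, X2])
    rng = np.random.default_rng(seed)
    mu = np.zeros(X.shape[1])                   # column offsets
    Z = rng.random(X.shape)                     # U(0,1) start, as in the text
    sZ = np.linalg.svd(Z, compute_uv=False)     # singular values of Z^k
    sigma2, f_old, eps = 1.0, np.inf, 1e-12
    for _ in range(max_iter):
        Theta = mu + Z
        grad = np.hstack([phi(Theta[:, :J1]) - X1,
                          (Theta[:, J1:] - X2) / sigma2])
        L = max(0.25, 1.0 / sigma2)             # bound on the second derivative
        H = Theta - grad / L                    # majorization target H^k
        mu = H.mean(axis=0)
        U, s, Vt = np.linalg.svd(H - mu, full_matrices=False)
        omega = 1.0 / (gamma + sZ)              # supergradient of log(1 + eta/gamma)
        sZ = np.maximum(s - lam * omega / L, 0.0)
        Z = (U * sZ) @ Vt                       # weighted singular value thresholding
        Theta1, Theta2 = (mu + Z)[:, :J1], (mu + Z)[:, J1:]
        sigma2 = np.mean((X2 - Theta2) ** 2)
        P = phi(Theta1)
        f = (-np.sum(X1 * np.log(P + eps) + (1 - X1) * np.log(1 - P + eps))
             + np.sum((X2 - Theta2) ** 2) / (2 * sigma2)
             + 0.5 * X2.size * np.log(2 * np.pi * sigma2)
             + lam * np.sum(np.log(1.0 + sZ / gamma)))
        if abs(f_old - f) / abs(f_old) < tol:
            break
        f_old = f
    return mu, Z, sigma2
\end{verbatim}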
\section{Simulation} To see how well the GSCA model is able to reconstruct data generated according to the model, we do a simulation study with characteristics similar to those of a typical empirical data set. We first simulate the imbalanced binary $\mathbf{X}_1$ and quantitative $\mathbf{X}_2$ following the GSCA model with the logit link and low signal-to-noise ratio (SNR). After that, we evaluate the GSCA model with respect to 1) the quality of the reconstructed low rank structure from the model, and 2) the recovery of the true number of dimensions.\\
\subsection{Data generating process}
Motivated by \cite{davenport20141}, we define the SNR for generating binary data according to the latent variable interpretation of the generalized linear models of binary data. Elements in $\mathbf{X}_1$ are independent and indirect binary observations of the corresponding elements in an underlying quantitative matrix $\mathbf{X}_1^{\ast}$($I\times J_1$), $x_{1ij} = 1$ if $x_{1ij}^{\ast}>0$ and $x_{1ij} = 0$ otherwise. $\mathbf{X}_1^{\ast}$ can be expressed as $\mathbf{X}_1^{\ast} = \mathbf{\Theta}_1 + \mathbf{E}_1$, in which $\mathbf{\Theta}_1 = \mathbf{1}\bm{\mu}_1^{\text{T}} + \mathbf{AB}_1^{\text{T}}$, and elements in $\mathbf{E}_1$ follow the standard logistic distribution, $\epsilon_{1ij} \sim \text{Logistic}(0,1)$. The SNR for generating binary data $\mathbf{X}_1$ is defined as $\text{SNR}_1 = ||\mathbf{AB}_1^{\text{T}}||_{F}^2/||\mathbf{E}_1||_{F}^2$. Assume the quantitative $\mathbf{X}_2$ is simulated as $\mathbf{X}_2 = \mathbf{\Theta}_2 + \mathbf{E}_2$, in which $\mathbf{\Theta}_2 = \mathbf{1}\bm{\mu}_2^{\text{T}} + \mathbf{AB}_2^{\text{T}}$ and elements in $\mathbf{E}_2$ follow a normal distribution with 0 mean and $\sigma^2$ variance, $\epsilon_{2ij} \sim N(0,\sigma^2)$. The SNR for generating quantitative $\mathbf{X}_2$ is defined as $\text{SNR}_2 = ||\mathbf{AB}_2^{\text{T}}||_{F}^2/||\mathbf{E}_2||_{F}^2$.\\
After the definition of the SNR, we simulate the coupled binary $\mathbf{X}_1$ and quantitative $\mathbf{X}_2$ as follows. $\bm{\mu}_1$ represents the logit transform of the marginal probabilities of binary variables and $\bm{\mu}_2$ represents the mean of the marginal distributions of quantitative variables. They are simulated according to the characteristics of the real biological data sets. The score matrix $\mathbf{A}$ and loading matrices $\mathbf{B}_1$, $\mathbf{B}_2$ are simulated as follows. First, we express $\mathbf{A}\mathbf{B}_1^{\text{T}}$ and $\mathbf{A}\mathbf{B}_2^{\text{T}}$ in an SVD form as $\mathbf{A}\mathbf{B}_1^{\text{T}} = \mathbf{U}\mathbf{D}_1\mathbf{V}_1^{\text{T}}$ and $\mathbf{A}\mathbf{B}_2^{\text{T}} = \mathbf{U}\mathbf{D}_2\mathbf{V}_2^{\text{T}}$, in which $\mathbf{U}^{\text{T}}\mathbf{U} = \mathbf{I}_R$, $\mathbf{D}_1$ and $\mathbf{D}_2$ are diagonal matrices, $\mathbf{V}_1^{\text{T}}\mathbf{V}_1 = \mathbf{I}_R$ and $\mathbf{V}_2^{\text{T}}\mathbf{V}_2 = \mathbf{I}_R$. All the elements in $\mathbf{U}$, $\mathbf{V}_1$ and $\mathbf{V}_2$ are independently sampled from the standard normal distribution. Then, $\mathbf{U}$, $\mathbf{V}_1$ and $\mathbf{V}_2$ are orthogonalized by the QR algorithm. The diagonal matrix $\mathbf{D}$($R\times R$) is simulated as follows. $R$ elements are sampled from the standard normal distribution, and their absolute values are sorted in decreasing order. To satisfy the pre-specified $\text{SNR}_1$ and $\text{SNR}_2$, $\mathbf{D}$ is scaled by positive scalars $c_1$ and $c_2$ as $\mathbf{D}_1 = c_1\mathbf{D}$ and $\mathbf{D}_2 = c_2\mathbf{D}$. Then, binary elements in $\mathbf{X}_1$ are sampled from the Bernoulli distribution with corresponding parameter $\phi(\theta_{1ij})$, in which $\phi()$ is the inverse logit function and $\mathbf{\Theta}_1 = \mathbf{1}\bm{\mu}_1^{\text{T}} + \mathbf{AB}_1^{\text{T}}$. The quantitative data set $\mathbf{X}_2$ is generated as $\mathbf{X}_2 = \mathbf{\Theta}_2 + \mathbf{E}_2$, in which $\mathbf{\Theta}_2 = \mathbf{1}\bm{\mu}_2^{\text{T}} + \mathbf{AB}_2^{\text{T}}$ and elements in $\mathbf{E}_2$ are sampled from $N(0,\sigma^2)$. Take $\mathbf{Z} = \mathbf{A}\mathbf{B}^{\text{T}}$, $\mathbf{B} = [\mathbf{B}_1 ~ \mathbf{B}_2]$. In order to make $\mathbf{1}^{\text{T}}\mathbf{Z} = \mathbf{0}$, we further deflate the column offset of $\mathbf{Z}$ to the simulated $\bm{\mu}$, $\bm{\mu} = [\bm{\mu}_1^{\text{T}} ~ \bm{\mu}_2^{\text{T}}]^{\text{T}}$. This step will not change the value of $\mathbf{\Theta}_1$ and $\mathbf{\Theta}_2$, and thus does not affect the simulation of $\mathbf{X}_1$ and $\mathbf{X}_2$.\\
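A sketch (ours, in Python/NumPy) of this data generating process is given below. The offsets are drawn from a standard normal distribution as a placeholder for the empirical CNA logits used later, and the binary data are generated in the equivalent latent-variable form $x_{1ij}=1$ if $\theta_{1ij}+\epsilon_{1ij}>0$; the scalars $c_1,c_2$ are available in closed form because $||\mathbf{U}\,\text{Diag}(c\mathbf{d})\mathbf{V}^{\text{T}}||_F^2 = c^2||\mathbf{d}||^2$ for orthonormal $\mathbf{U}$ and $\mathbf{V}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
I, J1, J2, R = 160, 410, 1000, 10
SNR1, SNR2, sigma2 = 1.0, 1.0, 1.0

def orth(M):                                   # orthonormal columns via QR
    return np.linalg.qr(M)[0]

U  = orth(rng.standard_normal((I, R)))
V1 = orth(rng.standard_normal((J1, R)))
V2 = orth(rng.standard_normal((J2, R)))
d  = np.sort(np.abs(rng.standard_normal(R)))[::-1]   # decreasing |N(0,1)|

E1 = rng.logistic(0.0, 1.0, (I, J1))           # latent logistic noise
E2 = rng.normal(0.0, np.sqrt(sigma2), (I, J2))
c1 = np.sqrt(SNR1 * np.sum(E1 ** 2) / np.sum(d ** 2))
c2 = np.sqrt(SNR2 * np.sum(E2 ** 2) / np.sum(d ** 2))

mu1 = rng.standard_normal(J1)   # placeholder for the empirical CNA logits
mu2 = rng.standard_normal(J2)
Theta1 = mu1 + (U * (c1 * d)) @ V1.T
Theta2 = mu2 + (U * (c2 * d)) @ V2.T
X1 = (Theta1 + E1 > 0).astype(float)  # equals a Bernoulli(phi(Theta1)) draw
X2 = Theta2 + E2
\end{verbatim}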
\subsection{Evaluation metric and model selection}
As for simulated data sets, the true parameters $\mathbf{\Theta} = [\mathbf{\Theta}_1~\mathbf{\Theta}_2]$, $\bm{\mu} = [\bm{\mu}_1^{\text{T}} ~ \bm{\mu}_2^{\text{T}}]^{\text{T}}$ and $\mathbf{Z} = \mathbf{A}\mathbf{B}^{\text{T}}$ are available. Therefore, the generalization error of the constructed model can be evaluated by comparing the true parameters and their model estimates. Thus, the evaluation metric is defined as the relative mean squared error (RMSE) of the model parameters. The RMSE of estimating $\mathbf{\Theta}$ is defined as $\text{RMSE}(\mathbf{\Theta}) = ||\mathbf{\Theta}-\hat{\mathbf{\Theta}}||_F^2/||\mathbf{\Theta}||_F^2$, where $\mathbf{\Theta}$ represents the true parameter and $\hat{\mathbf{\Theta}}$ its GSCA model estimate. The RMSEs of $\bm{\mu}$ and $\mathbf{Z}$ are expressed as $\text{RMSE}(\bm{\mu})$ and $\text{RMSE}(\mathbf{Z})$, and they are defined in the same way as for $\mathbf{\Theta}$.\\
For real data sets, K-fold missing value based cross validation (CV) is used to estimate the generalization error of the constructed model. To make the prediction of the left out fold elements independent of the constructed model based on the remaining folds, the data is partitioned into K folds of elements which are selected in a diagonal style rather than row wise from $\mathbf{X}_1$ and $\mathbf{X}_2$ respectively, similar to the leave out patterns described by Wold \cite{wold1978cross, bro2008cross}. The test set elements of each fold in $\mathbf{X}_1$ and $\mathbf{X}_2$ are taken as missing values, and the remaining data are used to construct a GSCA model. After the estimates $\hat{\mathbf{\Theta}}$ and $\hat{\sigma}^2$ are obtained from the constructed GSCA model, the negative log likelihood of using $\hat{\mathbf{\Theta}}$, $\hat{\sigma}^2$ to predict the missing elements (left out fold) is recorded. This negative log likelihood is scaled by the number of missing elements. This process is repeated K times until all the K folds have been left out once. The mean of the K scaled negative log likelihoods is taken as the CV error.\\
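A minimal sketch (ours) of one way to build such diagonal-style folds; the exact leave-out pattern of Wold may differ in detail, but this variant guarantees that every row and every column contributes elements to every fold:
\begin{verbatim}
import numpy as np

def diagonal_folds(I, J, K):
    # Element (i, j) goes to fold (i + j) mod K, so each fold traces
    # diagonals and every row/column contributes to every fold.
    rows, cols = np.indices((I, J))
    return (rows + cols) % K

folds = diagonal_folds(6, 8, K=4)
Q_train = (folds != 0).astype(float)   # fold 0 left out: weight matrix Q
\end{verbatim}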
When we define $\mathbf{X}=[\mathbf{X}_1 ~ \mathbf{X}_2]$ and $J=J_1+J_2$, the penalty term $\lambda g(\mathbf{Z})$ is not invariant to the number of non-missing elements in $\mathbf{X}$, as the joint loss function (equation (\ref{eq4})) is the sum of the log likelihoods for fitting all the non-missing elements in the data $\mathbf{X}$. Therefore, we effectively follow a similar approach as Fan \cite{fan2001variable} by adjusting the penalty strength parameter $\lambda$ for the relative number of observations. By setting one fold of elements to be missing during the CV process, $\lambda||\mathbf{X}||_0/(I\times J)$ rather than $\lambda$ is used as the amount of penalty. During the K-fold CV process, a warm start strategy, using the results of the previously constructed model as the initialization of the next model, is applied. In this way, the K-fold CV can be greatly accelerated. The speed of the GSCA models with different penalties using different stopping criteria, and the corresponding CV procedure, are fully characterized in Table S1. All the computations are performed on a laptop with an i5-5300U CPU, 8GB RAM, a 64-bit Windows 10 system and MATLAB R2015a.\\
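The adjustment itself is a one-liner; the sketch below assumes missing elements are encoded as NaN.
\begin{verbatim}
import numpy as np

def effective_lambda(lam, X):
    # Scale the penalty by the fraction of observed (non-NaN) elements,
    # i.e., use lambda * ||X||_0 / (I * J) during cross validation.
    return lam * np.count_nonzero(~np.isnan(X)) / X.size
\end{verbatim}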
In the model selection process, the tuning parameter $\lambda$ and the hyper-parameters ($q$ in $L_{q}$ and $\gamma$ in SCAD and GDP) can be selected by a grid search. However, previous work using these penalty functions in the supervised learning context \cite{fu1998penalized,fan2001variable,armagan2013generalized} and our experiments have shown that the results are not very sensitive to the selection of these hyper-parameters, and thus a default value can be set. On the other hand, the selection of the tuning parameter $\lambda$ does have a significant effect on the results, and should be optimized by grid search.\\
\subsection{Experiments} \subsubsection{Overfitting of the GSCA model with a fixed rank and no penalty} The real data sets from Section 5 are used to show how the GSCA model with a fixed rank and no penalty will overfit the data. The algorithm (details are in the supplementary Section 1) used to fit the GSCA model (with an exact low rank constraint and orthogonality constraint $\mathbf{A}^{\text{T}}\mathbf{A} = \mathbf{I}$) is a modification of the algorithm developed in Section 3. GSCA models with three components are fitted using the stopping criteria $\epsilon_f = 10^{-5}$ and $\epsilon_f=10^{-8}$. Exactly the same initialization is used for these two models. As shown in Fig.~2, different stopping criteria can greatly affect the estimated $\hat{\mathbf{B}}_1$ from the GSCA models. Furthermore, the number of iterations to reach convergence increases from 141 to 23991. A similar phenomenon, in which some estimated parameters tend to diverge to plus or minus infinity, has been observed in the logistic linear regression model and the logistic PCA model \cite{de2006principal, song2017principal}. In logistic linear regression, the estimated coefficients corresponding to the directions in which two classes are linearly separable tend to go to plus or minus infinity. The overfitting issue of the GSCA model with an exact low rank constraint can be interpreted in the same way by taking the columns of the score matrix $\mathbf{A}$ as the latent variables and the loading matrix $\mathbf{B}_1$ as the coefficients to fit the binary $\mathbf{X}_1$. This result suggests that if an exact low rank constraint is preferred in the GSCA model, an extra scale penalty should be added on $\mathbf{B}_1$ to avoid overfitting.\\
\begin{figure}\label{Fig:2}
\end{figure}
\subsubsection{Comparing the generalization errors of the GSCA models with nuclear norm and concave penalties} To evaluate the performance of the GSCA model in recovering the underlying structure, we set up a realistic simulation (strongly imbalanced binary data and low SNR) as follows. The simulated $\mathbf{X}_1$ and $\mathbf{X}_2$ have the same size as the real data sets in Section 5, $I=160$, $J_1=410$, $J_2 = 1000$. The logit transform of the empirical marginal probabilities of the CNA data set in Section 5 is set as $\bm{\mu}_1$. Elements in $\bm{\mu}_2$ are sampled from the standard normal distribution. The simulated low rank is set to $R=10$; $\sigma^2$ is set to 1; $\text{SNR}_1$ and $\text{SNR}_2$ are set to 1. After the simulation of $\mathbf{X}_1$, there are two columns containing only ``0'' elements, which are removed as they provide no information (no variation).\\
As the GSCA model with the nuclear norm penalty is a convex problem, a global optimum can be obtained. The nuclear norm penalty is therefore used as the baseline in the comparison with other penalties. An interval from $\lambda_0$, which is large enough to achieve an estimated rank of at most 1, to $\lambda_{t}$, which is small enough to achieve an estimated rank of 159, is selected based on low precision models ($\epsilon_f=10^{-2}$). 30 log-spaced values of $\lambda$ are selected from the interval $[\lambda_{t},\lambda_{0}]$. The convergence criterion is set as $\epsilon_f = 10^{-8}$. The results are shown in Fig.~3. With decreasing $\lambda$, the estimated rank of $\hat{\mathbf{Z}}$ increases from 0 to 159, and the estimated $\hat{\sigma}^2$ decreases from 2 to close to 0. The minimum $\text{RMSE}(\mathbf{\Theta})$ of 0.184 (the corresponding $\text{RMSE}(\mathbf{\Theta}_1)=0.229$, $\text{RMSE}(\mathbf{\Theta}_2)=0.054$, $\text{RMSE}(\bm{\mu}) = 0.072$ and $\text{RMSE}(\mathbf{Z})=0.446$) can be achieved at $\lambda=38.3$, which corresponds to $\text{rank}(\hat{\mathbf{Z}})=52$ and $\hat{\sigma}^2=0.9271$. There are sharp transitions in all three subplots near the point $\lambda=40$. The reason is that when the penalty is not large enough, the estimated rank becomes 159, and the constructed GSCA model is almost a saturated model. Thus the model has a high generalization error and the estimated $\hat{\sigma}^2$ becomes close to 0. Given that we only have the indirect binary observation $\mathbf{X}_1$ and the highly noisy observation $\mathbf{X}_2$ of the underlying structure $\mathbf{\Theta}$, the performance of the GSCA model with nuclear norm penalty is reasonable. However, the results can be greatly improved by using concave penalties.\\
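Generating the grid of tuning parameters is straightforward; the endpoints below are illustrative placeholders for the $\lambda_0$ and $\lambda_t$ found from the low precision fits.
\begin{verbatim}
import numpy as np

lam_0, lam_t = 1e3, 1e-1  # illustrative endpoints from low precision fits
lams = np.logspace(np.log10(lam_0), np.log10(lam_t), 30)  # 30 log-spaced values
\end{verbatim}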
\begin{figure}\label{Fig:3}
\end{figure}
For the concave penalties, different values of the hyper-parameters, $q$ in $L_q$ and $\gamma$ in SCAD and GDP, are selected according to their thresholding properties. For each value of the hyper-parameter, values of the tuning parameter $\lambda$ are selected in the same manner as described above. The minimum $\text{RMSE}(\mathbf{\Theta})$ achieved and the corresponding $\text{RMSE}(\bm{\mu})$ and $\text{RMSE}(\mathbf{Z})$ for different values of the hyper-parameters of the GSCA models with different penalty functions are shown in Fig.~4. The relationships between the RMSEs, $\lambda$ and the hyper-parameters for the GSCA models with $L_q$, SCAD and GDP penalty functions are fully characterized in Fig.~S2, Fig.~S3 and Fig.~S4 respectively. As shown in Fig.~4, all GSCA models with concave penalties can achieve much lower RMSEs in estimating $\mathbf{\Theta}$, $\bm{\mu}$ and $\mathbf{Z}$ compared to the convex nuclear norm penalty ($L_{q:q=1}$ in the plot). Among the three concave penalties used, $L_{q}$ and GDP have better performance.\\
\begin{figure}\label{Fig:4}
\end{figure}
If we had access to the full information, i.e., the underlying quantitative data $\mathbf{X}_1^{\ast}$ rather than the binary observation $\mathbf{X}_1$, the SCA model on $\mathbf{X}_1^{\ast}$ and $\mathbf{X}_2$ would simply be a PCA model on $[\mathbf{X}_1^{\ast} ~ \mathbf{X}_2]$. From this model, we can get an estimation of $\mathbf{\Theta}$, $\bm{\mu}$ and $\mathbf{Z}$. We compared the results derived from the SCA model on the full information with those from the GSCA models with nuclear norm, $L_{q:q=0.1}$, SCAD ($\gamma=5$) and GDP ($\gamma=1$) penalties. All the models are selected to achieve the minimum $\text{RMSE}(\mathbf{\Theta})$. The RMSEs of estimating $\mathbf{\Theta}$, $\mathbf{\Theta}_1$, $\mathbf{\Theta}_2$, $\bm{\mu}$ and $\mathbf{Z}$ and the rank of the estimated $\hat{\mathbf{Z}}$ from the different models are shown in Table 2. Here we can see that the GSCA models with $L_{q:q=0.1}$ and GDP ($\gamma=1$) penalties have better performance on almost all criteria compared to the nuclear norm and SCAD penalties, and are even comparable with the SCA model on the full information. The singular values of the true $\mathbf{Z}$, of the estimated $\hat{\mathbf{Z}}$ from the above models and of the noise terms $\mathbf{E} = [\mathbf{E}_1~\mathbf{E}_2]$ are shown in Fig.~5. Only the first 15 singular values are shown, to give a higher-resolution view of the details. Since the $10$-th singular value of the simulated $\mathbf{Z}$ is smaller than the noise level, the best achievable rank estimation is 9. Both the $L_{q:q=0.1}$ and GDP ($\gamma=1$) penalties successfully find the correct rank 9, and they give a very good approximation of the first 9 singular values of $\mathbf{Z}$. On the other hand, the nuclear norm penalty shrinks all the singular values too much. Furthermore, the SCAD penalty overestimates the first three singular values and therefore shrinks all the other singular values too much. These results are easily understood if the thresholding properties in Fig.~2 are taken into account. Both the $L_{q}$ and the GDP penalties have very good performance in this simulation experiment.\\
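Since the full-information benchmark is just a PCA of the concatenated, column-centered data, it can be sketched in a few lines; this is an illustration of that baseline, not the authors' code.
\begin{verbatim}
import numpy as np

def sca_full_information(X1_star, X2, R):
    # PCA on the concatenated data: mu is the column mean and Z the rank-R
    # truncated SVD of the centered matrix, so Theta = 1 mu^T + Z.
    X = np.hstack([X1_star, X2])
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    Z = U[:, :R] * s[:R] @ Vt[:R]
    return mu, Z, mu + Z
\end{verbatim}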
\begin{table}[htbp] \centering \caption*{\label{tab2} Table 2: The RMSEs of estimating $\mathbf{\Theta}$, $\bm{\mu}$ and $\mathbf{Z}$ and the rank of the estimated $\hat{\mathbf{Z}}$ from different models.}
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
& $\text{RMSE}(\mathbf{\Theta})$ & $\text{RMSE}(\mathbf{\Theta}_1)$ & $\text{RMSE}(\mathbf{\Theta}_2)$ & $\text{RMSE}(\bm{\mu})$ & $\text{RMSE}(\mathbf{Z})$ & $\text{rank}(\hat{\mathbf{Z}})$\\
\hline
$L_{q:q=1}$ & 0.1840 & 0.2288 & 0.0537 & 0.0724 & 0.4456 & 52 \\
$L_{q:q=0.1}$ & 0.0598 & 0.0682 & 0.0353 & 0.0168 & 0.1606 & 9\\
SCAD($\gamma=5$) & 0.1093 & 0.1334 & 0.0395 & 0.0376 & 0.2777 & 24 \\
GDP($\gamma=1$) & 0.0593 & 0.0675 & 0.0354 & 0.0160 & 0.1610 & 9 \\
full information & 0.0222 & 0.0675 & 0.0354 & 0.0030 & 0.0674 & 9 \\
\hline \end{tabular} \end{table}
\begin{figure}\label{Fig:5}
\end{figure}
\subsubsection{Comparing the GSCA model with GDP penalty and the iClusterPlus model} A detailed theoretical comparison of our method with the iClusterPlus model \cite{mo2013pattern} and a related work \cite{wu2015fast} can be found in the supplementary Section 3. We then compared our GSCA model with GDP penalty to the iClusterPlus model on the simulated data sets. The parameters for the GSCA model with GDP penalty are the same as described above. The running time is 60.61s when $\epsilon_f=10^{-8}$, and 9.98s when $\epsilon_f=10^{-5}$. For the iClusterPlus model, 9 latent variables are specified. The tuning parameters of the lasso-type constraints on the data specific coefficient matrices are set to 0. The default convergence criterion is used, that is, the maximum of the absolute changes of the estimated parameters in two subsequent iterations is less than $10^{-4}$. The running time of the iClusterPlus model is close to 3 hours. The constructed iClusterPlus model provides the estimates of the column offset $\hat{\bm{\mu}}$, the common latent variables $\hat{\mathbf{A}}$, and the data set specific coefficient matrices $\hat{\mathbf{B}}_1$ and $\hat{\mathbf{B}}_2$. The estimated $\hat{\mathbf{Z}}$ and $\hat{\mathbf{\Theta}}$ are computed in the same way as defined in the model section. The RMSEs in estimating $\mathbf{\Theta}$, $\bm{\mu}$ and $\mathbf{Z}$ for iClusterPlus are 2.571, 2.473 and 3.060 respectively. Compared to the results from the GSCA models in Table 2, iClusterPlus is unable to provide good results on the simulated data sets. Fig.~S5 compares the estimated $\hat{\bm{\mu}}_1$ from the GSCA model with GDP penalty and the iClusterPlus model. As shown in Fig.~S5 (right), the iClusterPlus model is unable to estimate the offset $\bm{\mu}$ correctly. Many elements of the estimated $\hat{\bm{\mu}}_1$ are exactly 0, which corresponds to an estimated marginal probability of 0.5. In addition, as shown in Fig.~6 (left), the singular values of the estimated $\hat{\mathbf{Z}}$ from the iClusterPlus model are clearly overestimated. These undesired results from the iClusterPlus model are due mainly to the imbalance of the simulated binary data set. If the offset term $\bm{\mu}_1$ in the simulation is set to 0, which corresponds to a balanced binary data simulation, and all the other parameters are fixed in the same way as in the above simulation, the results of iClusterPlus and the GSCA model with GDP penalty are more comparable. In that case the RMSEs of estimating $\mathbf{\Theta}$ and $\mathbf{Z}$ in the GSCA model with GDP penalty are 0.071 and 0.091 respectively, while the RMSEs of the iClusterPlus model are 0.107 and 0.142 respectively. As shown in Fig.~6 (right), the singular values of the estimated $\hat{\mathbf{Z}}$ from the iClusterPlus model are much more accurate compared to the imbalanced case. However, iClusterPlus still overestimates the singular values compared to the GSCA model with GDP penalty. This phenomenon is related to the fact that an exact low rank constraint is also used in the iClusterPlus model. These results suggest that, compared to iClusterPlus, the GSCA model with GDP penalty is more robust to imbalanced binary data and has better performance in recovering the underlying structure in the simulation experiment.\\
\begin{figure}\label{Fig:6}
\end{figure}
\subsubsection{The performance of the GSCA model for simulations with different SNRs} In the following experiment we explore the performance of the GSCA model on simulated binary and quantitative data sets with varying noise levels. Equal SNR levels are used in the simulation of $\mathbf{X}_1$ and $\mathbf{X}_2$. 20 log-spaced SNR values are selected from the interval $[0.1, 100]$. Then we simulated the coupled binary data $\mathbf{X}_1$ and quantitative data $\mathbf{X}_2$ using the different SNRs in the same way as described above. During this process, except for the parameters $c_1$ and $c_2$, which are used to adjust the SNRs, all other parameters used in the simulation were kept the same. The GSCA models with GDP penalty ($\gamma=1$), $L_{q}$ penalty ($q=0.1$) and nuclear norm penalty, and the SCA model on the full information (defined above), are used in these simulation experiments. For the three GSCA models, the model selection process was done in the same way as described in the above experiment. The models with the minimum $\text{RMSE}(\mathbf{\Theta})$ are selected. As shown in Fig.~7, the GSCA models with the concave GDP and $L_{q}$ penalties always have better performance than the convex nuclear norm penalty, and they are comparable to the situation where the full information is available. With increasing SNR, the $\text{RMSE}(\mathbf{Z})$ derived from the GSCA model, which is used to evaluate the performance of the model in recovering the underlying low dimensional structure, first decreases to a minimum and then increases. As shown in the bottom center and bottom right panels of Fig.~7, this pattern is mainly caused by how $\text{RMSE}(\mathbf{Z}_1)$ changes with respect to the SNRs. Although this result contradicts the intuition that a larger SNR means higher quality data, it is in line with previous results on the logistic PCA model of binary data \cite{davenport20141}. In order to understand this effect, consider the S-shaped logistic curve, i.e., the plot of the function $\phi(\theta) = (1+\exp(-\theta))^{-1}$. This curve becomes almost flat when $|\theta|$ is very large. There is no resolution anymore in these flat regimes: a large deviation in $\theta$ has almost no effect on the logistic response. When the SNR becomes extremely large, the scale of the simulated parameter $\theta$ is very extreme; then even if we have a good estimate of the probability $\hat{\pi} = \phi(\hat{\theta})$, the scale of the estimated $\hat{\theta}$ can be far away from the simulated $\theta$. We refer to \cite{davenport20141} for a detailed interpretation of this phenomenon.\\
\begin{figure}\label{Fig:7}
\end{figure}
\subsubsection{Assessing the model selection procedure} The cross validation procedure and the cross validation error have been defined in the model selection section. The GSCA model with GDP penalty is used as an example to assess the model selection procedure. $\epsilon_f=10^{-5}$ is used as the stopping criterion for all the following experiments to save time. The values of $\lambda$ and $\gamma$ are selected in the same way as was described in Section 4.2. Fig.~8 shows the minimum $\text{RMSE}(\mathbf{\Theta})$ and minimum CV error achieved for different values of the hyper-parameter $\gamma$. The minimum CV error changes in a similar way as the minimum $\text{RMSE}(\mathbf{\Theta})$ with respect to the values of $\gamma$. However, taking into account the uncertainty of the estimated CV errors, the differences between the minimum CV errors for different $\gamma$ are very small. Thus, we recommend fixing $\gamma$ at 1, rather than using cross validation to select it. Furthermore, setting $\gamma = 1$ as the default value for the GDP penalty has a probabilistic interpretation, see \cite{armagan2013generalized}.\\
\begin{figure}\label{Fig:8}
\end{figure}
Whenever the GSCA model is used for exploratory data analysis, there is no need to select $\lambda$ explicitly. It is sufficient to find a proper value that achieves a two or three component GSCA model, in order to visualize the estimated score and loading matrices. If the goal is confirmatory data analysis, it is possible to select the tuning parameter $\lambda$ explicitly by the proposed cross validation procedure. Fig.~9 shows how the tuning parameter $\lambda$ affects the CV errors, $\text{RMSE}(\mathbf{\Theta})$ and the estimated ranks. The minimum CV error obtained is close to the Bayes error, which is the scaled negative log likelihood in the case where the true parameters $\mathbf{\Theta}$ and $\sigma^2$ are known. Even though some inconsistency exists between the CV error plot (Fig.~9, left) and the $\text{RMSE}(\mathbf{\Theta})$ plot (Fig.~9, center), the model corresponding to the minimum CV error achieves a very low $\text{RMSE}(\mathbf{\Theta})$ and a correct rank estimation (Fig.~9, right). Therefore, we suggest using the proposed CV procedure to select the value of $\lambda$ at which the minimum CV error is obtained. Finally, we fit a model on the full data set without missing elements, using the selected value of $\lambda$ and the outputs of the model with the minimum CV error as the initialization.\\
\begin{figure}\label{Fig:9}
\end{figure}
\section{Empirical illustration} \subsection{Real data set} The Genomic Determinants of Sensitivity in Cancer 1000 (GDSC1000) \cite{iorio2016landscape} contains 926 tumor cell lines with comprehensive measurements of point mutation, CNA, methylation and gene expression. We selected the binary CNA and quantitative gene expression measurements on the same cell lines (each cell line is a sample) as an example to demonstrate the GSCA model. To simplify the interpretation of the derived model, only the cell lines of three cancer types are included: BRCA (breast invasive carcinoma, 48 cell lines), LUAD (lung adenocarcinoma, 62 cell lines) and SKCM (skin cutaneous melanoma, 50 cell lines). The CNA data set has 410 binary variables. Each variable is a copy number region, in which ``1'' indicates the presence and ``0'' the absence of an aberration. Note that the CNA data is very imbalanced: only $6.66\%$ of the elements are ``1''. The empirical marginal probabilities of the binary CNA variables are shown in Fig.~S1. The quantitative gene expression data set contains 17,420 variables, from which the 1000 gene expression variables with the largest variance are selected. After that, the gene expression data is column centered and scaled by the standard deviation of each variable to make it more consistent with the assumptions of the GSCA model.\\
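The preprocessing of the expression block described above amounts to a variance filter followed by column standardization; a minimal sketch follows (illustrative, not the original pipeline).
\begin{verbatim}
import numpy as np

def preprocess_expression(X2_raw, n_keep=1000):
    # Keep the n_keep highest-variance columns, then column-center and
    # scale each remaining variable by its standard deviation.
    idx = np.argsort(X2_raw.var(axis=0))[::-1][:n_keep]
    X2 = X2_raw[:, idx]
    return (X2 - X2.mean(axis=0)) / X2.std(axis=0)
\end{verbatim}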
\subsection{Exploratory data analysis of the coupled CNA and gene expression data sets} We applied the GSCA model (with GDP penalty and $\gamma=1$) to the GDSC data set of 160 tumor cell lines that have been profiled for both binary CNA ($160 \times 410$) and quantitative gene expression ($160 \times 1000$). The results of the model selection (Fig.~S6) validate the existence of a low dimensional common structure between the CNA and gene expression data sets. For exploratory purposes, we will construct a three component model instead.\\
We first considered the score plot resulting from this GSCA model. The first two PCs show a clear clustering by cancer type (Fig.~10, left), and in some cases even subclusters (i.e. hormone-positive breast cancer, MITF-high melanoma). These results suggest that the GSCA model captures the relevant biology in these data. Interestingly, when we performed PCA on the gene expression data, we obtained score plots that were virtually identical to those resulting from the GSCA model (Fig.~S7, left; modified RV coefficient: 0.9998), suggesting that this biological relevance is almost entirely derived from the gene expression data.\\
\begin{figure}\label{Fig:10}
\end{figure}
We then wondered whether the GSCA model could leverage the gene expression data to help us gain insight into the CNA data. To test this, we first established how much insight could be gained from the CNA data in isolation. Fig.~S8 shows the scores and loadings of the first two components from a three component logistic PCA model \cite{de2006principal} applied to the CNA data. While these do seem to contain structure in the loading plot, we believe that they mostly explain technical characteristics of the data. For example, deletions and amplifications are almost perfectly separated from each other by the PC1=0 line in the loading plot (Fig.~S9). Additionally, the scores on PC1 are strongly associated with the number of copy number aberrations (i.e., with the number of ones) in a given sample (Fig.~S10). Finally, the clusters towards the left of the loading plot suggested two groups of correlated features, but these could trivially be explained by genomic position, that is, these features correspond to regions on the same chromosome arm, which are often completely deleted or amplified (Fig.~S11). Following these observations, we believe that a study of the CNA data in isolation provides little biological insight.\\
On the other hand, using the GSCA model's CNA loadings (Fig.~10, center), we could more easily relate the features to the biology. Let us focus on features with extreme values on PC1 and for which the corresponding chromosomal region contains a known driver gene. For example, the position of MYC amplifications in the loading plot indicates that MYC amplifications occur mostly in lung adenocarcinoma and breast cancer samples (Fig.~10, center; Fig.~S12). Similarly, ERBB2 amplifications occur mainly in breast cancer samples (Fig.~10, center; Fig.~S12). Finally, PTEN deletions were enriched in melanomas, though the limited size of the loading also indicates that they are not exclusive to melanomas (Fig.~10, center; Fig.~S12). Importantly, these three findings are in line with known biology \cite{akbani2015genomic,cancer2014comprehensive,cancer2012comprehensive} and hence exemplify how GSCA could be used to interpret the CNA data. Altogether, using the GSCA model, we were able to 1) capture the biological relevance in the gene expression data, and 2) leverage that biological relevance from the gene expression to gain a better understanding of the CNA data.\\
\section{Discussion} In this paper, we generalized the standard SCA model to explore the dependence between coupled binary and quantitative data sets. However, the GSCA model with an exact low rank constraint overfits the data, as some estimated parameters tend to diverge to plus or minus infinity. Therefore, concave penalties are introduced in the low rank approximation framework to achieve a low rank approximation and to mitigate the overfitting issues of the GSCA model. An efficient algorithm framework with analytical updates for all the parameters is developed to optimize the GSCA model with any of the concave penalties considered. All the concave penalties used in our experiments have better performance with respect to generalization error and estimated low rank of the constructed GSCA model compared to the nuclear norm penalty. Both the $L_{q}$ and GDP penalties with proper model selection can recover the simulated low rank structures almost exactly, using only the indirect binary observation $\mathbf{X}_1$ and the noisy quantitative observation $\mathbf{X}_2$. Furthermore, we have shown that the GSCA model outperforms the iClusterPlus model with respect to speed and accuracy of the estimation of the model parameters.\\
Why do the GSCA models with concave penalties have better performance? The exact low rank constraint thresholds the singular values in a hard manner and, therefore, only the largest $R$ singular values are kept. On the other hand, the nuclear norm penalty works in a soft manner, in which all the singular values are shrunk by the same amount $\lambda$. The thresholding properties of the concave penalties discussed in this paper lie between these two approaches. As $\mathbf{Z}=\mathbf{A}\mathbf{B}^{\text{T}}$ and $\mathbf{A}^{\text{T}}\mathbf{A}= \mathbf{I}_R$, the scale of the loadings is related to the scale of the singular values of $\mathbf{Z}$. Thus, we can shrink the singular values of $\mathbf{Z}$ to control the scale of the estimated loading matrices in an indirect way. The exact low rank constraint keeps the $R$ largest singular values but does not control the scale of the estimated singular values, leading to overfitting. On the other hand, the nuclear norm penalty shrinks all the singular values by the same amount $\lambda$, leading to biased estimates of the singular values. A concave penalty, like $L_q$ or GDP, achieves a balance in thresholding the singular values. Among the concave penalties we used in the experiments, the SCAD penalty does not work well in the simulation study. The reason is that the SCAD penalty does not shrink the large singular values, which therefore tend to be overfitted, while the smaller singular values are shrunk too much.\\
Compared to the iClusterPlus method, only the option of binary and quantitative data sets is included in our GSCA model, and at the moment no sparsity can be imposed in the integrative analysis of binary and quantitative data sets. However, the GSCA model with GDP penalty is optimized by a more efficient algorithm, is much more robust to the imbalanced nature of biological binary data, and provides a much better performance in the simulation experiments in this paper. Furthermore, the exploratory analysis of the GDSC coupled CNA and gene expression data sets provided important information on the binary CNA data that was not obtained by a separate analysis.\\
\section*{Supplementary material} \subsection*{GSCA model with exact low rank constraint} The exact low rank constraint on $\mathbf{Z}$ can be expressed by writing $\mathbf{Z}$ as the product of two low rank matrices $\mathbf{A}$ and $\mathbf{B}^{\text{T}}$. The optimization problem related to the GSCA model with exact low rank constraint can be expressed as \begin{equation} \begin{aligned}
\min_{\bm{\mu},\mathbf{Z},\sigma^2} \quad & f_1(\mathbf{\Theta}_1) + f_2(\mathbf{\Theta}_2,\sigma^2) \\
\text{s.t.~} \mathbf{\Theta} &= \mathbf{1}\bm{\mu}^{\text{T}} + \mathbf{Z} \\
\mathbf{\Theta} &= [\mathbf{\Theta}_1 ~ \mathbf{\Theta}_2] \\
\text{rank}(\mathbf{Z}) &= R \end{aligned} \end{equation}
The algorithm developed in the paper can be slightly modified to fit this model. As in the paper, the above optimization problem can be majorized to the following problem. \begin{equation} \begin{aligned}
\min_{\bm{\mu},\mathbf{Z}} \quad & \frac{L}{2} ||\mathbf{\Theta} - \mathbf{H}^k||_F^2 + c\\
\text{s.t.~} \mathbf{\Theta} &= \mathbf{1}\bm{\mu}^{\text{T}} + \mathbf{Z} \\
\mathbf{H}^k &= \mathbf{\Theta}^k - \frac{1}{L} (\mathbf{Q} \odot \nabla f(\mathbf{\Theta}^k))\\
\mathbf{1}^{\text{T}}\mathbf{Z} &= 0 \\
\text{rank}(\mathbf{Z}) &= R. \end{aligned} \end{equation}
The analytical solution for $\bm{\mu}$ is again the column mean of $\mathbf{H}^k$. After deflating out the offset term $\bm{\mu}$, the majorized problem becomes $\min_{\mathbf{Z}} \frac{L}{2} ||\mathbf{Z} - \mathbf{J} \mathbf{H}^k||_F^2 \quad \text{s.t.} \quad \text{rank}(\mathbf{Z}) = R$, $\mathbf{1}^{\text{T}}\mathbf{Z} = 0$. The global optimal solution is the $R$-truncated SVD of $\mathbf{J} \mathbf{H}^k$. The other steps in the algorithm to fit the GSCA model with exact low rank constraint are exactly the same as in the algorithm developed in the paper to fit the GSCA model with concave penalties.\\
\subsection*{Figures and tables}
\begin{table}[htbp] \centering \caption*{\label{tabS1} Table S1: Comparison of the average computational time (in seconds) of the GSCA models with different penalties and the corresponding 7-fold CV procedure. The binary CNA and quantitative gene expression data sets are used as an example. ``fit'': a three component GSCA model; ``CV'': 7-fold CV procedure. All the models are fitted 5 times, and the average computational time is recorded.}
\begin{tabular}{|l|l|l|l|}
\hline
& fit: $\epsilon_f=10^{-5}$ & CV: $\epsilon_f=10^{-5}$ & fit: $\epsilon_f=10^{-8}$\\
\hline
$L_{1}$ & 9.68 & 18.33 & 57.48\\
$L_{0.1}$ & 11.28 & 25.06 & 67.47\\
SCAD($\gamma=5$) & 9.96 & 18.44 & 57.58 \\
GDP($\gamma=1$) & 11.90 & 27.18 & 69.66 \\
\hline \end{tabular} \end{table}
\begin{figure}\label{Fig:S1}
\end{figure}
\begin{figure}\label{Fig:S2}
\end{figure}
\begin{figure}\label{Fig:S3}
\end{figure}
\begin{figure}\label{Fig:S4}
\end{figure}
\begin{figure}\label{Fig:S5}
\end{figure}
\begin{figure}\label{Fig:S6}
\end{figure}
\begin{figure}\label{Fig:S7}
\end{figure}
\begin{figure}\label{Fig:S8}
\end{figure}
\begin{figure}\label{Fig:S9}
\end{figure}
\begin{figure}\label{Fig:S10}
\end{figure}
\begin{figure}\label{Fig:S11}
\end{figure}
\begin{figure}\label{Fig:S12}
\end{figure}
\end{document} | arXiv |
At the national curling championships, there are three teams of four players each. After the championships are over, the very courteous participants each shake hands three times with every member of the opposing teams, and once with each member of their own team.
How many handshakes are there in total?
For each participant, there are 8 opponents to shake hands with, and 3 team members to shake hands with, giving $3\times8+3=27$ handshakes for each individual participant.
There are 12 players in total, which gives $12\times27=324$ handshakes, but since each handshake takes place between two people, we have counted every handshake twice.
The final answer is $\dfrac{324}{2}=\boxed{162}$ handshakes. | Math Dataset |
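As a quick sanity check, here is a brute-force count in Python (not part of the original solution):

    from itertools import combinations

    players = [(team, i) for team in range(3) for i in range(4)]
    total = sum(3 if a[0] != b[0] else 1
                for a, b in combinations(players, 2))
    assert total == 162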
\begin{document}
\author{Amartya Goswami}
\address{Department of Mathematics and Applied Mathematics,\\University of Limpopo, Sovenga 0727, South Africa}
\email{[email protected]}
\title{Salamander lemma for non-abelian group-like structures}
\begin{abstract} It is well known that the classical diagram lemmas of homological algebra for abelian groups can be generalized to non-abelian group-like structures, such as groups, rings, algebras, loops, etc. In this paper we establish such a generalization of the ``salamander lemma'' due to G.~M.~Bergman, in a self-dual axiomatic context (developed originally by Z.~Janelidze), which applies to all usual non-abelian group-like structures and also covers axiomatic contexts such as semi-abelian categories in the sense of G.~Janelidze, L.~M\'arki and W.~Tholen and exact categories in the sense of M.~Grandis. \end{abstract}
\makeatletter \@namedef{subjclassname@2020}{ \textup{2020} Mathematics Subject Classification} \makeatother
\keywords{diagram lemma; exact sequence; duality for groups; salamander lemma.}
\subjclass[2010]{18G50, 20J05, 08A30}
\maketitle
\msection{Introduction}\label{sec-Introduction}
The salamander lemma is a diagram lemma on a double complex formulated for abelian categories in \cite{B12}, where it has been shown that the other diagram lemmas of homological algebra (specifically, the $3\times 3$ lemma, the four lemma, the snake lemma, the Goursat theorem, and the lemma on the long exact sequence of homology associated with a short exact sequence of complexes) can be recovered from the salamander lemma. In this paper we formulate and prove a non-abelian version of the salamander lemma, in a self-dual axiomatic context presented in \cite{GJ17}, which includes all semi-abelian categories \cite{JMT02} and Grandis exact categories \cite{G84,G92,G12}. These categories are two separate generalizations of abelian categories, which in turn include many important non-abelian categories of group-like structures. Hence the context in which we prove the non-abelian salamander lemma can be applied in particular to groups, rings, loops, Lie algebras, modules and vector spaces, as well as to projective spaces, graded abelian groups, and many others. In this paper, the proof of the salamander lemma reduces to a proposition on exactness of a sequence of subquotients (see Proposition~\ref{ext1} below).
Two corollaries (namely, Corollary 2.1 and Corollary 2.2) of the salamander lemma in \cite{B12} have been used to recover the above-mentioned diagram lemmas of homological algebra. We give a reformulation of these two corollaries in the present context, and as an example we apply them to prove the $3\times 3$ lemma.
\msection{The context}
Almost seventy years after S.~Mac~Lane had the idea (see \cite{M50}) to revisit the basic homomorphism theorems of groups in a self-dual axiomatic framework, a convincing framework achieving this goal was described in \cite{GJ17}.
The case of abelian groups led to the notion of an \emph{abelian category} (refined by Buchsbaum in \cite{B55}), which completely addressed the problem in the abelian case. After the work of Grothendieck \cite{G57}, this became the central context for homological algebra. Duality allows one to get two dual results out of one. For non-abelian groups, some developments (see \cite{W66,W71}) have been made, but without major success with respect to duality. Instead, a non-dual category-theoretic treatment of groups and group-like structures flourished, which culminated with the introduction of the notion of a \emph{semi-abelian category} in \cite{JMT02}. The context of semi-abelian categories allows a unified treatment (see e.g.~\cite{BB04}) of all standard homomorphism theorems (i.e.~isomorphism theorems and diagram lemmas of homological algebra).
The paper \cite{GJ17} shows that the difficulties in expressing duality phenomenon for group-like structures can be overcome by using functorial duality in the place of categorical duality. This idea originated in the work \cite{J14} (see also \cite{JW14, JW16, JW17}) on the comparison between semi-abelian categories with those appearing in the work of Grandis in homological algebra \cite{G84,G92,G12,G13}.
The ``self-dual theory'' of \cite{GJ17} is based on five self-dual axioms which we recall in the next section after introducing the necessary language of this set up. Among the consequences of these axioms, we mention only those (Lemma \ref{A}--Lemma \ref{rml}) that are required in proving the salamander lemma in this context; for details about these and other consequences, we refer our readers to \cite{GJ17}.
In this section we briefly recall the axiomatic context introduced in \cite{GJ17}. This context consists of abstract objects, called ``groups'', which in concrete cases could be groups, rings, modules, or some other group-like structures. The abstract objects form a category whose maps are called ``morphisms'' (in the case of a particular type of group-like structures these are the usual morphisms of those structures). For each group, there is a specified bounded lattice of ``subgroups'', whose partial order is written as ``$\subseteq$'' (again, in the general context these lattices are given abstractly, and in concrete contexts they are the usual substructure lattices). To each morphism $f\colon G\to H$ there is an associated Galois connection between subgroup lattices \newcommand\mapsfrom{\mathrel{\reflectbox{\ensuremath{\mapsto}}}} \begin{align*} \mathtt{Sub}\;\!G&\to \mathtt{Sub}\,\!H,\\ S&\mapsto f S,\\ f^{-1} T&\mapsfrom T, \end{align*} which for concrete group-like structures is the Galois connection between substructure lattices given by the direct and inverse images of substructures along the morphism $f$ (In the general case, we use the same terminology and call $f S$ the direct image of $S$ under $f$ and $f^{-1} T,$ the inverse image of $T$ under $f$). This data is subject to axioms recalled below. The axioms are invariant under duality, which extends the usual categorical duality and is summarised by the following table (it is in fact an instance of a ``functorial duality'' as explained in \cite{GJ17}): $$\xymatrix@R=2pt{ \textrm{Expression} & \textrm{Dual Expression} \\ \textrm{$G$ is a group} & \textrm{$G$ is a group} \\ \textrm{$S\in\mathtt{Sub}\;\!G$} & \textrm{$S\in\mathtt{Sub}\;\!G$}\\ \textrm{$S\subseteq T$ in $\mathtt{Sub}\;\!G$} & \textrm{$T\subseteq S$ in $\mathtt{Sub}\;\!G$} \\ f\colon G\to H & f\colon H\to G \\ f g & g f \\ f S & f^{-1} S\\ f^{-1} T & f T. }$$ In this context, for a group $G$ by $1$ we denote the bottom element of $\mathtt{Sub}\,G$ (and we call it the \emph{smallest subgroup} of $G$), and by $G$ we denote its top element (calling it the \emph{largest subgroup} of $G$). The \emph{image} of a group morphism $f\colon G\to H$ is defined as $\mathtt{Im}\,\!f=f G$. The dual notion is that of a \emph{kernel} of a group morphism, $\mathtt{Ker}\,\!f=f^{-1} 1$. When $G= \mathtt{Ker}\,\! f,$ we call $f$ a \emph{zero morphism} and denote it by $0.$ The \emph{identity morphism} $1_G\colon G\to G$ for a group $G,$ is the morphism such that $1_Gf=f$ and $g1_G=g$ for arbitrary morphisms $f\colon F\to G$ and $g:G\to H.$ An \emph{isomorphism} is a morphism $f\colon X\to Y$ such that $fg=1_Y$ and $gf=1_X$ for some morphism $g\colon Y\to X.$ A \emph{normal subgroup} of a group $G$ is its subgroup $S$ which is the kernel of some group morphism $f\colon G\to H$ and dually, a \emph{conormal subgroup} $S$ of a group $G$ is a subgroup of $G$ which appears as the image of some group morphism $f\colon F\rightarrow G.$ In standard examples, all subgroups are conormal. In the general theory, however, we do not want to require this since its dual would force all subgroups to be normal. The axioms of our ``self-dual theory'' are as follows: \begin{axiom}\label{ax1} Assigning to each group morphism $f\colon G\to H$ the Galois connection $$\xymatrix{ \mathtt{Sub}\;\!G\ar@<2pt>[r] & \mathtt{Sub}\,\!H\ar@<2pt>[l]}$$ given by direct and inverse image maps under $f,$ defines a functor from the category of groups to the category of posets and Galois connections. 
\end{axiom} \begin{axiom}\label{ax2} For any group morphism $f\colon G\to H$ and subgroups $A$ of $G$ and $B$ of $H$ we have $ff^{-1} B=B\wedge \mathtt{Im}\,\!f$ and $f^{-1}f A=A\vee \mathtt{Ker}\,\!f.$ \end{axiom} \begin{axiom} \label{ax3} Each conormal subgroup $S$ of a group $G$ admits an embedding $\iota_S\colon S/1 \to G$ such that $\mathtt{Im}\iota_S\subseteq S$ and for arbitrary group morphism $f\colon U\to G$ such that $\mathtt{Im}f\subseteq S,$ we have $f=\iota_Su$ for a unique homomorphism $u\colon U\to S/1$. Dually, each normal subgroup $S$ of a group $G$ admits a projection $\pi_S\colon G\to G/S$ such that $S\subseteq\mathtt{Ker}\pi_S$ and for an arbitrary group homomorphism $g\colon G\to V$ such that $S\subseteq\mathtt{Ker}g,$ we have $g=v\pi_S$ for a unique homomorphism $v\colon G/S\to V.$ \end{axiom}
In classical group theory, Axiom~\ref{ax3} tells us that $\iota_{S}$ is the embedding of the group $S$ into the group $G,$ and $\pi_{S}$ is the quotient map from $G$ to the quotient of $G$ by the normal subgroup generated by $S.$
Back in the general context, a subgroup $B$ of a group $G$ is said to be \emph{normal to} a subgroup $A$ of $G$ when (i) $B\subseteq A,$ (ii) $A$ is a conormal subgroup of $G,$ and (iii) $\iota_{A}^{-1} B$ is a normal subgroup of the domain of $\iota_{A}^{-1}$. When a subgroup $B$ is normal to a conormal subgroup $A,$ we denote the codomain of $\pi_{\iota_{A}^{-1}B}$ as $A/B.$ We also write $B\triangleleft A$ when this relation holds.
\begin{axiom} \label{ax4} Any group morphism $f\colon G\to H$ factorizes as $f=\iota_{\mathtt{Im}\,\!f}h\pi_{\mathtt{Ker}\,\!f}$ where $h$ is an isomorphism. \end{axiom} \begin{axiom} \label{ax5} The join of any two normal subgroups of a group is normal and the meet of any two conormal subgroups is conormal. \end{axiom} Recall from \cite{GJ17} that among the consequences of the axioms above are the following lemmas.
\begin{lemma} \label{A} The direct image map will always preserve joins of subgroups and the inverse image map will always preserve meets of subgroups. \end{lemma} \begin{lemma}\label{A1} Any embedding is a monomorphism, i.e.~if $mu=mu'$ then $u=u',$ for any embedding $m\colon M\to G$ and any pair of parallel homomorphisms $u$ and $u'$ with codomain $M.$ \end{lemma}
\begin{lemma} \label{B} The embedding of an image has trivial kernel and dually, the image of a projection is the largest subgroup of its codomain. \end{lemma} \begin{lemma}\label{B1}
A morphism is both an embedding and a projection if and only if it is an isomorphism. \end{lemma} \begin{lemma} \label{C} Whenever $A\vee B\subseteq S,$ where $S$ is conormal in some group $G$, we have: $\iota_{S}^{-1}(A\vee B)=\iota_{S}^{-1} A\vee \iota^{-1}_{S} B.$ \end{lemma} \begin{lemma}\label{B2}
Normal subgroups are stable under direct images along projections and conormal subgroups are stable under inverse images along embeddings. \end{lemma} \begin{lemma}[Restricted Modular Law]\label{rml} For any three subgroups $X,$ $Y,$ and $Z$ of a group $G,$ if either $Y$ is normal and $Z$ is conormal, or $Y$ is conormal and $X$ is normal, then we have: $X\subseteq Z \; \Rightarrow \; X\vee(Y\wedge Z)= (X\vee Y)\wedge Z.$ \end{lemma} \msection{Exact sequences of subquotients} \begin{proposition} \label{pro1} Let $f\colon G\to H$ be a group morphism and let $Y\vartriangleleft X$ be subgroups of $G.$ Let $\smash{V\vartriangleleft U}$ be subgroups of $H$. If $f Y\subseteq V$ and $fX\subseteq U$ then there is a morphism $f'\colon X/Y \to U/V$ such that for any subgroup $S$ of $X/Y,$ we have \begin{equation} \label{conn1} f' S= \pi_{\iota^{-1}_U V} \iota^{-1}_Uf\iota_X\pi^{-1}_{\iota^{-1}_X Y} S. \end{equation} \end{proposition} \begin{remark} The right hand side of the identity (\ref{conn1}) represents the result of ``chasing'' a subgroup $S$ of $X/Y$ along the zigzag of solid morphisms in the following diagram: \begin{equation} \label{ladder} \vcenter{ \xymatrix@=2pc{G\ar[r]^{f} & H\\ X/1\ar[u]^{\iota_X}\ar@{..>}[r]_{f''}\ar[d]_{{\pi_{\iota^{-1}_X Y}}} & U/1\ar[u]_{\iota_U}\ar[d]^{\pi_{\iota^{-1}_U V}}\\ X/Y\ar@{..>}[r]_{f'} & U/V. }} \end{equation} \end{remark}
\begin{proof} Since $f X \subseteq U,$ by the universal property of $\iota_U,$ there exists a unique morphism $f''\colon X/1 \to U/1$ such that the top square of diagram (\ref{ladder}) commutes. Since for any subgroup $S$ of $X/1,$ $\iota^{-1}_U\iota_U f'' S=f'' S\vee \mathtt{Ker}\,\!\iota_U=f'' S$ (where the triviality of $\mathtt{Ker}\,\!\iota_U$ follows from Lemma \ref{B}), we have $\iota^{-1}_Uf\iota_X S=f''S.$ From $Y \subseteq f^{-1} V$ we obtain $$\iota^{-1}_X Y \subseteq \iota^{-1}_{X} f^{-1}V= f''^{-1}\iota^{-1}_{U}V \subseteq f''^{-1} \mathtt{Ker}\,\!\pi_{\iota^{-1}_U V} = \mathtt{Ker}\,\!\pi_{\iota^{-1}_U V} f'',$$ and by the universal property of $\pi_{\iota^{-1}_X Y},$ there exists a unique morphism $f'\colon X/Y \to U/V$ such that the bottom square of (\ref{ladder}) commutes. Finally, for any subgroup $S$ of $X/Y,$ we have $$f' S=f'\pi_{\iota^{-1}_X Y}\pi^{-1}_{\iota^{-1}_X Y} S=\pi_{\iota^{-1}_U V} f''\pi^{-1}_{\iota^{-1}_X Y }S=\pi_{\iota^{-1}_U V} \iota^{-1}_Uf\iota_X\pi^{-1}_{\iota^{-1}_X Y} S,$$ where the first equality follows from Lemma \ref{B} and Axiom \ref{ax2}. \end{proof}
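\begin{remark} To connect with the classical case: in the category of abelian groups, the morphism $f'$ of Proposition \ref{pro1} is the usual induced homomorphism on subquotients, $f'(x+Y)=f(x)+V$ for $x\in X$, which is well defined precisely because $f Y\subseteq V$ and $f X\subseteq U$. \end{remark}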
By taking $f$ in Proposition \ref{pro1} to be an identity morphism, we obtain:
\begin{corollary} \label{cor1} Let $G$ be a group and let $X,$ $Y,$ $U,$ $V$ be subgroups of $G$ such that $Y\vartriangleleft X$ and $V\vartriangleleft U.$ If $Y \subseteq V$ and $X \subseteq U$ then there exists a morphism $f'\colon X/Y \to U/V$ such that for any subgroup $S$ of $X/Y,$ we have $f' S= \pi_{\iota^{-1}_U V} \iota^{-1}_U\iota_X\pi^{-1}_{\iota^{-1}_X Y} S.$ \end{corollary} \begin{definition} A sequence $G \xrightarrow {f} H\xrightarrow{g} I$ of group morphisms is called \emph{exact at $H$} if $\mathtt{Im}\,\!f=\mathtt{Ker}\,\!g.$ \end{definition} \begin{proposition}\label{ext1} Let $G \xrightarrow {f} H\xrightarrow{g} I$ be group morphisms. Let $V\vartriangleleft U$ be subgroups of $G,$ let $X\vartriangleleft W$ be subgroups of $H,$ and let $Z\vartriangleleft Y$ be subgroups of $I.$ Suppose $f V\subseteq X,$ $fU \subseteq W,$ $ g X\subseteq Z$ and $ gW \subseteq Y.$ Then, there is a sequence \begin{equation} \label{seq} U/V\to W/X\to Y/Z \end{equation} of morphisms which is exact at $W/X$ if and only if \begin{equation}\label{ext} f U \vee X = g^{-1} Z\wedge W. \end{equation} \end{proposition} \begin{proof} Let us consider the following diagram: \begin{equation*}\label{ladder2} \vcenter{ \xymatrix@=2pc{G\ar[r]^{f} & H\ar[r]^{g} & I\\ U/1\ar[u]^{\iota_U}\ar@{..>}[r]\ar[d]_{{\pi_{\iota^{-1}_U V}}} & W/1\ar@{..>}[r]\ar[u]_{\iota_W}\ar[d]^{\pi_{\iota^{-1}_W X}}& Y/1\ar[u]_{\iota_{Y}}\ar[d]^{\pi_{\iota^{-1}_Y Z}}\\ U/V\ar@{..>}[r] & W/X\ar@{..>}[r]& Y/Z. }} \end{equation*} The existence of the group morphisms of the sequence (\ref{seq}) follows from Proposition \ref{pro1}.
Assume that the identity (\ref{ext}) holds; we prove the exactness at $W/X$, for which it is sufficient to show that the image of $U/V\to W/X$ is equal to the kernel of $W/X\to Y/Z.$ We observe that the image of $U/V\to W/X$ is the image of the largest subgroup of $U/V,$ which by Proposition \ref{pro1} is the same as chasing the largest subgroup of $U/V$ along the zigzag of solid arrows in the diagram above up to $W/X.$ By doing the chasing, we obtain \begin{align*}
\pi_{\iota^{-1}_W X} \iota^{-1}_Wf \iota_U \pi_{\iota^{-1}_UV}U/V &= \pi_{\iota^{-1}_W X} \iota^{-1}_Wf \iota_U U\\
&=\pi_{\iota^{-1}_W X} \iota^{-1}_Wf U\\
&= \pi_{\iota^{-1}_W X} \iota^{-1}_Wf U \vee \pi_{\iota^{-1}_W X} \iota^{-1}_W X \qquad\qquad [\mathtt{Ker}\,\!\pi_{\iota^{-1}_W X} \supseteq \iota^{-1}_W X]\\
&=\pi_{\iota^{-1}_W X} (\iota^{-1}_W f U \vee \iota^{-1}_W X) \qquad\qquad\qquad\qquad\;\; [\mathrm{Lemma}\;\ref{A}]\\
&=\pi_{\iota^{-1}_W X} \iota^{-1}_W (f U \vee X) \qquad\qquad\qquad\qquad\;\quad\;\;\, [\mathrm{Lemma}\;\ref{C}] \end{align*} Similarly, the kernel of $W/X\to Y/Z$ is the inverse image of the smallest subgroup of $Y/Z,$ which by Proposition \ref{pro1} is same as chasing the smallest subgroup of $Y/Z$ along the zigzag of the solid arrows in the diagram above up to $W/X.$ By doing so, we get \begin{align*} \pi_{\iota^{-1}_WX}\iota^{-1}_Wg^{-1}\iota_Y\pi^{-1}_{\iota^{-1}_YZ} 1 &= \pi_{\iota^{-1}_WX}\iota^{-1}_Wg^{-1}\iota_YZ\\ &= \pi_{\iota^{-1}_WX}\iota^{-1}_Wg^{-1}Z\\ &=\pi_{\iota^{-1}_WX}\iota^{-1}_W(g^{-1}Z\wedge W), \end{align*} where the reason of the last equality is as follows: $$ \iota^{-1}_Wg^{-1}Z=\iota^{-1}_Wg^{-1}Z \vee 1= \iota^{-1}_W\iota_W(\iota^{-1}_Wg^{-1}Z)= \iota^{-1}_W(\iota_W\iota^{-1}_Wg^{-1}Z)=\iota^{-1}_W(g^{-1}Z\wedge W).$$ The identity (\ref{ext}), and the two outcomes of the above chasing give the desired exactness at $W/X.$
Conversely, assume that (\ref{seq}) is exact at $W/X$; in particular, we then have $\pi_{\iota^{-1}_W X} \iota^{-1}_Wf U = \pi_{\iota^{-1}_WX} \iota^{-1}_W (g^{-1} Z\wedge W).$ Applying $\iota_W\pi^{-1}_{\iota^{-1}_W X}$ to the left and right hand sides of the last identity, we obtain, respectively, \begin{align*} \iota_W\pi^{-1}_{\iota^{-1}_W X}\pi_{\iota^{-1}_W X} (\iota^{-1}_Wf U) &= \iota_W(\iota_W^{-1}f U \vee \iota^{-1}_W X)\qquad\qquad\;\quad [\textrm{ Axiom}\;\ref{ax2}]\\ &=\iota_W\iota^{-1}_W(f U\vee X)\qquad\qquad\quad\;\quad\;[\mathrm{ Lemma}\;\ref{C}]\\ &= (f U\vee X)\wedge W\qquad\qquad\qquad\quad\;\; [\textrm{Axiom}\;\ref{ax2}]\\ &= f U\vee X,\qquad\qquad\;\quad\; [\mathrm{hypothesis,}\; fU\subseteq W]\\ \noalign{\hbox{and}} \iota_W\pi^{-1}_{\iota^{-1}_W X}\pi_{\iota^{-1}_W X} (\iota^{-1}_W(g^{-1}Z\wedge W)) &= \iota_W(\iota_W^{-1}(g^{-1} Z\wedge W) \vee \iota^{-1}_W X)\quad\;\; [\textrm{Axiom}\;\ref{ax2}]\\ &=\iota_W\iota^{-1}_W(g^{-1} Z\wedge W)\quad\qquad [X\subseteq W, gX\subseteq Z,\\&\qquad\qquad\qquad \qquad \qquad\qquad \;\;\;\; \mathrm{and \; Lemma}\;\ref{C}]\\ \\ &= (g^{-1} Z\wedge W)\wedge W\qquad\quad\qquad\quad
[\textrm{ Axiom}\;\ref{ax2}]\\ &= g^{-1} Z\wedge W. \end{align*} Hence $f U \vee X = g^{-1} Z\wedge W$, which is precisely the identity (\ref{ext}). \end{proof}
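\begin{remark} In the category of abelian groups, condition (\ref{ext}) reads $f(U)+X = g^{-1}(Z)\cap W$: the image of $U/V\to W/X$ is $(f(U)+X)/X$ and the kernel of $W/X\to Y/Z$ is $(g^{-1}(Z)\cap W)/X$, so this is the classical criterion for exactness of the induced sequence of subquotients. \end{remark}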
\msection{Double complexes and salamander lemma} \begin{definition} A \emph{double complex} is a triple $(X, \delta_h, \delta_v),$ where for all integers $m$ and $n,$ $X=(X^{n,m})$ is a family of groups, $\delta_h=(\delta_h^{n,m}\colon X^{n,m}\to X^{n,m+1}),$ and $\delta_v=(\delta_v^{n,m}\colon X^{n,m}\to X^{n+1,m})$ are families of group morphisms such that $\delta_h^{n,m}\delta_h^{n,m-1}=0,\;$ $\delta_v^{n,m}\delta_v^{n-1,m}=0,\;$and $\;\delta_v^{n,m+1}\delta_h^{n,m}=\delta_h^{n+1,m}\delta_v^{n,m}.$ \end{definition} Let us consider a double complex as shown in the diagram \begin{equation}\label{sl} \vcenter{ \xymatrix@C=20pt@R=16pt{ &&\ar@{.}[d] &\ar@{.}[d]\\&& \bullet\ar[d]^{m} & \ar[d]\\ \ar@{.}[r] &\bullet\ar[r]^{a}\ar[dr]^{p} & C \ar[dr]^{r} \ar[r]\ar[d]^{c} & \bullet\ar[d]^{v}\ar[r] & \ar@{.}[r] &\\ \ar@{.}[r] & \bullet \ar[r]_{d} & A \ar[r]^{e}\ar[d]_{f} \ar[dr]^{\!q} & B\ar[r]^{s} \ar[d]^{g}& \bullet \ar@{.}[r] &\\ \ar@{.}[r] & \ar[r] &\bullet\ar[r]\ar[d] & D\ar[r]^{t}\ar[d]^{u} & \bullet\ar@{.}[r] &\\ & & \ar@{.}[d] & \bullet\ar@{.}[d] &\\ &&&& }} \end{equation} where $p=ca,$ $r=ec,$ and $q=ge.$ \begin{definition} \label{ho1} Following Bergman \cite{B12}, in a double complex (\ref{sl}) we define the following \emph{homology objects} associated with the group $A$: \begin{itemize} \item [$\bullet$] $\displaystyle A_{\mathtt{h}} = \mathtt{Ker}\,\! e/\mathtt{Im}\,\! d,$ whenever $ \mathtt{Im}\,\! d \vartriangleleft \mathtt{Ker}\,\! e;$ \item [$\bullet$] $\displaystyle A_{\Box} = \mathtt{Ker}\,\! q/(\mathtt{Im}\,\! c \vee \mathtt{Im}\,\!d),$ whenever $(\mathtt{Im}\,\!c \vee \mathtt{Im}\,\!d) \vartriangleleft \mathtt{Ker}\,\!q;$ \item [$\bullet$] $\displaystyle ^\Box\!\!A = (\mathtt{Ker}\,\! e \wedge \mathtt{Ker}\,\!f)/\mathtt{Im}\,\! p,$ whenever $\mathtt{Im}\,\!p \vartriangleleft (\mathtt{Ker}\,\!e \wedge \mathtt{Ker}\,\!f).$ \end{itemize}
When we say that one of the above three homology objects \emph{is defined}, we mean that the corresponding normality condition holds. \end{definition} \begin{theorem}[Salamander Lemma] \label{slt} In a double complex \textup{(}\ref{sl}\textup{)}, if the homology objects $C_{\Box},$ $A_{\mathtt{h}},$ $A_{\Box},$ $^\Box\! B,$ $B_{\mathtt{h}},$ and $^\Box\!D$ are defined, and $\mathtt{Im}\,\!c$ is a normal subgroup of $A,$ then there is an exact sequence \begin{equation} \label{sallem} C_{\Box}\to A_{\mathtt{h}} \to A_{\Box} \to ^\Box\!\!\!B\to B_{\mathtt{h}}\to ^\Box\!\!\!D. \end{equation} \end{theorem} \begin{proof} For proving the existence of all the morphisms of the sequence (\ref{sallem}), we check the hypotheses of Proposition \ref{pro1} or Corollary \ref{cor1}, whichever is applicable. \begin{itemize} \item [$\bullet$] $C_{\Box}\to A_{\mathtt{h}}$: $ c\mathtt{Ker}\,\!r=c\mathtt{Ker}\,\!ec =cc^{-1}\mathtt{Ker}\,\!e=\mathtt{Ker}\,\!e\wedge \mathtt{Im}\,\!c\subseteq \mathtt{Ker}\,\!e,$ and using Lemma \ref{A}, we get $c(\mathtt{Im}\,\!a\vee \mathtt{Im}\,\!m)=c\mathtt{Im}\,\!a \vee c\mathtt{Im}\,\!m=\mathtt{Im}\,\!ca \vee 1 = \mathtt{Im}\,\!p \subseteq \mathtt{Im}\,\!d.$ \item [$\bullet$] $A_{\mathtt{h}}\to A_{\Box}$: $\mathtt{Ker}\,\!e\subseteq \mathtt{Ker}\,\!q$ and $\mathtt{Im}\,\!d \subseteq \mathtt{Im}\,\!c\vee \mathtt{Im}\,\!d.$ \item [$\bullet$] $ A_{\Box}\to ^\Box\!\!\!B$: $ e\mathtt{Ker}\,\!q=e\mathtt{Ker}\,\!ge =ee^{-1}\mathtt{Ker}\,\!g=\mathtt{Im}\,\!e\wedge\mathtt{Ker}\,\!g \subseteq \mathtt{Ker}\,\!s \wedge \mathtt{Ker}\,\!g,$ and again using Lemma \ref{A}, we get $ e(\mathtt{Im}\,\!c\vee \mathtt{Im}\,\!d)=e\mathtt{Im}\,\!c \vee e\mathtt{Im}\,\!d=\mathtt{Im}\,\!r \vee 1 = \mathtt{Im}\,\!r.$ \item [$\bullet$] $^\Box\!B\to B_{\mathtt{h}}$: $\mathtt{Ker}\,\!s \wedge \mathtt{Ker}\,\!g \subseteq \mathtt{Ker}\,\!s$ and $\mathtt{Im}\,\!r\subseteq \mathtt{Im}\,\!e.$ \item [$\bullet$] $ B_{\mathtt{h}} \to ^\Box\!\!\!D$: $ g\mathtt{Ker}\,\!s \subseteq \mathtt{Ker}\,\!u,$ $g\mathtt{Ker}\,\!s \subseteq g\mathtt{Ker}\,\!tg=gg^{-1}\mathtt{Ker}\,\!t =\mathtt{Im}\,\!g \wedge \mathtt{Ker}\,\!t \subseteq \mathtt{Ker}\,\!t,$ which implies $g\mathtt{Ker}\,\!s\subseteq \mathtt{Ker}\,\!u\wedge \mathtt{Ker}\,\!t.$ Also $ g\mathtt{Im}\,\!e=\mathtt{Im}\,\!q.$ \end{itemize} For the exactness, we apply Proposition \ref{ext1}, by checking the condition (\ref{ext}). \begin{itemize} \item Exactness of $C_{\Box} \to A_{\mathtt{h}}\to A_{\Box}$: For the left hand side of (\ref{ext}), we have $c\mathtt{Ker}\,\!r \vee \mathtt{Im}\,\!d = (\mathtt{Ker}\,\!e \wedge \mathtt{Im}\,\!c)\vee \mathtt{Im}\,\!d,$ whereas the right hand side of (\ref{ext}) is $(\mathtt{Im}\,\!c \vee \mathtt{Im}\,\!d)\wedge \mathtt{Ker}\,\!e= (\mathtt{Ker}\,\!e \wedge \mathtt{Im}\,\!c)\vee \mathtt{Im}\,\!d$ (by Lemma \ref{rml}). 
\item Exactness of $A_{\mathtt{h}} \to A_{\Box}\to ^\Box\!\!\!B$: We notice that the left hand side of (\ref{ext}) is $ \mathtt{Ker}\,\!e \vee (\mathtt{Im}\,\!c \vee \mathtt{Im}\,\!d) = \mathtt{Ker}\,\!e \vee \mathtt{Im}\,\!c, $ whereas the right hand side of (\ref{ext}) is $ e^{-1}\mathtt{Im}\,\!r \wedge \mathtt{Ker}\,\!q=e^{-1}e\mathtt{Im}\,\!c\wedge \mathtt{Ker}\,\!q= (\mathtt{Ker}\,\!e \vee \mathtt{Im}\,\!c)\wedge \mathtt{Ker}\,\!q = \mathtt{Ker}\,\!e \vee \mathtt{Im}\,\!c.$ \item Exactness of $A_{\Box}\to ^\Box\!\!\!B\to B_{\mathtt{h}}$: The left hand side of (\ref{ext}) is $e\mathtt{Ker}\,\!q\vee\mathtt{Im}\,\!r=ee^{-1}\mathtt{Ker}\,\!g\vee \mathtt{Im}\,\!r=(\mathtt{Im}\,\!e\wedge\mathtt{Ker}\,\!g)\vee\mathtt{Im}\,\!r=\mathtt{Im}\,\!e\wedge \mathtt{Ker}\,\!g,$ whereas the right hand side of (\ref{ext}) is $\mathtt{Im}\,\!e\wedge (\mathtt{Ker}\,\!s\wedge \mathtt{Ker}\,\!g)= \mathtt{Im}\,\!e\wedge \mathtt{Ker}\,\!g.$ \item Exactness of $^\Box\!B\to B_{\mathtt{h}}\to ^\Box\!\!\!D$: The left hand side of (\ref{ext}) is $(\mathtt{Ker}\,\!s\wedge \mathtt{Ker}\,\!g)\vee \mathtt{Im}\,\!e,$ while the right hand side of (\ref{ext}) is $g^{-1}(\mathtt{Im}\,\!q)\wedge \mathtt{Ker}\,\!s = g^{-1}g(\mathtt{Im}\,\!e)\wedge \mathtt{Ker}\,\!s=(\mathtt{Im}\,\!e\vee \mathtt{Ker}\,\!g)\wedge \mathtt{Ker}\,\!s=(\mathtt{Ker}\,\!s\wedge \mathtt{Ker}\,\!g)\vee \mathtt{Im}\,\!e$ (by Lemma \ref{rml}). \end{itemize} \end{proof} \begin{remark} For the proofs of the existence of the morphisms $C_{\Box} \to A_{\mathtt{h}}$ and $B_{\mathtt{h}} \to ^\Box\!\!\!D$ in (\ref{sallem}), we have constructed direct morphisms which are respectively the same as the composites $ C_{\Box}\to ^\Box\!\!\!\!A\to A_{\mathtt{h}}$ and $ B_{\mathtt{h}}\to B_{\Box}\to ^\Box\!\!\!\!\,D$ as have been done in \cite{B12}. \end{remark} \begin{remark}
Theorem \ref{slt} is the horizontal version of the salamander lemma. The formulation and proof of the vertical version are similar. \end{remark}
The following two corollaries are reformulations of Corollary 2.1 and Corollary 2.2 of \cite{B12}, which are used to prove diagram lemmas of homological algebra. We give a proof of Corollary \ref{corA} using Proposition \ref{ext1}; the proof of Corollary \ref{cor2} is similar. \begin{corollary}\label{corA} Let $A\to B$ be a horizontal \textup{(}vertical\textup{)} morphism of a double complex. Suppose $A_{\mathtt{h}}$ and $B_\mathtt{h}$ \textup{(}$A_{\mathtt{v}}$ and $B_\mathtt{v}$\textup{)} are defined and $A_{\mathtt{h}}=1,$ $B_\mathtt{h}=1$ \textup{(}$A_{\mathtt{v}}=1$ and $B_\mathtt{v}=1$\textup{)}. Whenever the homology objects $A_{\Box}$ and $^{\Box}\!B$ are defined, we have the isomorphism $A_{\Box} \cong\,\! ^{\Box}\!B.$ \end{corollary} \begin{proof}
Let $A\to B$ be the morphism $e$ of the double complex (\ref{sl}). Let $A_{\mathtt{h}}=1$ and $B_\mathtt{h}=1.$ The existence of the morphism $\phi\colon A_{\Box}\to\,\! ^{\Box}\!B$ has been proved in Theorem \ref{slt}. Now to show $\phi$ is an isomorphism, by Lemma \ref{B1}, it is sufficient to show that $\phi$ is both an embedding and a projection. From the double complex (\ref{sl}), we observe that $e\mathtt{Ker}q=e\mathtt{Ker}ge=ee^{-1}\mathtt{Ker}g=\mathtt{Im}e\wedge \mathtt{Ker}g=\mathtt{Ker}s\wedge \mathtt{Ker}g,$ where the last equality follows from the fact that $\mathtt{Im}e=\mathtt{Ker}s$ (i.e., $B_{\mathtt{h}}=1$).
This proves that $\phi$ is a projection. Again,
$e^{-1}\mathtt{Im}r=e^{-1}e\mathtt{Im}c=\mathtt{Ker}e\vee \mathtt{Im}c=\mathtt{Im}c\vee \mathtt{Im}d\;(\mathrm{as}\; \mathtt{Im}d=\mathtt{Ker}e)$
proves that $\phi$ is an embedding. \end{proof} \begin{corollary}\label{cor2}
In each of the following four portions of double complexes, if the dotted row or column \textup{(}the row or column through $B$ perpendicular to the arrow connecting it with $A)$ is exact at $B,$ and $A_{\mathtt{h}},$ $A_{\mathtt{v}},$ $^\Box \!\!A,$ $A_{\Box}$ are defined
\[
\xymatrix@C=.9pc@R=.9pc{
&\bullet\ar@{-}[d] & \bullet\ar@{-}[d] & & &1\ar[d] & 1\ar@{.>}[d] & & & \bullet\ar@{-}[d] & \bullet\ar@{-}[d] & & &\bullet\ar@{-}[d] & \bullet\ar@{-}[d] &\\
1\ar[r] &A\ar[r]\ar[d]&\bullet\ar[d]\ar@{-}[r]& & \bullet\ar[r]&A\ar[d]\ar[r] &B\ar@{.>}[d]\ar@{-}[r]& &\bullet\ar[r] &\bullet\ar[d] \ar@{.>}[r]&B\ar[d]\ar@{.>}[r]&1 &\bullet\ar[r] &\bullet\ar@{.>}[d]\ar[r]&\bullet\ar[d]\ar@{-}[r]&\\
1\ar@{.>}[r]&B\ar@{-}[d]\ar@{.>}[r]&\bullet\ar@{-}[d]\ar@{-}[r]& &\bullet\ar[r]&\bullet\ar@{-}[d]\ar[r]& \bullet\ar@{-}[d]\ar@{-}[r]& &\bullet\ar[r]& \bullet\ar@{-}[d]\ar[r]&A\ar@{-}[d]\ar[r]&1 & \bullet\ar[r]&B\ar@{.>}[d]\ar[r]&A\ar[d]\ar@{-}[r]& \\
& & & & & & & & & & & & & 1&1\\
&(a) & & & & (b) & & & & (c) & & & & (d)
}
\]
then we have the following pairs of isomorphisms associated with the above four diagrams respectively: $(a)\,^\Box\!\!A\cong A_{\mathtt{h}}, \, A_{\mathtt{v}}\cong A_{\Box}; (b)\,^\Box\!\!A \cong A_{\mathtt{v}},\,A_{\mathtt{h}}\cong A_{\Box};$\\ $ (c)\;A_{\mathtt{h}}\cong A_{\Box},\, ^\Box\!\!A\cong A_{\mathtt{v}};\;(d)\;\; A_{\mathtt{v}}\cong A_{\Box},\, ^\Box\!\!A \cong A_{\mathtt{h}}.$ \end{corollary} \begin{theorem}[$3\times 3$ Lemma]
In the commutative diagram below, if all columns, and all rows but the first, are exact, then the first row is also exact.
\[
\xymatrix@C=2pc@R=2pc{
&1\ar[d]^{y_1}&1\ar[d]^{y_5}&1\ar[d]^{y_9}&\\
1\ar[r]^{x_1}&A'\ar[r]^{x_2}\ar[d]^{y_2}\ar[dr]^{z_1}&B'\ar[r]^{x_3}\ar[d]^{y_6}\ar[dr]^{z_2}&C'\ar[r]^{x_4}\ar[d]^{\!y_{10}}\ar[dr]^{z_3}&1\\
1\ar[r]^{x_5}&A\ar[r]^{x_6}\ar[d]^{y_3}\ar[dr]^{z_4}&B\ar[r]^{x_7}\ar[d]^{y_7}\ar[dr]^{z_5}&C\ar[r]^{x_8}\ar[d]^{y_{11}}&1\\
1\ar[r]^{x_9}&A''\ar[r]^{x_{10}}\ar[d]^{y_4}\ar[dr]^{z_6}&B''\ar[r]^{x_{11}}\ar[d]^{y_8}&C''\ar[r]^{x_{12}} \ar[d]^{y_{12}}&1\\
&1 &1 &1
}
\] \end{theorem} \begin{proof} To show that the first row is a complex, we notice $ y_{10}x_3x_2=x_7x_6y_2=0,$ and since $y_{10}$ is an embedding, by Lemma \ref{A1} we have $x_3x_2=0.$
By the approach of \cite{B12}, to show the trivialities of $A'_{\mathtt{h}},$ $B'_{\mathtt{h}},$ and $C'_{\mathtt{h}},$ we need to consider the following homology objects: $$ A'_{\mathtt{h}},\, A'_{\Box},\, A'_{\mathtt{v}},\, B'_{\mathtt{h}},\, B'_{\Box},\, ^\Box\!B,\, A_{\Box},\, A_{\mathtt{v}},\, C'_{\mathtt{h}},\, C'_{\Box},\, ^\Box\!C,\, B_{\Box},\, ^\Box\!B'',\, A''_{\Box},\, A''_{\mathtt{v}}. $$ For them to be defined in self-dual context, we need to verify their respective normality conditions. We show the method of verification for $B'_{\Box},$ and the others can be checked similarly.
In order to show $(\mathtt{Im}x_2\vee \mathtt{Im}y_5) \vartriangleleft \mathtt{Ker}z_2,$ first we observe that $\mathtt{Im}x_2\vee \mathtt{Im}y_5=\mathtt{Im}x_2\vee 1=\mathtt{Im}x_2 \subseteq \mathtt{Ker}x_3\subseteq \mathtt{Ker}z_2.$ To show that $\mathtt{Ker}z_2$ is a conormal subgroup of $B'$, it is sufficient to show that its dual $\mathtt{Im}z_4$ is a normal subgroup of $B''.$ Now, $\mathtt{Im}z_4=y_7\mathtt{Im}x_6=y_7\mathtt{Ker}x_7$ which is a normal subgroup of $B''$ by Lemma \ref{B2}. Finally to show that $\iota^{-1}_{\mathtt{Ker}z_2}\mathtt{Im}x_2$ is a normal subgroup of $\mathtt{Ker}z_2/1,$ it is sufficient to show that $\mathtt{Ker}x_{11},$ a dual of $\mathtt{Im}x_2,$ is a conormal subgroup of $B''$ which indeed is true because of the fact that $\mathtt{Im}x_{10}=\mathtt{Ker}x_{11}.$
Applying Corollary \ref{cor1} and Corollary \ref{cor2}, the proofs of the trivialities of $A'_{\mathtt{h}},$ $B'_{\mathtt{h}},$ and $C'_{\mathtt{h}}$ are the same as in \cite{B12}, and here we recall them. \begin{align*} A'_{\mathtt{h}}&\cong A'_{\Box} \cong A'_{\mathtt{v}}=1.\\ B'_{\mathtt{h}}&\cong B'_{\Box}\cong\, ^\Box\!B \cong A_{\Box} \cong A_{\mathtt{v}}=1.\\ C'_{\mathtt{h}}&\cong C'_{\Box}\cong\, ^\Box\!C \cong B_{\Box}\cong\, ^\Box\!B'' \cong A''_{\Box} \cong A''_{\mathtt{v}}=1. \end{align*}
\end{proof}
\end{document}
New answers tagged orbital-mechanics
Are there any remaining spacecraft that can retrieve objects from Earth orbit?
ESA has Space Rider in development. First launch is planned for 2022 on a Vega-C. Planned payload is 800 kg.
orbital-mechanics orbit orbital-maneuver payload
answered 22 hours ago
Why put X-ray telescope Spektr-RG/eROSITA all the way out at Sun-Earth L2?
eRosita is on its way to an elliptical orbit around L2 (with L2 in the centre of the ellipse). (image source: https://www.slideshare.net/esaops/wilms, p. 19) According to Merloni et al. (2012, https://arxiv.org/abs/1209.3114), the semi-major axis is planned at about 1,000,000 km and the orbital period should be about 6 months. Taking a look into the ...
orbital-mechanics lagrangian-points halo-orbit spektr-rg x-ray-telescope
answered Jul 15 at 15:06
Why does the eccentricity vector equation always equal -1?
The expression on the right is meant to give the eccentricity vector but the vector notation has been lost. Here it is in this answer: $$ e = {v^2 r \over {\mu}} - {(r \cdot v ) v \over{\mu}} - {r\over{\left|r\right|}}$$ and the vector nature is not clear either. We should write it as $$ \mathbf{e} = {v^2 \mathbf{r} \over {\mu}} - {(\mathbf{r} \cdot \...
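The complete standard form is $\mathbf{e} = \frac{v^2\mathbf{r}}{\mu} - \frac{(\mathbf{r}\cdot\mathbf{v})\,\mathbf{v}}{\mu} - \frac{\mathbf{r}}{|\mathbf{r}|}$. A minimal NumPy sketch of the computation (the state vector below is an illustrative circular low Earth orbit, chosen so the result should come out near zero):

```python
import numpy as np

mu = 3.986004418e14          # Earth's GM, m^3/s^2

def eccentricity_vector(r, v):
    """e = (v^2 r - (r.v) v) / mu - r/|r|, the standard eccentricity vector."""
    return (np.dot(v, v) * r - np.dot(r, v) * v) / mu - r / np.linalg.norm(r)

# illustrative circular orbit at 7000 km radius
r = np.array([7.0e6, 0.0, 0.0])
v = np.array([0.0, np.sqrt(mu / 7.0e6), 0.0])
print(eccentricity_vector(r, v))   # ~ [0, 0, 0] for a circular orbit
```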
orbital-mechanics orbit
answered Jul 15 at 0:43
uhoh
What is the latest plan to ameliorate the largest drifts of orbital space debris?
No, there's no plan. Tragedy of the commons. ESA have trialed some solutions, with the RemoveDEBRIS mission. The Journal of the British Interplanetary Society has some papers on the Necropolis system
orbital-mechanics debris removedebris-mission
JCRM
Free Fall to Earth from Solar North Pole?
Wikipedia sez that for a body starting at rest at distance $r$, the time it takes to reach a distance $x$ is given by: $$t(x) = \frac{r^{3/2}}{\sqrt{2 GM}} \left( \arccos(\sqrt{b}) + \sqrt{b(1-b)} \right)$$ where $b=x/r$. The Sun's standard gravitational parameter GM is about 1.327E+20 m^3/s^2. The time to fall all of the way ($x=0$) is just $$t(x) = \...
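A quick numerical check of that formula (a sketch: the 1 au starting distance is an assumption matching the question, and at $x=0$ the bracket reduces to $\arccos 0 = \pi/2$):

```python
import numpy as np

GM = 1.327e20                # solar gravitational parameter, m^3/s^2
r  = 1.496e11                # starting distance: 1 au, in metres

def fall_time(x):
    """Time to fall from rest at distance r down to distance x (radial Kepler fall)."""
    b = x / r
    return r**1.5 / np.sqrt(2 * GM) * (np.arccos(np.sqrt(b)) + np.sqrt(b * (1 - b)))

print(fall_time(0.0) / 86400)    # all the way to the Sun: about 64.6 days
```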
LVLH to ECI Conversion
If you have a direct cosine matrix (also called a "rotation matrix") which converts from ECI to LVLH, then the transpose of that matrix will perform the opposite rotation: LVLH to ECI.
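A minimal NumPy sketch of that relationship (the LVLH axes below follow one common convention and are built from an assumed illustrative state vector):

```python
import numpy as np

# illustrative ECI position and velocity
r = np.array([7.0e6, 0.0, 0.0])
v = np.array([0.0, 7.5e3, 1.0e3])

# one common LVLH triad: z toward nadir, y opposite the orbit normal, x completes it
z = -r / np.linalg.norm(r)
h = np.cross(r, v)
y = -h / np.linalg.norm(h)
x = np.cross(y, z)

R_eci2lvlh = np.vstack([x, y, z])   # rows are the LVLH axes expressed in ECI
R_lvlh2eci = R_eci2lvlh.T           # the inverse of a rotation is its transpose

vec_eci  = np.array([1.0, 2.0, 3.0])
vec_lvlh = R_eci2lvlh @ vec_eci
print(np.allclose(R_lvlh2eci @ vec_lvlh, vec_eci))   # True
```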
orbital-mechanics coordinates satellite-constellation frames transform
ChrisR
To a pretty good approximation, the angle must be between 82.75° and 97.25° from the Sun's pole, and the distance doesn't matter. This is because if you're starting from rest relative to the sun, you will fall straight in relative to the Sun. To hit the Earth on your trajectory, the straight-line path from your starting location to the Sun must intersect ...
Michael Seifert
Could a spacecraft fly in a direct line from the outer solar system to Earth?
Trajectories are reversible. So if you had a spacecraft heading into the solar system at the same velocity that NH is currently heading out, it would follow the same trajectory but in reverse. If you time things correctly, it could narrowly miss Jupiter and be deflected into a solar orbit which could then hit the Earth and it would reenter at the same ...
orbital-mechanics new-horizons
Steve Linton
Could a spacecraft fly in a direct line from the outer solar system to Earth? Ignoring tiny deviations due to long-range attraction to the larger planets, sure! absolutely! no problem! you betcha! And you will need zero propulsion to do it. Sit back and enjoy the ride! At the right time of year you can position yourself at say 100 AU from the Sun in the ...
Is aerobraking used for orbit insertion for each planet with an atmosphere?
You are confusing aerobraking with aerocapture. Aerobraking is used to circularize an elliptical orbit into a circular one after orbit insertion, and it has been used a few times on the following missions: Hiten: this was a demonstration mission in Earth orbit Magellan: Around Venus Mars Global Surveyor Mars Odyssey Mars Reconnaissance Orbiter Venus ...
orbital-mechanics probe aerobraking
GdD
Theoretically: Yes If the net force on an object is zero, then Newton's 1st Law says that it will continue in a straight line. By continually adjusting the amount and direction of your thrust, so that the thrust cancels out the force of the Sun's gravity, you can do just that. Practically: No Doing the above takes ridiculous amounts of energy -- which in ...
DrSheldon
Does the Delta v to leave the entire solar system depend on its angle to the ecliptic?
It depends on whether you are leaving the solar system directly or doing a flyby, but probably. If you are directly leaving the solar system, then the closer you are to the Earth's inclination around the Sun, the more your velocity will count. If you go completely perpendicular to that, it will take quite a bit more fuel, as you have to do an inclination ...
orbital-mechanics delta-v ecliptic-plane
PearsonArtPhoto♦
Does Lightsail-2 take significant advantage of the Oberth effect?
Their solar sailing algorithm is described in the Planetary Society blog: Solar sailing: The spacecraft is attempting to raise its orbit using the solar sail. To do this, it must make two 90-degree turns each orbit. When flying towards the Sun, the sail orients itself edge-on, effectively turning off the thrust. When flying away from the Sun, the sail ...
orbital-mechanics solar-sail oberth-maneuver planetary-society
answered Jul 7 at 14:41
How to implement SGP4 C++, satellite propagator?
Grady Hillhouse was able to get it working on a Nucleo F401RE development board: link. I improved on that to get it on an ESP8266 wifi module, and added some extra stuff to calculate satellite overpasses and to see if the satellite is visible: project, library. This is focused on embedded devices, but it shouldn't be too hard to get it running in other ...
orbital-mechanics simulation sgp4
Hopperpop
Are the proposed Launch UK facilities in the wrong places?
If you are trying for geostationary orbit an equatorial launch site is better, but if you are stuck with launching from inside the UK and are not prepared to drop spent stages on voters then you can still do low altitude polar launches. As with so many things this is much more about politics than physics. So Israel launches ...
orbital-mechanics uk-space
answered Jun 30 at 9:55
GremlinWranger
How was the radius of Venus measured so accurately (± 3 km) via radar in the mid 1960's, before Venera 4 and Mariner 5?
In 1964, the Soviet scientist A.D. Kuzmin, together with the American scientist Barry Clark, began observing Venus using a movable radiointerferometer consisting of two 27-meter paraboloids (Owens Valley Radio Observatory, California). The radius of the hard sphere of Venus was measured: 6057 km (before that, astronomers measured only the radius of ...
orbital-mechanics venus solar-system mariner venera
A. Rumlin
This was indeed quite a challenge. The main advantage was having the Mariner probe there. Determining the effect of Venus on the orbit of Mariner, having the exact distance to the surface of Venus from the radar on board Mariner, and using the tracking data of Mariner itself allowed for a pretty good estimate. In particular the difference in distance between where ...
How could a satellite follow earth around the sun while staying outside of earth's orbit?
With engines! Orbiting at L1 is completely feasible, as long as your satellite regularly uses little bursts from its engines to keep it there. L1 is "unstable", meaning that a satellite without engines will eventually drift away from L1. But the closer your satellite stays to L1, the less fuel it requires to stay in place. Low thrust, high specific impulse ...
orbital-mechanics artificial-satellite orbit solar-power
Note: the question has been radically re-written since this answer was written. Consequently, it is no longer relevant to the question. If you want an object to stay between the Sun and Earth, it has to be at the Earth-Sun L1 point, which is about 1.5 million km away. We already have stuff there: https://en.wikipedia.org/wiki/...
If I understand the question as it is evolving, you are looking for an orbit that produces a solar eclipse; a complete shadow of the Sun on a small area of the Earth, and further that the object casting the shadow not be in an orbit around the Earth as in the question Is a sun-blocking orbit possible? but instead be in a heliocentric orbit. That would mean ...
The only stable points that orbit at the same speed as Earth are the L4 and L5 points, as you mention, but there are some unstable ones as well. See this pic from NASA: L4 and L5 remain ahead of and behind the Earth, whereas L1, L2 and L3 are inherently unstable. From your question, I'd suggest L4 and L5 would be best suited, unless you really need ...
Rory Alsop
You're on the right track looking up Lagrangian points, orbits where a small object can stay in the same relationship with two celestial bodies, one orbiting another. The one you are describing is the earth-sun L2 point, a point outside of earth's orbit around the sun. This Wikipedia page will tell you more.
antlersoft
Space Shuttle Moon Mission? [duplicate]
Not even remotely enough delta-v to make the TLI burn. The Apollo burn was 3+ km/sec. The space shuttle OMS system was good for about 300 m/sec, so about 10% of the amount needed.
orbital-mechanics the-moon space-shuttle
zeta-band
Are Lagrangian points associated only with the smaller body?
@asdfex's answer is correct. I'll just add an illustration. In the Circular Restricted Three-Body Problem or CR3BP (where the Lagrange points are defined) there are some conventions to make the math easier. With $m_1 + m_2$, the distance between $m_1$ and $m_2$, and the rotation rate all being unity (1.0) the reduced mass is defined as $$\mu = \frac{m_2}{...
orbital-mechanics lagrangian-points
The 5 points you show are the only ones in the Sun-Earth system, there is no other set "with respect to Sun". The set of points seems to be Earth-centric, just because the points are closer to the smaller of the bodies and we choose Sun as the center of rotation here. You can take the exact same drawing and revolve it around Earth or around the common ...
asdfex
Denjoy–Riesz theorem
In topology, the Denjoy–Riesz theorem states that every compact set of totally disconnected points in the Euclidean plane can be covered by a continuous image of the unit interval, without self-intersections (a Jordan arc).
Definitions and statement
A topological space is zero-dimensional according to the Lebesgue covering dimension if every finite open cover has a refinement that is also an open cover by disjoint sets. A topological space is totally disconnected if it has no nontrivial connected subsets; for points in the plane, being totally disconnected is equivalent to being zero-dimensional. The Denjoy–Riesz theorem states that every compact totally disconnected subset of the plane is a subset of a Jordan arc.[1]
History
Kuratowski (1968) credits the result to publications by Frigyes Riesz in 1906, and Arnaud Denjoy in 1910, both in Comptes rendus de l'Académie des sciences.[2] As Moore & Kline (1919) describe,[3] Riesz actually gave an incorrect argument that every totally disconnected set in the plane is a subset of a Jordan arc. This generalized a previous result of L. Zoretti, which used a more general class of sets than Jordan arcs, but Zoretti found a flaw in Riesz's proof: it incorrectly presumed that one-dimensional projections of totally disconnected sets remained totally disconnected. Then, Denjoy (citing neither Zoretti nor Riesz) claimed a proof of Riesz's theorem, with little detail. Moore and Kline state and prove a generalization that completely characterizes the subsets of the plane that can be subsets of Jordan arcs, and that includes the Denjoy–Riesz theorem as a special case.[3]
Applications and related results
By applying this theorem to a two-dimensional version of the Smith–Volterra–Cantor set, it is possible to find an Osgood curve, a Jordan arc or closed Jordan curve whose Lebesgue measure is positive.[4]
A related result is the analyst's traveling salesman theorem, describing the point sets that form subsets of curves of finite arc length. Not every compact totally disconnected set has this property, because some compact totally disconnected sets require any arc that covers them to have infinite length.
References
1. Krupka, Demeter (2015), Introduction to global variational geometry, Atlantis Studies in Variational Geometry, vol. 1, Atlantis Press, Paris, p. 158, doi:10.2991/978-94-6239-073-7, ISBN 978-94-6239-072-0, MR 3290001.
2. Kuratowski, K. (1968), Topology. Vol. II, New edition, revised and augmented. Translated from the French by A. Kirkor, Państwowe Wydawnictwo Naukowe Polish Scientific Publishers, Warsaw, p. 539, ISBN 9781483271798, MR 0259835.
3. Moore, R. L.; Kline, J. R. (1919), "On the most general plane closed point-set through which it is possible to pass a simple continuous arc", Annals of Mathematics, Second Series, 20 (3): 218–223, doi:10.2307/1967872, JSTOR 1967872, MR 1502556.
4. Balcerzak, M.; Kharazishvili, A. (1999), "On uncountable unions and intersections of measurable sets", Georgian Mathematical Journal, 6 (3): 201–212, doi:10.1023/A:1022102312024, MR 1679442, S2CID 1486611. For an earlier construction of a positive-area Jordan curve, not using this theorem, see Osgood, William F. (1903), "A Jordan curve of positive area", Transactions of the American Mathematical Society, 4 (1): 107–112, doi:10.2307/1986455, JSTOR 1986455.
Brass is an alloy created using $80\%$ copper and $20\%$ zinc. If Henri's brass trumpet contains 48 ounces of copper, how many ounces of zinc are in the trumpet?
This means that the trumpet is $\frac{4}{5}$ copper, and $\frac{1}{5}$ zinc. Since there are 48 ounces of copper, and that represents $\frac{4}{5}$ of the total, we can simply divide by 4 to find the corresponding amount of zinc, making for $\frac{48}{4} = \boxed{12}$ ounces of zinc.
Fredholm theory
In mathematics, Fredholm theory is a theory of integral equations. In the narrowest sense, Fredholm theory concerns itself with the solution of the Fredholm integral equation. In a broader sense, the abstract structure of Fredholm's theory is given in terms of the spectral theory of Fredholm operators and Fredholm kernels on Hilbert space. The theory is named in honour of Erik Ivar Fredholm.
Overview
The following sections provide a casual sketch of the place of Fredholm theory in the broader context of operator theory and functional analysis. The outline presented here is broad, whereas the difficulty of formalizing this sketch is, of course, in the details.
Fredholm equation of the first kind
Much of Fredholm theory concerns itself with the following integral equation for f when g and K are given:
$g(x)=\int _{a}^{b}K(x,y)f(y)\,dy.$
This equation arises naturally in many problems in physics and mathematics, as the inverse of a differential equation. That is, one is asked to solve the differential equation
$Lg(x)=f(x)$
where the function f is given and g is unknown. Here, L stands for a linear differential operator.
For example, one might take L to be an elliptic operator, such as
$L={\frac {d^{2}}{dx^{2}}}\,$
in which case the equation to be solved becomes the Poisson equation.
A general method of solving such equations is by means of Green's functions, namely, rather than a direct attack, one first finds the function $K=K(x,y)$ such that for a given pair x,y,
$LK(x,y)=\delta (x-y),$
where δ(x) is the Dirac delta function.
The desired solution to the above differential equation is then written as an integral in the form of a Fredholm integral equation,
$g(x)=\int K(x,y)f(y)\,dy.$
The function K(x,y) is variously known as a Green's function, or the kernel of an integral. It is sometimes called the nucleus of the integral, whence the term nuclear operator arises.
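For example, with $L={\tfrac {d^{2}}{dx^{2}}}$ acting on functions on $[0,1]$ that vanish at both endpoints, a short direct computation gives the explicit kernel $K(x,y)=x(y-1)$ for $x\leq y$ and $K(x,y)=y(x-1)$ for $x\geq y$: this kernel is continuous, satisfies the boundary conditions, and its derivative in $x$ jumps by $1$ across $x=y$, which is exactly the condition $LK(x,y)=\delta (x-y)$.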
In the general theory, x and y may be points on any manifold; the real number line or m-dimensional Euclidean space in the simplest cases. The general theory also often requires that the functions belong to some given function space: often, the space of square-integrable functions is studied, and Sobolev spaces appear often.
The actual function space used is often determined by the solutions of the eigenvalue problem of the differential operator; that is, by the solutions to
$L\psi _{n}(x)=\omega _{n}\psi _{n}(x)$
where the ωn are the eigenvalues, and the ψn(x) are the eigenvectors. The set of eigenvectors span a Banach space, and, when there is a natural inner product, then the eigenvectors span a Hilbert space, at which point the Riesz representation theorem is applied. Examples of such spaces are the orthogonal polynomials that occur as the solutions to a class of second-order ordinary differential equations.
Given a Hilbert space as above, the kernel may be written in the form
$K(x,y)=\sum _{n}{\frac {\psi _{n}(x)\psi _{n}(y)}{\omega _{n}}}.$
In this form, the object K(x,y) is often called the Fredholm operator or the Fredholm kernel. That this is the same kernel as before follows from the completeness of the basis of the Hilbert space, namely, that one has
$\delta (x-y)=\sum _{n}\psi _{n}(x)\psi _{n}(y).$
Since the ωn are generally increasing, the resulting eigenvalues of the operator K(x,y) are thus seen to be decreasing towards zero.
Inhomogeneous equations
The inhomogeneous Fredholm integral equation
$f(x)=-\omega \varphi (x)+\int K(x,y)\varphi (y)\,dy$
may be written formally as
$f=(K-\omega )\varphi $
which has the formal solution
$\varphi ={\frac {1}{K-\omega }}f.$
A solution of this form is referred to as the resolvent formalism, where the resolvent is defined as the operator
$R(\omega )={\frac {1}{K-\omega I}}.$
Given the collection of eigenvectors and eigenvalues of K, the resolvent may be given a concrete form as
$R(\omega ;x,y)=\sum _{n}{\frac {\psi _{n}(y)\psi _{n}(x)}{\omega _{n}-\omega }}$
with the solution being
$\varphi (x)=\int R(\omega ;x,y)f(y)\,dy.$
A necessary and sufficient condition for such a solution to exist is one of Fredholm's theorems. The resolvent is commonly expanded in powers of $\lambda =1/\omega $, in which case it is known as the Liouville-Neumann series. In this case, the integral equation is written as
$g(x)=\varphi (x)-\lambda \int K(x,y)\varphi (y)\,dy$
and the resolvent is written in the alternate form as
$R(\lambda )={\frac {1}{I-\lambda K}}.$
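As a concrete numerical illustration (a minimal sketch only: the kernel $K(x,y)=e^{-|x-y|}$, the right-hand side and the trapezoidal grid are all assumptions chosen for demonstration), the equation $g=\varphi -\lambda K\varphi$ can be discretized on a quadrature grid and solved as an ordinary linear system, the Nyström method:

```python
import numpy as np

# Nystrom discretization of  phi(x) - lam * \int_0^1 K(x,y) phi(y) dy = g(x)
n = 200
x = np.linspace(0.0, 1.0, n)
w = np.full(n, x[1] - x[0]); w[0] *= 0.5; w[-1] *= 0.5    # trapezoid weights

K = np.exp(-np.abs(x[:, None] - x[None, :]))    # illustrative kernel
g = np.sin(np.pi * x)                           # illustrative right-hand side
lam = 0.5

A = np.eye(n) - lam * K * w[None, :]            # matrix form of (I - lam K)
phi = np.linalg.solve(A, g)

# sanity check: residual of the discretized equation is at roundoff level
print(np.max(np.abs(phi - lam * (K * w[None, :]) @ phi - g)))
```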
Fredholm determinant
The Fredholm determinant is commonly defined as
$\det(I-\lambda K)=\exp \left[-\sum _{n}{\frac {\lambda ^{n}}{n}}\operatorname {Tr} \,K^{n}\right]$
where
$\operatorname {Tr} \,K=\int K(x,x)\,dx$
and
$\operatorname {Tr} \,K^{2}=\iint K(x,y)K(y,x)\,dx\,dy$
and so on. The corresponding zeta function is
$\zeta (s)={\frac {1}{\det(I-sK)}}.$
The zeta function can be thought of as the determinant of the resolvent.
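Continuing the same discretization (again only a sketch, under the same assumed kernel and grid), the Fredholm determinant and the associated zeta function can be approximated by an ordinary matrix determinant of the Nyström matrix:

```python
# reusing x, w, K and n from the sketch above
s = 0.5
det_approx  = np.linalg.det(np.eye(n) - s * K * w[None, :])   # ~ det(I - sK)
zeta_approx = 1.0 / det_approx                                # ~ zeta(s)
print(det_approx, zeta_approx)
```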
The zeta function plays an important role in studying dynamical systems. Note that this is the same general type of zeta function as the Riemann zeta function; however, in this case, the corresponding kernel is not known. The existence of such a kernel is known as the Hilbert–Pólya conjecture.
Main results
The classical results of the theory are Fredholm's theorems, one of which is the Fredholm alternative.
One of the important results from the general theory is that the kernel is a compact operator when the space of functions is equicontinuous.
A related celebrated result is the Atiyah–Singer index theorem, pertaining to index (dim ker – dim coker) of elliptic operators on compact manifolds.
History
Fredholm's 1903 paper in Acta Mathematica is considered to be one of the major landmarks in the establishment of operator theory. David Hilbert developed the abstraction of Hilbert space in association with research on integral equations prompted by Fredholm's (amongst other things).
See also
• Green's functions
• Spectral theory
• Fredholm alternative
References
• Fredholm, E. I. (1903). "Sur une classe d'equations fonctionnelles" (PDF). Acta Mathematica. 27: 365–390. doi:10.1007/bf02421317.
• Edmunds, D. E.; Evans, W. D. (1987). Spectral Theory and Differential Operators. Oxford University Press. ISBN 0-19-853542-2.
• B. V. Khvedelidze, G. L. Litvinov (2001) [1994], "Fredholm kernel", Encyclopedia of Mathematics, EMS Press
• Driver, Bruce K. "Compact and Fredholm Operators and the Spectral Theorem" (PDF). Analysis Tools with Applications. pp. 579–600.
• Mathews, Jon; Walker, Robert L. (1970). Mathematical Methods of Physics (2nd ed.). New York: W. A. Benjamin. ISBN 0-8053-7002-1.
• McOwen, Robert C. (1980). "Fredholm theory of partial differential equations on complete Riemannian manifolds". Pacific J. Math. 87 (1): 169–185. doi:10.2140/pjm.1980.87.169. Zbl 0457.35084.
Significance of Induced Magnetic Field and Exponential Space Dependent Heat Source on Quadratic Convective Flow of Casson Fluid in a Micro-channel via HPM
Thriveni Kunnegowda | Basavarajappa Mahanthesh* | Giulio Lorenzini | Isaac Lare Animasaun
Department of Mathematics, CHRIST (Deemed to be University), Bangalore 560029, Karnataka, India
Department of Engineering and Architecture, University of Parma, Parco Area Delle Scienze 181/A, 43124 Parma, Italy
Fluid Dynamics Research Group, Department of Mathematical Sciences, Federal University of Technology, Akure, Nigeria
https://doi.org/10.18280/mmep.060308
The effect of an exponential space-based heat source on the quadratic convective flow of Casson fluid in a microchannel with an induced magnetic field is studied through a statistical approach. The flow is considered in a vertical microchannel formed by two vertical plates. The solutions of the governing equations have been obtained for the velocity, induced magnetic field and temperature field using the Homotopy Perturbation Method (HPM). The current density, skin friction coefficient and Nusselt number expressions are also estimated. The impact of various physical parameters on the velocity, temperature, induced magnetic field, current density, skin friction coefficient and Nusselt number distributions has been discussed with the help of graphs. The results obtained by using HPM are compared to those obtained by using the Runge-Kutta-Fehlberg 4-5th order method, and an excellent agreement is found. The impacts of the Casson fluid parameter and the exponential heat source qualitatively agree for all flow fields.
Casson fluid, exponential heat source, microchannel, nonlinear convection, nonlinear Boussinesq approximation
The heating/cooling applications in engineering require high thermal performance from thermal systems. As a result, many researchers have been attracted to finding techniques to enhance the rate of heat transfer in cooling and thermal engineering systems. However, the enhancement of thermal energy transport is one of the challenges in these applications. Significant heat transfer enhancement can be obtained by developing compact devices that are small in size, light in weight and highly efficient. A device that transfers energy between two media at different temperatures is termed a heat exchanger. In the field of energy conservation, conversion and recovery, heat exchangers play a very important role. Heat exchangers can be found in many applications such as household air conditioning, automotive air conditioning systems and manufacturing processes. In view of this, in 1981 Tuckerman and Pease [1] proposed a micro-channel heat exchanger for the first time. Later, Mehendale defined the micro-channel heat exchanger as one with a hydraulic diameter of less than 1 mm. The heat exchange between two different fluids in a microchannel was first developed by Swift [2] in 1985. The natural convection in an open-ended micro-channel was investigated analytically by Chen and Weng [3]. They found that in slip-flow natural convection, rarefaction and fluid-wall interaction have significant effects on the flow. Taking the suction/injection effect into account, this work was later extended by Jha et al. [4]. They concluded that the skin friction coefficient and the rate of heat transfer strongly depend on the suction/injection parameter. Wang and Chiu-On [5] investigated natural convection in a vertical slit microchannel with superhydrophobic slip and temperature jump. The main conclusion drawn from these studies is that heat transfer enhancement can be achieved by employing microchannels.
The above studies are concerned with natural convection involving various physical effects such as MHD, suction/injection and velocity slip conditions, wherein the linear Boussinesq approximation has been taken into account. Since the density variation is directly proportional to the temperature/concentration difference, as the temperature difference increases it is possible to have a nonlinear variation in the density, which will consequently affect the flow fields. The nonlinear density variation with temperature was proposed by Vajravelu et al. [6] and is as follows
$\rho(T)=\rho\left(T_{0}\right)+\left(\frac{\partial \rho}{\partial T}\right)_{0}\left(T-T_{0}\right)+\frac{1}{2}\left(\frac{\partial^{2} \rho}{\partial T^{2}}\right)_{0}\left(T-T_{0}\right)^{2}+\cdots$
Following Vajravelu et al. [6], the three-dimensional analysis of radiation and nonlinear convection for the flow of a non-Newtonian nanofluid was studied by Mahanthesh et al. [7]. They found that the temperature profile is stronger in the case of solar radiation. Hayat et al. [8] studied the effect of nonlinear convection in a thixotropic fluid with magnetic field. Nonlinear convection of a third grade fluid in stratified flow was investigated by Waqas et al. [9]. Gireesha et al. [10] studied the nonlinear convective flow of a nanoliquid subjected to an exponential heat source and variable viscosity. However, the literature on nonlinear convection in microchannels is limited. Thus, this study is proposed to fill this gap in the literature.
The Newtonian theory fails to explain the characteristics of many materials like paint, shampoos, printing ink, tomato paste, etc., so non-Newtonian theories were introduced. Among them, the Casson liquid exhibits yield stress and shear-thinning characteristics along with high shear viscosity. The Casson fluid model was first introduced by Casson in the year 1959 and describes the flow of viscoelastic fluids. Many researchers have shown interest in the Casson fluid model due to its variety of applications in the fields of petrochemical processing, food processing, metallurgy, etc. The flow of Casson fluid over a stretching cylinder in the presence of magnetism was studied by Tamoor et al. [11]. Later, a numerical study on magneto Casson fluid with cross-diffusion effect was carried out by Pushpalatha et al. [12]. MHD flow of Casson fluid through a porous microchannel subjected to thermal radiation was examined by Shashikumar et al. [13]. Makinde et al. [14-15] addressed the combined effect of thermal radiation, suction/injection, magnetic field and porous media on the forced convection flow of an electrically conducting Casson fluid in horizontal and vertical microchannels with velocity slip and temperature jump conditions.
Magnetohydrodynamics deals with the motion of electrically conducting media influenced by electromagnetic fields. It is mainly focused on flows in which currents are induced by induction. The key feature of magnetohydrodynamics is that a magnetic field can induce currents in a moving conductive field. The induced magnetic field plays a significant role in the case of nuclear reactors, thermo-magneto aerodynamics, etc. The significance of the induced magnetic field on natural convection in a vertical microchannel was investigated by Basant et al. [16]. Shivakumar et al. [17] studied the influence of the induced magnetic field on forced convection subjected to a magnetic field. The role of the induced magnetic field on a mixed convection flow in a microchannel was addressed by Basant et al. [18]. In view of these, the study of the transport of Casson fluid under the nonlinear Boussinesq approximation in a microchannel in the presence of an induced magnetic field and an exponential heat source is an open question. Therefore, the prime purpose of this study is to investigate the momentum and thermal behavior of Casson fluid in the presence of the induced magnetic field and exponential heat source under the nonlinear Boussinesq approximation in a microchannel. The governing equations are treated analytically by using HPM under velocity slip and temperature jump boundary conditions. The following section illustrates the basic idea of HPM.
2. Idea of HPM
To explain the basic idea of HPM, consider the nonlinear differential equation of the form (see [19-20]):
$A(u)-f(l)=0, l \in D$ (1)
with the boundary condition:
$B\left(u, \frac{\partial u}{\partial m}\right)=0, l \in F$ (2)
where A, B, f(l) and F are the general differential operator, the boundary operator, a known analytical function and the boundary of the domain D, respectively. The operator A can be divided into linear (L) and nonlinear (N) parts. Therefore Eq. (1) can be written as:

$L(u)+N(u)-f(l)=0$
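As a simple illustration of the method (a standard textbook example, not drawn from the present microchannel problem), consider $u'(l)+u^{2}(l)=0$ with $u(0)=1$, so that $L(u)=u'$ and $N(u)=u^{2}$. Taking the initial guess $u_{0}(l)=1$ and expanding the solution in the embedding parameter as $v=v_{0}+p\,v_{1}+p^{2}v_{2}+\cdots$, collecting equal powers of $p$ gives $v_{0}=1$; then $v_{1}'+v_{0}^{2}=0$ with $v_{1}(0)=0$, so $v_{1}=-l$; then $v_{2}'+2v_{0}v_{1}=0$, so $v_{2}=l^{2}$. Setting $p=1$ yields $u(l)\approx 1-l+l^{2}-\cdots$, which is the series expansion of the exact solution $u(l)=1/(1+l)$.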
Figure 7. Velocity profile $u(y)$ and induced magnetic field $H(y)$ for different values of exponential index $n$
Figures 10(a & b) and 11(a & b) show the effect of $Q$ and $n$ on $\theta(y)$ and $J(y)$. It is observed that the temperature profile can be increased by increasing the value of the heat source parameter. This is because of the dissipation of energy due to the heat source aspect, whereas a reduction in the temperature profile can be seen by increasing the value of $n$. A similar nature can be seen in the induced current density. Figures 12(a) and 13(a) present the variation of $\beta_v K_n$ and $ln$ on $\theta(y)$. It is seen that an increase in the values of $\beta_v K_n$ and $ln$ causes an enhancement in the temperature profile because of the increase in the temperature jump. The influence of $\beta_v K_n$ and $ln$ on $\theta(y)$ becomes significant as $\xi$ increases. Figures 12(b) and 13(b) show the effect of $\beta_v K_n$ and $ln$ on $J(y)$. It is observed that an increase in both $\beta_v K_n$ and $ln$ causes an enhancement in $J(y)$ in the domain $y \in (0.2, 0.7)$, whereas the inverse trend is seen in the domains $y \in (0, 0.2)$ and $y \in (0.7, 1)$. For the wall-ambient temperature difference ratio, the inverse effect on the induced current density is seen. It is also found that the induced current density becomes independent of $\beta_v K_n$ and $ln$ at two points due to the existence of points of intersection inside the microchannel.
Figure 8. Velocity profile $u(y)$ and induced magnetic field $H(y)$ for different values of Knudsen number $\beta_{v} K_{n}$
Figure 9. Velocity profile $u(y)$ and induced magnetic field $H(y)$ for different values of fluid-wall interaction parameter $\ln$
Figure 10. Temperature profile $\theta(y)$ and induced current density $J(y)$ for different values of $Q$
Figure 11. Temperature profile $\theta(y)$ and induced current density $J(y)$ for different values of $n$
Figure 12. Temperature profile $\theta(y)$ and induced current density $J(y)$ for different values of $\beta_{v} K_{n}$
Figure 13. Temperature profile $\theta(y)$ and induced current density $J(y)$ for different values of $ln$
Figure 14(a & b) shows the effect of $\beta$, $\alpha$ and $\xi$ on the induced current density. Here an increase in $\beta$ and $\alpha$ causes an enhancement in the induced current density at the central region of the vertical microchannel, while the reverse behavior is observed at the microchannel plates. Also, it is interesting to note that the current density changes its behavior at two points inside the microchannel with $\beta$ and $\alpha$.
Figure 14. Induced current density $J(y)$ for different values of $\beta$ and $\alpha$
Figure 15(a & b) presents the variations of $Q_m$ with respect to $\beta_v K_n$ for different values of $Q$ and $\alpha$. It is seen that an increase in $Q$ and $\alpha$ causes an enhancement in the volume flow rate ($Q_m$) for both symmetric and asymmetric heating. Also, it is found that $Q_m$ is an increasing function of $\xi$ and $ln$. Figures 16(a) and 17(a) illustrate the effect of $Q$, $\beta_v K_n$ and $\xi$ on the skin friction. It is found that an increase in $Q$ leads to an increase in the skin friction at the wall $y=0$, while the reverse nature occurs at the microchannel wall $y=1$. Furthermore, it is evident that the skin friction $\tau_1$ is larger in the case of asymmetric heating compared with symmetric heating, whereas the reverse trend is seen for $\tau_0$. Also, similar effects can be found in Figures 16(b) and 17(b) for different values of $\alpha$.
Figures 18(a) and 19(a) show the effect of $Q$ on the Nusselt number. It is observed that the heat transfer rate increases with the increase in the value of $Q$ at the wall $y=0$, while the reverse trend occurs at the microchannel wall $y=1$. In addition, it is found that the heat transfer rate is higher in the case of asymmetric heating than in that of symmetric heating. Figures 18(b) and 19(b) depict the effect of the fluid-wall interaction parameter on the Nusselt number. It is seen that the heat transfer rate decreases with rising values of $ln$, $\beta_v K_n$ and $\xi$.
Figure 15. Volume flow rate $Q_{m}$ for different values of $Q$ and $\alpha$
Figure 16. Skin friction $\tau_{0}$ for different values of $Q$ and $\alpha$
Figure 18. Nusselt number $N u_{0}$ for different values of $Q$ and $\ln$
Table 2. Numerical values of volume flow rate $(Q_m)$ for various values of $M, Pm, \beta, n, ln$ when $Q=2$ and $\alpha=0.5$, along with the slope of the data points
(Table body not reproduced: columns give $Q_m$ for $\beta_v K_n = 0.05$ and $\beta_v K_n = 0.1$ under $\xi = 1$ and $\xi = 0$.)
Table 3. Numerical values of skin friction $(\tau_0)$ for various values of $M, Pm, \beta, n, ln$ when $Q=2$, $\alpha=0.5$, along with the slope of the data points
(Table body not reproduced: columns give $\tau_0$ for $\beta_v K_n = 0.05$ and $\beta_v K_n = 0.1$ under $\xi = 1$ and $\xi = 0$.)
Table 4. Numerical values of skin friction $(\tau_1)$ for various values of $M, Pm, \beta, n, ln$ when $Q=2$, $\alpha=0.5$, along with the slope of the data points
(Table body not reproduced: columns give $\tau_1$ for $\beta_v K_n = 0.05$ and $\beta_v K_n = 0.1$ under $\xi = 1$ and $\xi = 0$.)
The numerical values of $Q_m$ for various values of $M, Pm, \beta, n$ and $ln$ when $Q=2$ and $\alpha=0.5$ are recorded for the cases $\xi=1$ and $\xi=0$ in Table 2. Also, the slope of a linear regression through the data points is estimated to quantify the amount of increase or decrease in $Q_m$. It is seen that $Q_m$ is a declining function of $M, Pm$ and $n$, whereas $Q_m$ is an increasing function of $\beta$ and $ln$. The impact of $\beta$ on $Q_m$ is more significant than that of $ln$. Tables 3 and 4 present the numerical values of the skin friction coefficient at $y=0$ and $y=1$ respectively for various values of $M, Pm, \beta, n$ and $ln$ when $Q=2$ and $\alpha=0.5$. It is found that $\tau_0$ is an increasing function of $M, Pm, \beta, n$ and $ln$. The impact of $n$ on $\tau_0$ is more significant than that of $M, Pm, \beta$ and $ln$. From Table 4 it is noticed that $\tau_1$ is an increasing function of $M, Pm$ and $n$, whereas it is a declining function of $\beta$ and $ln$.
7. Statistical Analysis
7.1 Correlation coefficient and probable error
The correlation coefficient $(r)$ and probable error $(PE)$ are calculated for the skin friction coefficient and Nusselt number for various parameters. The nature of the relationship between the variables considered is determined by the sign of $r$. The precision of the correlation coefficient is assessed by using the probable error $(PE)$. If $r > 6 \cdot PE$ then the correlation is said to be significant, according to Fisher [22]. The probable error is given by:
$P E=\left(\frac{1-r^{2}}{\sqrt{j}}\right) 0.6745$
where j denotes the number of observations.
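A minimal numerical sketch of this significance test (the data arrays below are hypothetical placeholders, not the article's data):

```python
import numpy as np

def correlation_and_pe(x, y):
    """Pearson r and Fisher's probable error PE = 0.6745 (1 - r^2) / sqrt(j)."""
    r = np.corrcoef(x, y)[0, 1]
    j = len(x)                                   # number of observations
    pe = 0.6745 * (1.0 - r**2) / np.sqrt(j)
    return r, pe

# hypothetical sample: a response varying almost linearly with a parameter Q
Q = np.linspace(0.1, 0.6, 30)
tau = 0.03 * Q + 0.12 + 0.001 * np.random.default_rng(0).standard_normal(30)

r, pe = correlation_and_pe(Q, tau)
print(r, pe, "significant" if abs(r) > 6 * pe else "not significant")
```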
Table 5. Correlation coefficient ($r$), probable error ($PE$) and $\left|\frac{r}{PE}\right|$ values for $\tau_0$ with respect to the parameters $Q, \beta, \alpha, n, ln$ and $\beta_v K_n$
(Table body not reproduced: $r$, $PE$ and $|r/PE|$ for each parameter.)
Table 5 illustrates that $\tau_0$ is highly positively correlated with $Q, \beta, \alpha$ and $ln$, while it is negatively correlated with $\beta_v K_n$ and $n$. From Table 6, it is observed that $\tau_1$ is highly negatively correlated with $Q, \beta, \alpha, ln$ and $\beta_v K_n$, whereas it is positively correlated with $n$. Table 7 shows that $Nu_0$ is highly positively correlated with $Q$ and negatively correlated with $n$, $\beta_v K_n$ and $ln$. Similarly, using Table 8, it is observed that $Nu_1$ is highly negatively correlated with $Q$, $ln$ and $\beta_v K_n$ and positively correlated with $n$. Finally, in all the cases the correlations obtained for $\tau_0$, $\tau_1$, $Nu_0$ and $Nu_1$ are significant because $\left|\frac{r}{PE}\right| > 6$.
Table 6. Correlation coefficient ($r$), probable error ($PE$) and $\left|\frac{r}{PE}\right|$ values for $\tau_1$ with respect to the parameters $Q, \beta, \alpha, n, ln$ and $\beta_v K_n$
(Table body not reproduced: $r$, $PE$ and $|r/PE|$ for each parameter.)
Table 7. Correlation coefficient ($r$), probable error ($PE$) and $\left|\frac{r}{PE}\right|$ values for $Nu_0$ with respect to the parameters $Q, n, ln$ and $\beta_v K_n$
(Table body not reproduced: $r$, $PE$ and $|r/PE|$ for each parameter.)
Table 8. Correlation coefficient ($r$), probable error ($PE$) and $\left|\frac{r}{PE}\right|$ values for $Nu_1$ with respect to the parameters $Q, n, ln$ and $\beta_v K_n$
(Table body not reproduced: $r$, $PE$ and $|r/PE|$ for each parameter.)
7.2 Regression analysis
Regression analysis is performed to estimate the skin friction coefficient and Nusselt number by multivariable linear regression models. Since the curves of $\tau$ and $Nu$ (see Figures 16-19) are linear in nature, the linear regression model is specifically chosen to estimate them. The estimated models are given below:
$\tau_{0est} = b_Q Q + b_M M + b_{Pm} Pm + b_\beta \beta + b_\alpha \alpha + b_n n + b_{ln} ln + b_{\beta_v K_n} \beta_v K_n + C_1$

$\tau_{1est} = b_Q Q + b_M M + b_{Pm} Pm + b_\beta \beta + b_\alpha \alpha + b_n n + b_{ln} ln + b_{\beta_v K_n} \beta_v K_n + C_2$

$Nu_{0est} = b_Q Q + b_n n + b_{ln} ln + b_{\beta_v K_n} \beta_v K_n + C_3$

$Nu_{1est} = b_Q Q + b_n n + b_{ln} ln + b_{\beta_v K_n} \beta_v K_n + C_4$
where $b_Q, b_M, b_{Pm}, b_\beta, b_\alpha, b_n, b_{ln}$ and $b_{\beta_v K_n}$ are the estimated regression coefficients and $C_1, C_2, C_3$ and $C_4$ are constants. The $\tau_0$ values are estimated from 30 sets of random values of $Q, M, \beta, \alpha, n, ln, \beta_v K_n \in [0.1, 0.6]$ and $Pm \in [0.01, 0.07]$ for the regression model. It is found that all the physical parameters achieve significance value < 0.05 for significant regression coefficients except the parameters $Q$ and $Pm$ (see Table 9).
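A minimal sketch of how such a multivariable fit can be produced (the sampled inputs and the synthetic response below are placeholders, not the article's data; names follow the article's notation):

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs = 30
# hypothetical random samples of the physical parameters
Q, M, Pm, beta, alpha, nn, ln_, bvKn = (rng.uniform(0.1, 0.6, n_obs) for _ in range(8))

# synthetic response built from the coefficients reported for tau_0est
tau0 = (0.029*Q - 0.001*M - 0.003*Pm + 0.440*beta + 0.264*alpha
        - 0.022*nn + 0.011*ln_ + 0.456*bvKn - 0.111)

X = np.column_stack([Q, M, Pm, beta, alpha, nn, ln_, bvKn, np.ones(n_obs)])
coef, *_ = np.linalg.lstsq(X, tau0, rcond=None)
print(coef)   # recovers b_Q, b_M, ..., b_{beta_v Kn} and the constant C_1
```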
Table 9. Regression coefficients for the multiple linear regression model for $\tau_0$
(Table body not reproduced: unstandardized coefficients and significance values for the constant and each parameter, including $\beta_v K_n$.)
The estimated $\tau_{0}$ is given by:
$\tau_{0est} = 0.029Q - 0.001M - 0.003Pm + 0.440\beta + 0.264\alpha - 0.022n + 0.011ln + 0.456\beta_v K_n - 0.111$
The above equation implies that the parameters $Q, \beta, \alpha, ln$ and $\beta_v K_n$ have a positive impact on $\tau_0$, whereas $M, Pm$ and $n$ have a negative impact. Similarly, $\tau_1$ values are estimated from 30 sets of random values of $Q, M, \beta, \alpha, n, ln, \beta_v K_n$ and $Pm \in [0.1, 0.4]$ for the regression model. It is evident from Table 10 that all the physical parameters have significance value < 0.05 for significant regression coefficients except the parameters $Q$ and $Pm$.
Table 10. Regression coefficients for the multiple linear regression model for $\tau_{1}$
The estimated regression model of $\tau_{1}$ is given by:
$\tau_{1est} = -0.031Q - 0.001M - 0.005Pm - 0.452\beta - 0.256\alpha + 0.035n - 0.013ln - 0.434\beta_v K_n + 0.125$
The above equation depicts that $Q, M, Pm, \beta, \alpha, ln$ and $\beta_v K_n$ have a negative impact on $\tau_1$, whereas $n$ has a positive impact.
The $Nu_0$ values are estimated from 30 sets of random values of $Q, n, ln$ and $\beta_v K_n \in [0.01, 0.08]$ for the regression model. It is found that all the physical parameters achieve significance value < 0.05 for significant regression coefficients (see Table 11).
Table 11. Regression coefficients for the multiple linear regression model for $N u_{0}$
The estimated regression model for $N u_{0}$ is given by:
$Nu_{0est} = 0.422Q - 0.309n - 0.006ln - 0.153\beta_v K_n + 0.170$
The above equation implies that $Q$ has a positive impact on $Nu_0$, whereas $n$, $ln$ and $\beta_v K_n$ have a negative impact. Similarly, $Nu_1$ values are estimated from 30 sets of random values of $Q, n, ln$ and $\beta_v K_n \in [0.1, 0.8]$ for the regression model. It is evident from Table 12 that all the physical parameters have significance value < 0.05 for significant regression coefficients.
$Nu_{1est} = -1.300Q + 20.486n - 1.355ln + 1.824\beta_v K_n - 7.913$
The above equation depicts that the parameters $Q$ and $ln$ have a negative impact on $Nu_1$, whereas $n$ and $\beta_v K_n$ have a positive impact. The outcomes of the estimated $\tau$ and $Nu$ match the actual $\tau$ and $Nu$ (see Figure 20).
Figure 20. Comparison of actual and estimated values of $\tau_0$ and $Nu_0$
8. Conclusions

The role of the exponential heat source and quadratic convection in the flow of Casson fluid with an induced magnetic field under velocity slip and temperature jump is investigated analytically by using HPM. The following conclusions are drawn.
In the induced magnetic field profile there exists a point of intersection inside the vertical microchannel which makes the induced magnetic field to be independent of the parameters involved.
As similar to the induced magnetic field there exist two points of intersection inside the vertical microchannel for the induced current density.
The effect of M and Pm on velocity profile causes a point of intersection inside the microchannel for asymmetric heating ($\xi=-1$).
The nonlinear convection parameter is favorable for skin friction ($\tau_{0}$).
Impact of Casson fluid parameter and the exponential heat source is qualitatively agreed for all flow fields.
The Nusselt number and the skin friction $S f_{1}$ is more in case of asymmetric heating in compare with symmetric heating.
The impact of exponential index is more significant for $S f_{0}$.
The solution obtained for and from the calculation and the regression equations are superimposed.
The authors (B. Mahanthesh and Thriveni K.) express their sincere thanks to the Management, CHRIST (Deemed to be University), Bangalore, India, for the support to complete this work.
Nomenclature

Distance between the plates (m)
$c_p$ - Specific heat at constant pressure (J/kg K)
$g$ - Acceleration due to gravity (m/s²)
$H_0'$ - Applied magnetic field (T)
$H_x'$ - Dimensional induced magnetic field (A/m)
$H$ - Dimensionless induced magnetic field
$ln$ - Fluid-wall interaction parameter
$J$ - Induced current density (A/m²)
Induced magnetic parameter
$Pm$ - Magnetic Prandtl number
$Pr$ - Prandtl number
$Q_m$ - Dimensionless volume flow rate
$T$ - Temperature of the fluid (K)
$T_0$ - Reference temperature (K)
$u$ - Dimensionless velocity of the fluid
Dimensional velocity of the fluid (m/s)
$Q$ - Exponential heat source parameter
$n$ - Exponential index
Thermal conductivity (W/m K)

Greek symbols

$\alpha$ - Nonlinear convection parameter
$\beta$ - Casson fluid parameter
$\beta_0, \beta_1, \beta_t, \beta_v$ - Dimensionless variables
$\gamma$ - Ratio of specific heats
$\theta$ - Dimensionless temperature
$\rho$ - Density (kg/m³)
$\mu_e$ - Magnetic permeability (H/m)
$\nu$ - Kinematic viscosity (m²/s)
$\sigma$ - Electrical conductivity of the fluid (S/m)
$\lambda$ - Molecular mean free path
$\sigma_t, \sigma_v$ - Thermal and tangential momentum accommodation coefficients, respectively
[1] Tuckerman, D.B., Pease, R.F.W. (1981). High-performance heat sinking for VLSI. IEEE Electron Device Letters, 2(5): 126-129. https://doi.org/10.1109/EDL.1981.25367
[2] Swift, G., Migliori, A., Wheatley, J. (1985). Construction of and measurements with an extremely compact cross-flow heat exchanger. Heat transfer Engineering, 6(2): 39-47. https://doi.org/10.1080/01457638508939623
[3] Weng, H.C. (2005). Natural convection in a vertical microchannel. Journal of Heat Transfer, 127(9): 1053-1056. https://doi.org/10.1115/1.1999651
\begin{definition}[Definition:Natural Numbers/Inductive Sets in Real Numbers]
Let $\R$ be the set of real numbers.
Let $\II$ be the set of all inductive sets defined as subsets of $\R$.
Then the '''natural numbers''' $\N$ are defined as:
:$\N := \ds \bigcap \II$
where $\ds \bigcap$ denotes intersection.
It follows from the definition of inductive set that according to this definition, $0 \notin \N$.
\end{definition}
Covering groups of the alternating and symmetric groups
In the mathematical area of group theory, the covering groups of the alternating and symmetric groups are groups that are used to understand the projective representations of the alternating and symmetric groups. The covering groups were classified in (Schur 1911): for n ≥ 4, the covering groups are 2-fold covers except for the alternating groups of degree 6 and 7 where the covers are 6-fold.
For example, the binary icosahedral group covers the icosahedral group, an alternating group of degree 5, and the binary tetrahedral group covers the tetrahedral group, an alternating group of degree 4.
Definition and classification
A group homomorphism from D to G is said to be a Schur cover of the finite group G if:
1. the kernel is contained both in the center and the commutator subgroup of D, and
2. amongst all such homomorphisms, this D has maximal size.
The Schur multiplier of G is the kernel of any Schur cover and has many interpretations. When the homomorphism is understood, the group D is often called the Schur cover or Darstellungsgruppe.
The Schur covers of the symmetric and alternating groups were classified in (Schur 1911). The symmetric group of degree n ≥ 4 has Schur covers of order 2⋅n!; there are two isomorphism classes if n ≠ 6 and one isomorphism class if n = 6. The alternating group of degree n has one isomorphism class of Schur cover, which has order n! except when n is 6 or 7, in which case the Schur cover has order 3⋅n!.
Finite presentations
Schur covers can be described by means of generators and relations. The symmetric group Sn has a presentation on n−1 generators ti for i = 1, 2, ..., n−1 and relations
titi = 1, for 1 ≤ i ≤ n−1
ti+1titi+1 = titi+1ti, for 1 ≤ i ≤ n−2
tjti = titj, for 1 ≤ i < i+2 ≤ j ≤ n−1.
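This presentation is easy to check by machine. As a quick sanity check (a sketch, assuming Python 3 with SymPy available; this is not part of the original article), the relations with n = 4 should define a group of order 4! = 24:

    from sympy.combinatorics.free_groups import free_group
    from sympy.combinatorics.fp_groups import FpGroup

    F, t1, t2, t3 = free_group("t1, t2, t3")
    rels = [
        t1**2, t2**2, t3**2,     # ti*ti = 1
        (t1*t2)**3, (t2*t3)**3,  # equivalent to ti+1*ti*ti+1 = ti*ti+1*ti
        (t1*t3)**2,              # equivalent to t3*t1 = t1*t3
    ]
    G = FpGroup(F, rels)         # coset enumeration on the presentation
    print(G.order())             # expected: 24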
These relations can be used to describe two non-isomorphic covers of the symmetric group. One covering group $2\cdot S_{n}^{-}$ has generators z, t1, ..., tn−1 and relations:
zz = 1
titi = z, for 1 ≤ i ≤ n−1
ti+1titi+1 = titi+1ti, for 1 ≤ i ≤ n−2
tjti = titjz, for 1 ≤ i < i+2 ≤ j ≤ n−1.
The same group $2\cdot S_{n}^{-}$ can be given the following presentation using the generators z and si given by ti or tiz according as i is odd or even:
zz = 1
sisi = z, for 1 ≤ i ≤ n−1
si+1sisi+1 = sisi+1siz, for 1 ≤ i ≤ n−2
sjsi = sisjz, for 1 ≤ i < i+2 ≤ j ≤ n−1.
The other covering group $2\cdot S_{n}^{+}$ has generators z, t1, ..., tn−1 and relations:
zz = 1, zti = tiz, for 1 ≤ i ≤ n−1
titi = 1, for 1 ≤ i ≤ n−1
ti+1titi+1 = titi+1tiz, for 1 ≤ i ≤ n−2
tjti = titjz, for 1 ≤ i < i+2 ≤ j ≤ n−1.
The same group $2\cdot S_{n}^{+}$ can be given the following presentation using the generators z and si given by ti or tiz according as i is odd or even:
zz = 1, zsi = siz, for 1 ≤ i ≤ n−1
sisi = 1, for 1 ≤ i ≤ n−1
si+1sisi+1 = sisi+1si, for 1 ≤ i ≤ n−2
sjsi = sisjz, for 1 ≤ i < i+2 ≤ j ≤ n−1.
Sometimes all of the relations of the symmetric group are expressed as (titj)mij = 1, where mij are non-negative integers, namely mii = 1, mi,i+1 = 3, and mij = 2, for 1 ≤ i < i+2 ≤ j ≤ n−1. The presentation of $2\cdot S_{n}^{-}$ becomes particularly simple in this form: (titj)mij = z, and zz = 1. The group $2\cdot S_{n}^{+}$ has the nice property that its generators all have order 2.
Projective representations
Covering groups were introduced by Issai Schur to classify projective representations of groups. A (complex) linear representation of a group G is a group homomorphism G → GL(n,C) from the group G to a general linear group, while a projective representation is a homomorphism G → PGL(n,C) from G to a projective linear group. Projective representations of G correspond naturally to linear representations of the covering group of G.
The projective representations of alternating and symmetric groups are the subject of the book (Hoffman & Humphreys 1992).
Integral homology
Covering groups correspond to the second group homology group, H2(G,Z), also known as the Schur multiplier. The Schur multipliers of the alternating groups An (in the case where n is at least 4) are the cyclic groups of order 2, except in the case where n is either 6 or 7, in which case there is also a triple cover. In these cases, then, the Schur multiplier is the cyclic group of order 6, and the covering group is a 6-fold cover.
H2(An,Z) = 0 for n ≤ 3
H2(An,Z) = Z/2Z for n = 4, 5
H2(An,Z) = Z/6Z for n = 6, 7
H2(An,Z) = Z/2Z for n ≥ 8
For the symmetric group, the Schur multiplier vanishes for n ≤ 3, and is the cyclic group of order 2 for n ≥ 4:
H2(Sn,Z) = 0 for n ≤ 3
H2(Sn,Z) = Z/2Z for n ≥ 4
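These case distinctions are small enough to encode directly; the following sketch (assuming Python 3; the function name is ours, not standard) returns the order of the Schur multiplier tabulated above:

    def schur_multiplier_order(group, n):
        """Order of H2(G,Z) for G = An (group='A') or Sn (group='S')."""
        if group == 'A':
            if n <= 3:
                return 1
            return 6 if n in (6, 7) else 2
        if group == 'S':
            return 1 if n <= 3 else 2
        raise ValueError("group must be 'A' or 'S'")

    assert schur_multiplier_order('A', 7) == 6   # the exceptional 6-fold cover
    assert schur_multiplier_order('S', 4) == 2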
Construction of double covers
The double covers can be constructed as spin (respectively, pin) covers of faithful, irreducible, linear representations of An and Sn. These spin representations exist for all n, but are the covering groups only for n≥4 (n≠6,7 for An). For n≤3, Sn and An are their own Schur covers.
Explicitly, Sn acts on the n-dimensional space Rn by permuting coordinates (in matrices, as permutation matrices). This has a 1-dimensional trivial subrepresentation corresponding to vectors with all coordinates equal, and the complementary (n−1)-dimensional subrepresentation (of vectors whose coordinates sum to 0) is irreducible for n≥4. Geometrically, this is the symmetries of the (n−1)-simplex, and algebraically, it yields maps $A_{n}\hookrightarrow \operatorname {SO} (n-1)$ and $S_{n}\hookrightarrow \operatorname {O} (n-1)$ expressing these as discrete subgroups (point groups). The special orthogonal group has a 2-fold cover by the spin group $\operatorname {Spin} (n)\to \operatorname {SO} (n),$ and restricting this cover to $A_{n}$ and taking the preimage yields a 2-fold cover $2\cdot A_{n}\to A_{n}.$ A similar construction with a pin group yields the 2-fold cover of the symmetric group: $\operatorname {Pin} _{\pm }(n)\to \operatorname {O} (n).$ As there are two pin groups, there are two distinct 2-fold covers of the symmetric group, 2⋅Sn±, also called ${\tilde {S}}_{n}$ and ${\hat {S}}_{n}$.
Construction of triple cover for n = 6, 7
The triple covering of $A_{6},$ denoted $3\cdot A_{6},$ and the corresponding triple cover of $S_{6},$ denoted $3\cdot S_{6},$ can be constructed as symmetries of a certain set of vectors in a complex 6-space. While the exceptional triple covers of A6 and A7 extend to extensions of S6 and S7, these extensions are not central and so do not form Schur covers.
This construction is important in the study of the sporadic groups, and in much of the exceptional behavior of small classical and exceptional groups, including: construction of the Mathieu group M24, the exceptional covers of the projective unitary group $U_{4}(3)$ and the projective special linear group $L_{3}(4),$ and the exceptional double cover of the group of Lie type $G_{2}(4)$.
Exceptional isomorphisms
For low dimensions there are exceptional isomorphisms with the map from a special linear group over a finite field to the projective special linear group.
For n = 3, the symmetric group is SL(2,2) ≅ PSL(2,2) and is its own Schur cover.
For n = 4, the Schur cover of the alternating group is given by SL(2,3) → PSL(2,3) ≅ A4, which can also be thought of as the binary tetrahedral group covering the tetrahedral group. Similarly, GL(2,3) → PGL(2,3) ≅ S4 is a Schur cover, but there is a second non-isomorphic Schur cover of S4 contained in GL(2,9) – note that 9 = 3^2, so this is an extension of scalars of GL(2,3). In terms of the above presentations, GL(2,3) ≅ Ŝ4.
For n = 5, the Schur cover of the alternating group is given by SL(2,5) → PSL(2,5) ≅ A5, which can also be thought of as the binary icosahedral group covering the icosahedral group. Though PGL(2,5) ≅ S5, GL(2,5) → PGL(2,5) is not a Schur cover as the kernel is not contained in the derived subgroup of GL(2,5). The Schur cover of PGL(2,5) is contained in GL(2,25) – as before, 25 = 5^2, so this extends the scalars.
For n = 6, the double cover of the alternating group is given by SL(2,9) → PSL(2,9) ≅ A6. While PGL(2,9) is contained in the automorphism group PΓL(2,9) of PSL(2,9) ≅ A6, PGL(2,9) is not isomorphic to S6, and its Schur covers (which are double covers) are not contained in nor a quotient of GL(2,9). Note that in almost all cases, $S_{n}\cong \operatorname {Aut} (A_{n}),$ with the unique exception of A6, due to the exceptional outer automorphism of A6. Another subgroup of the automorphism group of A6 is M10, the Mathieu group of degree 10, whose Schur cover is a triple cover. The Schur covers of the symmetric group S6 itself have no faithful representations as a subgroup of GL(d,9) for d≤3. The four Schur covers of the automorphism group PΓL(2,9) of A6 are double covers.
For n = 8, the alternating group A8 is isomorphic to SL(4,2) = PSL(4,2), and so SL(4,2) → PSL(4,2), which is 1-to-1, not 2-to-1, is not a Schur cover.
Properties
Schur covers of finite perfect groups are superperfect, that is, both their first and second integral homology vanish. In particular, the double covers of An for n ≥ 4 are superperfect, except for n = 6, 7, and the six-fold covers of An are superperfect for n = 6, 7.
As stem extensions of a simple group, the covering groups of An are quasisimple groups for n ≥ 5.
References
• Hoffman, P. N.; Humphreys, John F. (1992), Projective representations of the symmetric groups, Oxford Mathematical Monographs, The Clarendon Press Oxford University Press, ISBN 978-0-19-853556-0, MR 1205350
• Schur, J. (1911), "Über die Darstellung der symmetrischen und der alternierenden Gruppe durch gebrochene lineare Substitutionen", Journal für die reine und angewandte Mathematik, 139: 155–250, doi:10.1515/crll.1911.139.155, JFM 42.0154.02
• Schur, J. (2001), "On the representation of the symmetric and alternating groups by fractional linear substitutions", International Journal of Theoretical Physics, 40 (1): 413–458, doi:10.1023/A:1003772419522, ISSN 0020-7748, MR 1820589, Zbl 0969.20002 (translation of (Schur 1911) by Marc-Felix Otto)
• Wilson, Robert (October 31, 2006), "Chapter 2: Alternating groups", The Finite Simple Groups, archived from the original on May 22, 2011, 2.7: Covering groups
Haïm Brezis
Haïm Brezis (born 1 June 1944) is a French mathematician who mainly works in functional analysis and partial differential equations.
Born: 1 June 1944, Riom-ès-Montagnes, Cantal, France
Nationality: French
Alma mater: University of Paris
Known for: Brezis–Gallouet inequality, Bony–Brezis theorem, Brezis–Lieb lemma
Fields: Mathematics
Institutions: Pierre and Marie Curie University
Doctoral advisors: Gustave Choquet, Jacques-Louis Lions
Doctoral students: Abbas Bahri, Henri Berestycki, Jean-Michel Coron, Jesús Ildefonso Díaz, Pierre-Louis Lions, Juan Luis Vázquez Suárez
Biography
Brezis was born in Riom-ès-Montagnes, Cantal, France, the son of a Romanian immigrant father, who came to France in the 1930s, and a Jewish mother who fled from the Netherlands. His wife, Michal Govrin, a native Israeli, works as a novelist, poet, and theater director.[1] Brezis received his Ph.D. from the University of Paris in 1972 under the supervision of Gustave Choquet. He is currently a professor at the Pierre and Marie Curie University and a visiting distinguished professor at Rutgers University. He is a member of the Academia Europaea (1988) and a foreign associate of the United States National Academy of Sciences (2003). In 2012 he became a fellow of the American Mathematical Society.[2] He holds honorary doctorates from several universities including the National Technical University of Athens.[3] Brezis is listed as an ISI highly cited researcher.[4] He also served on the Mathematical Sciences jury for the Infosys Prize in 2013 and 2014.
Works
• Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert (1973)
• Analyse Fonctionnelle. Théorie et Applications (1983)
• Haïm Brezis. Un mathématicien juif. Entretien Avec Jacques Vauthier. Collection Scientifiques & Croyants. Editions Beauchesne, 1999. ISBN 978-2-7010-1335-0, ISBN 2-7010-1335-6
• Functional Analysis, Sobolev Spaces and Partial Differential Equations, Springer; 1st Edition. edition (November 10, 2010), ISBN 978-0-387-70913-0, ISBN 0-387-70913-4
See also
• Bony–Brezis theorem
• Brezis–Gallouet inequality
• Brezis–Lieb lemma
References
1. Dalia Karpel (2002-04-18). "Oh my love, comely as Jerusalem". Haaretz Daily Newspaper. Retrieved 2010-10-21.
2. List of Fellows of the American Mathematical Society, retrieved 2012-11-10.
3. "DHC National Technical University of Athens".
4. "List of ISI highly cited researchers".
External links
• Biographical sketch (in French)
• List of publications on the website of Rutgers University
Section 8.1 Sequences
What is a sequence?
What does it mean for a sequence to converge?
What does it mean for a sequence to diverge?
We encounter sequences every day. Your monthly utility payments, the annual interest you earn on investments, and the amount you spend on groceries each week are all examples of sequences. Other sequences with which you may be familiar include the Fibonacci sequence \(1, 1, 2, 3, 5, 8, \ldots \text{,}\) where each term is the sum of the two preceding terms, and the triangular numbers \(1, 3, 6, 10, 15, 21, 28, 36, 45, 55, \ldots \text{,}\) the number of vertices in the triangles shown in Figure 8.1.1.
Figure 8.1.1. Triangular numbers
Sequences of integers are of such interest to mathematicians and others that they have a journal 1 devoted to them and an on-line encyclopedia 2 that catalogs a huge number of integer sequences and their connections. Sequences are also used in digital recordings and images.
Our studies in calculus have dealt with continuous functions. Sequences model discrete instead of continuous information. We will study ways to represent and work with discrete information in this chapter as we investigate sequences and series, and ultimately see key connections between the discrete and continuous.
Suppose you receive \(\dollar5000\) through an inheritance. You decide to invest this money into a fund that pays \(8\%\) annually, compounded monthly. That means that each month your investment earns \(\frac{0.08}{12} \cdot P\) additional dollars, where \(P\) is your principal balance at the start of the month. So in the first month your investment earns
\begin{equation*} 5000 \left(\frac{0.08}{12}\right) \end{equation*}
or \(\dollar33.33\text{.}\) If you reinvest this money, you will then have \(\dollar5033.33\) in your account at the end of the first month. From this point on, assume that you reinvest all of the interest you earn.
How much interest will you earn in the second month? How much money will you have in your account at the end of the second month?
Complete Table 8.1.2 to determine the interest earned and total amount of money in this investment each month for one year.
Table 8.1.2. Interest
Month Interest
earned Total amount
of money
in the account
\(0\) \(\dollar0.00\) \(\dollar5000.00\)
\(1\) \(\dollar33.33\) \(\dollar5033.33\)
\(2\)
\(10\)
As we will see later, the amount of money \(P_n\) in the account after month \(n\) is given by
\begin{equation*} P_n = 5000\left(1+\frac{0.08}{12}\right)^{n}\text{.} \end{equation*}
Use this formula to check your calculations in Table 8.1.2. Then find the amount of money in the account after 5 years.
How many years will it be before the account has doubled in value to $10000?
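The computations in this preview activity are also easy to automate. The following short sketch (in Python, which is not part of the original activity; any standard Python 3 installation suffices) reproduces the table and explores both follow-up questions:

    P, r = 5000, 0.08

    def balance(n):
        # balance after n months of monthly compounding at annual rate r
        return P * (1 + r / 12) ** n

    for n in range(1, 13):      # interest earned and total, month by month
        print(n, round(balance(n) - balance(n - 1), 2), round(balance(n), 2))

    print(round(balance(60), 2))   # amount after 5 years (60 months)

    n = 0
    while balance(n) < 2 * P:      # first month at which the balance doubles
        n += 1
    print(n, n / 12)               # months required, and the same in years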
Subsection 8.1.1 Sequences
As Preview Activity 8.1.1 illustrates, many discrete phenomena can be represented as lists of numbers (like the amount of money in an account over a period of months). We call any such list a sequence. A sequence is nothing more than a list of terms in some order. We often list the entries of the sequence with subscripts,
\begin{equation*} s_1, s_2, \ldots, s_n \ldots\text{,} \end{equation*}
where the subscript denotes the position of the entry in the sequence.
Definition 8.1.3.
A sequence is a list of terms \(s_1, s_2, s_3, \ldots\) in a specified order.
We can think of a sequence as a function \(f\) whose domain is the set of positive integers where \(f(n) = s_n\) for each positive integer \(n\text{.}\) This alternative view will be useful in many situations.
We often denote the sequence
\begin{equation*} s_1, s_2, s_3, \ldots \end{equation*}
by \(\{s_n\}\text{.}\) The value \(s_n\) (alternatively \(s(n)\)) is called the \(n\)th term in the sequence. If the terms are all 0 after some fixed value of \(n\text{,}\) we say the sequence is finite. Otherwise the sequence is infinite. With infinite sequences, we are often interested in their end behavior and the idea of convergent sequences.
Let \(s_n\) be the \(n\)th term in the sequence \(1, 2, 3, \ldots\text{.}\) Find a formula for \(s_n\) and use appropriate technological tools to draw a graph of entries in this sequence by plotting points of the form \((n,s_n)\) for some values of \(n\text{.}\) Most graphing calculators can plot sequences; directions follow for the TI-84.
In the MODE menu, highlight SEQ in the FUNC line and press ENTER.
In the Y= menu, you will now see lines to enter sequences. Enter a value for nMin (where the sequence starts), a function for u(n) (the \(n\)th term in the sequence), and the value of u(nMin).
Set your window coordinates (this involves choosing limits for \(n\) as well as the window coordinates XMin, XMax, YMin, and YMax).
The GRAPH key will draw a plot of your sequence.
Using your knowledge of limits of continuous functions as \(x \to \infty\text{,}\) decide if this sequence \(\{s_n\}\) has a limit as \(n \to \infty\text{.}\) Explain your reasoning.
Let \(s_n\) be the \(n\)th term in the sequence \(1, \frac{1}{2}, \frac{1}{3}, \ldots\text{.}\) Find a formula for \(s_n\text{.}\) Draw a graph of some points in this sequence. Using your knowledge of limits of continuous functions as \(x \to \infty\text{,}\) decide if this sequence \(\{s_n\}\) has a limit as \(n \to \infty\text{.}\) Explain your reasoning.
Let \(s_n\) be the \(n\)th term in the sequence \(2, \frac{3}{2}, \frac{4}{3}, \frac{5}{4}, \ldots\text{.}\) Find a formula for \(s_n\text{.}\) Using your knowledge of limits of continuous functions as \(x \to \infty\text{,}\) decide if this sequence \(\{s_n\}\) has a limit as \(n \to \infty\text{.}\) Explain your reasoning.
Next we formalize the ideas from Activity 8.1.2.
Recall our earlier work with limits involving infinity in Section 2.8. State clearly what it means for a continuous function \(f\) to have a limit \(L\) as \(x \to \infty\text{.}\)
Given that an infinite sequence of real numbers is a function from the integers to the real numbers, apply the idea from part (a) to explain what you think it means for a sequence \(\{s_n\}\) to have a limit as \(n \to \infty\text{.}\)
Based on your response to the part (b), decide if the sequence \(\left\{ \frac{1+n}{2+n}\right\}\) has a limit as \(n \to \infty\text{.}\) If so, what is the limit? If not, why not?
In Activities 8.1.2 and 8.1.3 we investigated a sequence \(\{s_n\}\) that has a limit as \(n\) goes to infinity. More formally, we make the following definition.
A sequence \(\{ s_n \}\) converges or is a convergent sequence provided that there is a number \(L\) such that we can make \(s_n\) as close to \(L\) as we want by taking \(n\) sufficiently large. In this situation, we call \(L\) the limit of the convergent sequence and write
\begin{equation*} \lim_{n \to \infty} s_n = L\text{.} \end{equation*}
If the sequence \(\{s_n\}\) does not converge, we say that the sequence \(\{s_n\}\) diverges.
The idea of a sequence having a limit as \(n \to \infty\) is the same as the idea of a continuous function having a limit as \(x \to \infty\text{.}\) The only difference is that sequences are discrete instead of continuous.
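Before working with limits algebraically, it can help to see the definition numerically. The sketch below (in Python rather than on a calculator; a standard Python 3 installation suffices) prints terms of \(s_n = \frac{1+n}{2+n}\) from the preceding activity and their distance from \(L = 1\text{:}\)

    for n in (1, 10, 100, 10_000, 1_000_000):
        s = (1 + n) / (2 + n)
        print(n, s, abs(s - 1))    # the distance |s_n - 1| shrinks toward 0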
Use graphical and/or algebraic methods to determine whether each of the following sequences converges or diverges.
\(\displaystyle \left\{\frac{1+2n}{3n-2}\right\}\)
\(\displaystyle \left\{\frac{5+3^n}{10+2^n}\right\}\)
\(\left\{\frac{10^n}{n!}\right\}\) (where \(!\) is the factorial symbol and \(n! = n(n-1)(n-2) \cdots (2)(1)\) for any positive integer \(n\) (by convention we define \(0!\) to be 1)).
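For a numerical check of your answers, the following sketch (assuming Python 3) prints a few terms of each sequence in this activity:

    import math

    seqs = {
        "(1+2n)/(3n-2)": lambda n: (1 + 2 * n) / (3 * n - 2),
        "(5+3^n)/(10+2^n)": lambda n: (5 + 3 ** n) / (10 + 2 ** n),
        "10^n/n!": lambda n: 10 ** n / math.factorial(n),
    }
    for name, s in seqs.items():
        print(name, [s(n) for n in (1, 5, 10, 25, 50)])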
A sequence is a list of objects in a specified order. We will typically work with sequences of real numbers. We can think of a sequence as a function from the positive integers to the set of real numbers.
A sequence \(\{s_n\}\) of real numbers converges to a number \(L\) if we can make every value of \(s_k\) for \(k \ge n\) as close as we want to \(L\) by choosing \(n\) sufficiently large.
A sequence diverges if it does not converge.
1. Limits of five sequences.
Match the formulas with the descriptions of the behavior of the sequence as \(n\to\infty\text{.}\)
\(\displaystyle s_n = 1 + \cos(n)/n\)
\(\displaystyle s_n = n(n+1) - 1\)
\(\displaystyle s_n = (n+1)/n\)
\(\displaystyle s_n = (\sin(n)/n)\)
\(\displaystyle s_n = n\sin(n)/(n+1)\)
converges to zero through positive and negative numbers
does not converge, but doesn't go to \(\pm\infty\)
diverges to \(\infty\)
converges to one from above and below
converges to one from above
2. Formula for a sequence, given first terms.
Find a formula for \(s_n\text{,}\) \(n\ge 1\) for the sequence 0, 3, 8, 15, 24...
\(s_n =\)
3. Divergent or convergent sequences.
For each of the sequences below, enter either diverges if the sequence diverges, or the limit of the sequence if the sequence converges as \(n\to\infty\text{.}\) (Note that to avoid this becoming a "multiple guess" problem you will not see partial correct answers.)
A. \({4 n + 8\over n}\) :
B. \(4^n\) :
C. \({4 n + 8\over n^2}\) :
D. \({\sin n\over 4 n}\) :
4. Terms of a sequence from sampling a signal.
In electrical engineering, a continuous function like \(f(t) = \sin t\text{,}\) where \(t\) is in seconds, is referred to as an analog signal. To digitize the signal, we sample \(f(t)\) every \(\Delta t\) seconds to form the sequence \(s_n = f(n\Delta t)\text{.}\) For example, sampling \(f\) every 1/10 second produces the sequence \(\sin(1/10)\text{,}\) \(\sin(2/10)\text{,}\) \(\sin(3/10)\text{,...}\)
Suppose that the analog signal is given by
\begin{equation*} f(t) = {\sin(1 t)\over t}. \end{equation*}
Give the first 6 terms of a sampling of the signal every \(\Delta t = 1.5\) seconds:
(Enter your answer as a comma-separated list.)
Finding limits of convergent sequences can be a challenge. However, there is a useful tool we can adapt from our study of limits of continuous functions at infinity to use to find limits of sequences. We illustrate in this exercise with the example of the sequence
\begin{equation*} \frac{\ln(n)}{n}\text{.} \end{equation*}
Calculate the first 10 terms of this sequence. Based on these calculations, do you think the sequence converges or diverges? Why?
For this sequence, there is a corresponding continuous function \(f\) defined by
\begin{equation*} f(x) = \frac{\ln(x)}{x}\text{.} \end{equation*}
Draw the graph of \(f(x)\) on the interval \([0,10]\) and then plot the entries of the sequence on the graph. What conclusion do you think we can draw about the sequence \(\left\{\frac{\ln(n)}{n}\right\}\) if \(\lim_{x \to \infty} f(x) = L\text{?}\) Explain.
Note that \(f(x)\) has the indeterminate form \(\frac{\infty}{\infty}\) as \(x\) goes to infinity. What idea from differential calculus can we use to calculate \(\lim_{x \to \infty} f(x)\text{?}\) Use this method to find \(\lim_{x \to \infty} f(x)\text{.}\) What, then, is \(\lim_{n \to \infty} \frac{\ln(n)}{n}\text{?}\)
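A short computation (a sketch, assuming Python 3) for part (a) of this exercise:

    import math
    for n in range(1, 11):
        print(n, round(math.log(n) / n, 6))   # the first ten terms of ln(n)/n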
We return to the example begun in Preview Activity 8.1.1 to see how to derive the formula for the amount of money in an account at a given time. We do this in a general setting. Suppose you invest \(P\) dollars (called the principal) in an account paying \(r\%\) interest compounded monthly. In the first month you will receive \(\frac{r}{12}\) (here \(r\) is in decimal form; e.g., if we have \(8\%\) interest, we write \(\frac{0.08}{12}\)) of the principal \(P\) in interest, so you earn
\begin{equation*} P\left(\frac{r}{12}\right) \end{equation*}
dollars in interest. Assume that you reinvest all interest. Then at the end of the first month your account will contain the original principal \(P\) plus the interest, or a total of
\begin{equation*} P_1 = P + P\left(\frac{r}{12}\right) = P\left( 1 + \frac{r}{12}\right) \end{equation*}
dollars.
Given that your principal is now \(P_1\) dollars, how much interest will you earn in the second month? If \(P_2\) is the total amount of money in your account at the end of the second month, explain why
\begin{equation*} P_2 = P_1\left( 1 + \frac{r}{12}\right) = P\left( 1 + \frac{r}{12}\right)^2\text{.} \end{equation*}
Find a formula for \(P_3\text{,}\) the total amount of money in the account at the end of the third month in terms of the original investment \(P\text{.}\)
There is a pattern to these calculations. Let \(P_n\) be the total amount of money in the account at the end of the \(n\)th month. Find a formula for \(P_n\) in terms of the original investment \(P\text{.}\)
Sequences have many applications in mathematics and the sciences. In a recent paper 3 the authors write
The incretin hormone glucagon-like peptide-1 (GLP-1) is capable of ameliorating glucose-dependent insulin secretion in subjects with diabetes. However, its very short half-life (1.5-5 min) in plasma represents a major limitation for its use in the clinical setting.
The half-life of GLP-1 is the time it takes for half of the hormone to decay in its medium. For this exercise, assume the half-life of GLP-1 is 5 minutes. So if \(A\) is the amount of GLP-1 in plasma at some time \(t\text{,}\) then only \(\frac{A}{2}\) of the hormone will be present after \(t+5\) minutes. Suppose \(A_0 = 100\) grams of the hormone are initially present in plasma.
Let \(A_1\) be the amount of GLP-1 present after 5 minutes. Find the value of \(A_1\text{.}\)
Let \(A_2\) be the amount of GLP-1 present after 10 minutes. Find the value of \(A_2\text{.}\)
Let \(A_n\) be the amount of GLP-1 present after \(5n\) minutes. Find a formula for \(A_n\text{.}\)
Does the sequence \(\{A_n\}\) converge or diverge? If the sequence converges, find its limit and explain why this value makes sense in the context of this problem.
Determine the number of minutes it takes until the amount of GLP-1 in plasma is 1 gram.
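A numerical exploration of this exercise (a sketch, assuming Python 3), using the closed form \(A_n = 100\left(\frac{1}{2}\right)^n\) suggested by parts (a)-(c), with one step per 5-minute half-life:

    def A(n):
        return 100 * 0.5 ** n

    n = 0
    while A(n) > 1:        # first 5-minute step with at most 1 gram remaining
        n += 1
    print(n, 5 * n, A(n))  # half-lives elapsed, minutes, and amount left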
Continuous data is the basis for analog information, like music stored on old cassette tapes or vinyl records. A digital signal like on a CD or MP3 file is obtained by sampling an analog signal at some regular time interval and storing that information. For example, the sampling rate of a compact disk is 44,100 samples per second. So a digital recording is only an approximation of the actual analog information. Digital information can be manipulated in many useful ways that allow for, among other things, noisy signals to be cleaned up and large collections of information to be compressed and stored in much smaller space. While we won't investigate these techniques in this chapter, this exercise is intended to give an idea of the importance of discrete (digital) techniques.
Let \(f\) be the continuous function defined by \(f(x) = \sin(4x)\) on the interval \([0,10]\text{.}\) A graph of \(f\) is shown in Figure 8.1.5.
Figure 8.1.5. The graph of \(f(x) = \sin(4x)\) on the interval \([0,10]\)
We approximate \(f\) by sampling, that is by partitioning the interval \([0,10]\) into uniform subintervals and recording the values of \(f\) at the endpoints.
Ineffective sampling can lead to several problems in reproducing the original signal. As an example, partition the interval \([0,10]\) into 8 equal length subintervals and create a list of points (the sample) using the endpoints of each subinterval. Plot your sample on the graph of \(f\) in Figure 8.1.5. What can you say about the period of your sample as compared to the period of the original function?
The sampling rate is the number of samples of a signal taken per second. As part (a) illustrates, sampling at too small a rate can cause serious problems with reproducing the original signal (this problem of inefficient sampling leading to an inaccurate approximation is called aliasing). Because human perception is limited, a continuous signal can be replaced with a digital one without any perceived loss of information, provided the sampling is done appropriately. There is an elegant theorem called the Nyquist-Shannon Sampling Theorem that provides the lowest rate at which a signal can be sampled (called the Nyquist rate) without such a loss of information. The theorem states that we should sample at double the maximum desired frequency so that every cycle of the original signal will be sampled at at least two points. Recall that the frequency of a sinusoidal function is the reciprocal of the period. Identify the frequency of the function \(f\) and determine the number of partitions of the interval \([0,10]\) that give us the Nyquist rate.
Humans cannot typically pick up signals above 20 kHz. Explain why, then, that information on a compact disk is sampled at 44,100 Hz.
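The sampling in part (a) is easy to reproduce; a sketch (assuming Python 3):

    import math
    xs = [10 * i / 8 for i in range(9)]   # endpoints of 8 equal subintervals
    for x in xs:
        print(round(x, 3), round(math.sin(4 * x), 3))
    # With so few samples per cycle of sin(4x), the plotted points suggest a
    # much longer period than the true one, i.e. the aliasing described above.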
The Journal of Integer Sequences at http://www.cs.uwaterloo.ca/journals/JIS/
The On-Line Encyclopedia of Integer Sequences at http://oeis.org/
Hui H, Farilla L, Merkel P, Perfetti R. The short half-life of glucagon-like peptide-1 in plasma does not reflect its long-lasting beneficial effects, Eur J Endocrinol 2002 Jun;146(6):863-9.
\begin{document}
\title{Minimum number of additive tuples in groups of prime order}
\begin{abstract} For a prime number $p$ and a sequence of integers $a_0,\dots,a_k\in \{0,1,\dots,p\}$, let $s(a_0,\dots,a_k)$ be the minimum number of $(k+1)$-tuples $(x_0,\dots,x_k)\in A_0\times\dots\times A_k$ with $x_0=x_1+\dots + x_k$, over subsets $A_0,\dots,A_k\subseteq\Z{p}$ of sizes $a_0,\dots,a_k$ respectively. An elegant argument of Lev (independently rediscovered by Samotij and Sudakov) shows that there exists an extremal configuration with all sets $A_i$ being intervals of appropriate length, and that the same conclusion also holds for the related problem, posed by Bajnok, when $a_0=\dots=a_k=:a$ and $A_0=\dots=A_k$, provided $k$ is not congruent to 1 modulo~$p$. By applying basic Fourier analysis, we show for Bajnok's problem that if $p\geqslant 13$ and $a\in\{3,\dots,p-3\}$ are fixed while $k\equiv 1\pmod p$ tends to infinity, then the extremal configuration alternates between at least two affine non-equivalent sets. \end{abstract}
\section{Introduction}
Let $\Gamma$ be a given finite Abelian group, with the group operation written additively.
For $A_0,\dots,A_k\subseteq\Gamma$, let $s(A_0,\dots,A_k)$ be the number of $(k+1)$-tuples $(x_0,\dots,x_k)\in A_0\times\dots\times A_k$ with $x_0=x_1+\dots+x_k$. If $A_0=\dots=A_k:=A$, then we use the shorthand $s_k(A):=s(A_0,\dots,A_k)$. For example, $s_2(A)$ is the number of \emph{Schur triples} in $A$, that is, ordered triples $(x_0,x_1,x_2)\in A^3$ with $x_0=x_1+x_2$.
For integers $n\geqslant m\geqslant 0$, let $[m,n]:=\{m,m+1,\dots,n\}$ and
$[n]:=[0,n-1]=\{0,\dots,n-1\}$. For a sequence $a_0,\dots,a_k\in [\,|\Gamma| +1\,] = \{0,1,\dots,|\Gamma|\}$, let $s(a_0,\dots,a_k;\Gamma)$ be the minimum of $s(A_0,\dots,A_k)$ over subsets $A_0,\dots,A_k\subseteq \Gamma$ of sizes $a_0,\dots,a_k$ respectively. Additionally, for $a\in [0,p]$, let $s_k(a;\Gamma)$ be the minimum of $s_k(A)$ over all $a$-sets $A\subseteq \Gamma$.
The question of finding the maximal size of a sum-free subset of $\Gamma$ (i.e.\ the maximum $a$ such that $s_2(a;\Gamma)=0$) originated in a paper of Erd\H os~\cite{Erdos65} in 1965 and took 40 years before it was resolved in full generality by Green and Ruzsa~\cite{GreenRuzsa05}.
In this paper, we are interested in the case where $p$ is a fixed prime and the underlying group $\Gamma$ is taken to be $\Z p$, the cyclic group of order $p$, which we identify with the additive group of residues modulo $p$ (also using the multiplicative structure on it when this is useful).
Lev~\cite{Lev01duke} solved the problem of finding $s(a_0,\dots,a_k;\Z p)$, where $p$ is prime (in the equivalent guise of considering solutions to $x_1+\dots + x_k=0$).\footnote{We learned of Lev's work after the publication of this paper. For completeness, we still provide a proof of Theorem~\ref{th:main} in Section~\ref{knot=1}, which is essentially the same as the original proof of Lev's more general result, and which was rediscovered in~\cite{SamotijSudakov16pmcps}.} For $I\subseteq\Z p$ and $x,y\in\Z p$, write $x\cdot I+y:=\{x\cdot z+y: z\in I\}$.
\begin{theorem}\cite{Lev01duke} \label{th:main} For arbitrary $k\geqslant 1$ and $a_0,\dots,a_k\in [0,p]$, there is $t\in\Z p$ such that
$$
s(a_0,\dots,a_k;\Z p)=s([a_0]+t,[a_1],\dots,[a_k]; \Z p).\qed
$$ \end{theorem}
Huczynska, Mullen and Yucas~\cite{HuczynskaMullenYucas09jcta} found a new proof of the $s_2(a;\Z p)$-problem, while also addressing some extensions. Samotij and Sudakov~\cite{SamotijSudakov16pmcps} rediscovered Lev's proof of the $s_2(a;\Z p)$-problem and showed that, when $s_2(a;\Z p)>0$, then the $a$-sets that achieve the minimum are exactly those of the form $\xi\cdot I$ with $\xi\in\Z{p}\setminus\{0\}$, where $I$ consists of the residues modulo $p$ of $a$ integers closest to $\frac{p-1}2\in\I Z$. Each such set is an arithmetic progression; its difference can be any non-zero value but the initial element has to be carefully chosen. (By an \emph{$m$-term arithmetic progression} (or \emph{$m$-AP} for short) we mean a set of the form $\{x,x+d,\dots,x+(m-1)d\}$ for some $x,d\in\Z p$ with $d\not=0$. We call $d$ the \emph{difference}.) Samotij and Sudakov~\cite{SamotijSudakov16pmcps} also solved the $s_2(a)$-problem for various groups $\Gamma$. Bajnok~\cite[Problem~G.48]{Bajnok18acmrp} suggested the more general problem of considering $s_k(a;\Gamma)$. This is wide open in full generality.
This paper concentrates on the case $\Gamma = \Z p$, for $p$ prime, and the sets which attain equality in Theorem~\ref{th:main}. In particular, we write $s(a_0,\dots,a_k):= s(a_0,\dots,a_k;\Z p)$ and $s_k(a):=s_k(a;\Z p)$. Since the case $p=2$ is trivial, let us assume that $p\geqslant 3$. Since
\begin{equation}\label{eq:equiv}
s(A_0,\dots,A_k)=s(\xi\cdot A_0+\eta_0,\dots,\xi\cdot A_k+\eta_k),\quad\mbox{for $\xi\not=0$ and $\eta_0=\eta_1+\dots+\eta_k$},
\end{equation} Theorem~\ref{th:main} shows that, for any difference $d$, there is at least one extremal configuration consisting of $k+1$ arithmetic progressions with the same difference $d$.
In particular, if $a_0=\dots=a_k=:a$, then one extremal configuration consists of $A_1=\dots=A_k=[a]$ and $A_0=[t,t+a-1]$ for some $t\in\Z p$. Given this, one can write down some formulas for $s(a_0,\dots,a_k)$ in terms of $a_0,\dots,a_k$ involving summation (based on~\eqref{eq:s} or a version of~\eqref{eq:sk(A)}) but there does not seem to be a closed form in general.
If $k\not\equiv1\pmod p$, then by taking $\xi:=1$, $\eta_1:=\dots:=\eta_k:=-t(k-1)^{-1}$, and $\eta_0:=-kt(k-1)^{-1}$ in~\eqref{eq:equiv}, we can get another extremal configuration where all sets are the same: $A_0+\eta_0=\dots=A_k+\eta_k$. Thus Theorem~\ref{th:main} directly implies the following corollary.
\begin{corollary}\label{cr:main} For every $k\geqslant 2$ with $k\not\equiv1\pmod p$ and $a\in [0,p]$, there is $t\in\Z p$ such that $s_k(a)=s_k([t,t+a-1])$.\qed\end{corollary}
Unfortunately, if $k\geqslant 3$, then there may be sets $A$ different from APs that attain equality in Corollary~\ref{cr:main} with $s_k(|A|)>0$ (which is in contrast to the case $k=2$). For example, our (non-exhaustive) search showed that this happens already for $p=17$, when $$
s_3(14)=2255=s_3([-1,12])=s_3([6,18]\cup\{3\}).
$$
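(Such values are easy to confirm by brute force. The following sketch, assuming Python~3, recomputes $s_3$ for the two sets above; it is included only as an illustration and is not part of the original search.)
\begin{verbatim}
from itertools import product

p = 17
A1 = {x % p for x in range(-1, 13)}       # the interval [-1,12]
A2 = {x % p for x in range(6, 19)} | {3}  # the set [6,18] together with 3

def s3(A):
    return sum(sum(t) % p in A for t in product(A, repeat=3))

print(len(A1), len(A2), s3(A1), s3(A2))   # both 14-sets, both counts 2255
\end{verbatim}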
Also, already the case $k=2$ of the more general Theorem~\ref{th:main} exhibits extra solutions. Of course, by analysing the proof of Theorem~\ref{th:main} or Corollary~\ref{cr:main} one can write a necessary and sufficient condition for the cases of equality. We do this in Section~\ref{knot=1}; in some cases this condition can be simplified.
The first main result of this paper is to describe the extremal sets for Corollary~\ref{cr:main} when $k \not\equiv 1 \pmod p$ is sufficiently large. The proof uses basic Fourier analysis on $\Z p$.
\begin{theorem}\label{th:knot1} Let a prime $p\geqslant 7$ and an integer $a\in [3,p-3]$ be fixed, and let $k\not\equiv1\pmod p$ be sufficiently large. Then there exists $t \in \Z p$ for which the only $s_k(a)$-extremal sets are $\xi\cdot[t,t+a-1]$ for all non-zero $\xi \in \Z p$. \end{theorem}
\begin{problem} Find a `good' description of all extremal families for Corollary~\ref{cr:main} (or perhaps Theorem~\ref{th:main}) for $k\geqslant 3$.\end{problem}
While Corollary~\ref{cr:main} provides an example of an $s_k(a)$-extremal set for $k\not\equiv1\pmod p$, the case $k\equiv1\pmod p$ of the $s_k(a)$-problem turns out to be somewhat special. Here, translating a set $A$ has no effect on the quantity $s_k(A)$. More generally, let $\mathcal{A}$ be the group of all invertible affine transformations of $\Z p$, that is, it consists of maps $x\mapsto \xi\cdot x+\eta$, $x\in\Z p$, for $\xi,\eta\in \Z p$ with $\xi\not=0$. Then \begin{equation}\label{eq:equiv1}
s_k(\alpha(A))=s_k(A),\quad \mbox{for every $k\equiv1\!\!\pmod p$\ \ and\ \ $\alpha\in\mathcal{A}$}.
\end{equation}
Let us call two subsets $A,B\subseteq \Z p$ \emph{(affine) equivalent} if there is $\alpha\in \mathcal{A}$ with $\alpha(A)=B$. By~\eqref{eq:equiv1}, we need to consider sets only up to this equivalence.
Trivially, any two subsets of $\Z p$ of size $a$ are equivalent if $a \leq 2$ or $a \geq p-2$.
Our second main result is to describe the extremal sets when $k \equiv 1 \pmod p$ is sufficiently large, again using Fourier analysis on $\Z p$.
\begin{theorem}\label{th:k1} Let a prime $p\geqslant 7$ and an integer $a\in [3,p-3]$ be fixed, and let $k\equiv1\pmod p$ be sufficiently large. Then the following statements hold for the $s_k(a)$-problem.
\begin{enumerate}
\item If $a$ and $k$ are both even, then $[a]$ is the unique (up to affine equivalence) extremal set.
\item If at least one of $a$ and $k$ is odd, define $I':=[a-1]\cup\{a\}=\{0,\dots,a-2,a\}$. Then
\begin{enumerate}
\item $s_k(a)<s_k([a])$ for all large $k$;
\item $I'$ is the unique extremal set for infinitely many $k$;
\item $s_k(a)<s_k(I')$ for infinitely many $k$, provided there are at least three non-equivalent $a$-subsets of $\Z p$.
\end{enumerate}
\end{enumerate}
\end{theorem}
It is not hard to see that there are at least three non-equivalent $a$-subsets of $\Z p$ if and only if $p\geqslant 13$ and $a\in [3,p-3]$, or $p\geqslant 11$ and $a\in [4,p-4]$. Thus Theorem~\ref{th:k1} characterises pairs $(p,a)$ for which there exists an $a$-subset $A$ which is $s_k(a)$-extremal for \emph{all} large $k\equiv1\pmod p$.
\begin{corollary} Let $p$ be a prime and $a\in[0,p]$. There is an $a$-subset $A\subseteq \Z p$ with $s_k(A)=s_k(a)$ for all large $k\equiv1\pmod p$ if and only if $a\leqslant 2$, or $a\geqslant p-2$, or $p\in\{7,11\}$ and $a=3$.\qed \end{corollary}
As is often the case in mathematics, a new result leads to further open problems.
\begin{problem} Given $a\in[3,p-3]$, find a `good' description of all $a$-subsets of $\Z p$ that are $s_k(a)$-extremal for at least one (resp.\ infinitely many) values of $k\equiv1\pmod p$.\end{problem}
\begin{problem} Is it true that for every $a\in[3,p-3]$ there is $k_0$ such that for all $k\geqslant k_0$ with $k\equiv 1\pmod p$, any two $s_k(a)$-extremal sets are affine equivalent?\end{problem}
\section{Proof of Theorem~\ref{th:main}}\label{knot=1}
For completeness, here we prove Theorem~\ref{th:main}, which is a special case of Theorem~1 in~\cite{Lev01duke}.
Let $A_1,\dots,A_k$ be subsets of $\Z p$. Define $\sigma(x;A_1,\dots,A_k)$ as the number of $k$-tuples $(x_1,\dots,x_k)\in A_1\times\dots\times A_k$ with $x=x_1+\dots+x_k$. Also, for an integer $r\geqslant 0$, let
\begin{eqnarray*}
N_r(A_1,\dots,A_k)&:=&\{x\in\Z p: \sigma(x;A_1,\dots,A_k)\geqslant r\},\\
n_r(A_1,\dots,A_k)&:=&|N_r(A_1,\dots,A_k)|.
\end{eqnarray*}
These notions are related to our problem because of the following easy identity:
\begin{equation}\label{eq:s}
s(A_0,\dots,A_k)=\sum_{r=1}^\infty |A_0\cap N_r(A_1,\dots,A_k)|.
\end{equation}
Let an \emph{interval} mean an arithmetic progression with difference $1$, i.e.\ a subset $I$ of $\Z p$ of form $\{x,x+1,\dots,x+y\}$. Its \emph{centre} is $x+y/2\in \I Z_p$; it is unique if $I$ is \emph{proper} (that is, $0<|I|<p$).
Note the following easy properties of the sets $N_r$:
\begin{enumerate}
\item These sets are nested:
\begin{equation}\label{eq:nested}
N_0(A_1,\dots,A_k)=\Z p\supseteq N_1(A_1,\dots,A_k)\supseteq N_2(A_1,\dots,A_k)\supseteq \dots
\end{equation}
\item If each $A_i$ is an interval with centre $c_i$, then $N_r(A_1,\dots,A_k)$ is an interval with centre $c_1+\dots+c_k$.
\end{enumerate}
We will also need the following result of Pollard~\cite[Theorem~1]{Pollard75}.
\begin{theorem}\label{th:Pollard} Let $p$ be a prime, $k\geqslant 1$, and $A_1,\dots,A_k$ be subsets of $\Z{p}$ of sizes $a_1,\dots,a_k$. Then for every integer $r\geqslant 1$, we have
$$
\sum_{i=1}^r n_i(A_1,\dots,A_k)\geqslant \sum_{i=1}^r n_i([a_1],\dots,[a_k]).\qed
$$ \end{theorem}
\bpf[Proof of Theorem~\ref{th:main}] Let $A_0,\dots,A_k$ be some extremal sets for the $s(a_0,\dots,a_k)$-problem. We can assume that $0<a_0<p$, because $s(A_0,\dots,A_k)$ is $0$ if $a_0=0$ and $\prod_{i=1}^ka_i$ if $a_0=p$, regardless of the choice of the sets $A_i$.
Since $n_0([a_1],\dots,[a_k])=p>p-a_0$ while $n_r([a_1],\dots,[a_k])=0<p-a_0$ when, for example, $r>\prod_{i=1}^{k-1} a_i$, there is a (unique) integer $r_0\geqslant 0$ such that
\begin{eqnarray}
n_{r}([a_1],\dots,[a_k])&>&p-a_0,\quad\mbox{all $r\in [0,r_0]$,}\label{eq:r01}\\
n_{r}([a_1],\dots,[a_k])&\leqslant &p-a_0,\quad\mbox{all integers $r\geqslant r_0+1$.}\label{eq:r02}
\end{eqnarray}
The nested intervals $N_1([a_1],\dots,[a_k])\supseteq N_2([a_1],\dots,[a_k])\supseteq\hspace{0.9pt}.\hspace{0.3pt}.\hspace{0.3pt}.\hspace{1.5pt}$ have the same centre $c:=((a_1-1)+\dots+(a_k-1))/2$. Thus there is a translation $I:=[a_0]+t$ of $[a_0]$, with $t$ independent of $r$, which has as small as possible intersection with each $N_r$-interval above given their sizes, that is,
\begin{equation}\label{eq:intersection}
|I\cap N_r([a_1],\dots,[a_k])|=\max\{\,0,\,n_r([a_1],\dots,[a_k])+a_0-p\,\},\quad \mbox{for all $r\in\I N$}.
\end{equation}
This and Pollard's theorem give the following chain of inequalities:
\begin{eqnarray*}
s(A_0,\dots,A_k)&\stackrel{\eqref{eq:s}}{=}& \sum_{i=1}^\infty |A_0\cap N_i(A_1,\dots,A_k)|\\
&\geqslant & \sum_{i=1}^{r_0} |A_0\cap N_i(A_1,\dots,A_k)|\\
&\geqslant & \sum_{i=1}^{r_0} (n_i(A_1,\dots,A_k)+a_0-p)\\
&\stackrel{\mathrm{Thm~\ref{th:Pollard}}}{\geqslant} & \sum_{i=1}^{r_0} (n_i([a_1],\dots,[a_k])+a_0-p)\\
&\stackrel{\eqref{eq:r01}-\eqref{eq:r02}}{=} & \sum_{i=1}^{\infty} \max\{\,0,\, n_i([a_1],\dots,[a_k])+a_0-p\,\}\\
&\stackrel{\eqref{eq:intersection}}=& \sum_{i=1}^{\infty}|I\cap N_i([a_1],\dots,[a_k])|\\
&\stackrel{\eqref{eq:s}}=& s(I,[a_1],\dots,[a_k]),
\end{eqnarray*}
giving the required.\qed
Let us write a necessary and sufficient condition for equality in Theorem~\ref{th:main} in the case $a_0,\dots,a_k\in [1,p-1]$. Let $r_0\geqslant 0$ be defined by \eqref{eq:r01}--\eqref{eq:r02}. Then, by~\eqref{eq:nested}, a sequence $A_0,\dots,A_k\subseteq \Z p$ of sets of sizes respectively $a_0,\dots,a_k$ is extremal if and only if
\begin{eqnarray}
A_0\cap N_{r_0+1}(A_1,\dots,A_k)&=&\emptyset,\label{eq:empty}\\
A_0\cup N_{r_0}(A_1,\dots,A_k)&=& \Z p,\label{eq:whole}\\
\sum_{i=1}^{r_0} n_i(A_1,\dots,A_k)&=& \sum_{i=1}^{r_0} n_i([a_1],\dots,[a_k]).\label{eq:PollEq}
\end{eqnarray}
Let us now concentrate on the case $k=2$, trying to simplify the above condition. We can assume that no $a_i$ is equal to 0 or $p$ (otherwise the choice of the other two sets has no effect on $s(A_0,A_1,A_2)$ and every triple of sets of sizes $a_0$, $a_1$ and $a_2$ is extremal). Also, as in~\cite{SamotijSudakov16pmcps}, let us exclude the case $s(a_0,a_1,a_2)=0$, as then there are in general many extremal configurations. Note that $s(a_0,a_1,a_2)=0$ if and only if $r_0=0$; also, by the Cauchy-Davenport theorem (the special case $k=2$ and $r=1$ of Theorem~\ref{th:Pollard}), this is equivalent to $a_1+a_2-1\leqslant p-a_0$. Assume by symmetry that $a_1\leqslant a_2$. Note that~\eqref{eq:r01} implies that $r_0\leqslant a_1$.
The condition in~\eqref{eq:PollEq} states that we have equality in Pollard's theorem. A result of Nazarewicz, O'Brien, O'Neill and Staples~\cite[Theorem~3]{NazarewiczObrienOneillStaples07} characterises when this happens (for $k=2$), which in our notation is the following.
\begin{theorem}\label{th:NazarewiczObrienOneillStaples07} For $k=2$ and $1\leqslant r_0\leqslant a_1\leqslant a_2<p$, we have equality in~\eqref{eq:PollEq} if and only if at least one of the following conditions holds:
\begin{enumerate}
\item\label{it:1} $r_0=a_1$,
\item\label{it:2} $a_1+a_2\geqslant p+r_0$,
\item\label{it:3} $a_1=a_2=r_0+1$ and $A_2=g-A_1$ for some $g\in \Z{p}$,
\item\label{it:4} $A_1$ and $A_2$ are arithmetic progressions with the same difference.
\end{enumerate}
\end{theorem}
Let us try to write more explicitly each of these four cases, when combined with~\eqref{eq:empty} and~\eqref{eq:whole}.
First, consider the case $r_0=a_1$. We have $N_{a_1}([a_1],[a_2])=[a_1-1,a_2-1]$ and thus $n_{a_1}([a_1],[a_2])=a_2-a_1+1>p-a_0$, that is, $a_2-a_1\geqslant p-a_0$. The condition~\eqref{eq:empty} holds automatically since $N_i(A_1,A_2)=\emptyset$ whenever $i>|A_1|$. The other condition~\eqref{eq:whole} may be satisfied even when none of the sets $A_i$ is an arithmetic progression (for example, take $p=13$, $A_1=\{0,1,3\}$, $A_2=\{0,2,3,5,6,7,9,10\}$ and let $A_0$ be the complement of $N_3(A_1,A_2)=\{3,6,10\}$). We do not see any better characterisation here, apart from stating that~\eqref{eq:whole} holds.
Next, suppose that $a_1+a_2\geqslant p+r_0$. Then, for any two sets $A_1$ and $A_2$ of sizes $a_1$ and $a_2$, we have $N_{r_0}(A_1,A_2)=\Z p$; thus~\eqref{eq:whole} holds automatically. Similarly to the previous case, there does not seem to be a nice characterisation of~\eqref{eq:empty}. For example,~\eqref{eq:empty} may hold even when none of the sets $A_i$ is an AP: e.g.\ let $p=11$, $A_1=A_2=\{0,1,2,3,4,5,7\}$, and let $A_0=\{0,2,10\}$ be the complement of $N_4(A_1,A_2)=\{1,3,4,5,6,7,8,9\}$ (here $r_0=3$).
\comment{Let us verify that indeed $r_0=3$. Indeed, $n_3([7],[7])=11$ by above while $N_4([7],[7])=[3,9]$ has $7\leqslant 11-a_0=8$ elements.
}
Next, suppose that we are in the third case. The primality of $p$ implies that $g\in\Z p$ satisfying $A_2=g-A_1$ is unique and thus $N_{r_0+1}(A_1,A_2)=\{g\}$. Therefore~\eqref{eq:empty} is equivalent to $A_0\not\ni g$. Also, note that if $I_1$ and $I_2$ are intervals of size $r_0+1$, then $n_{r_0}(I_1,I_2)=3$. By the definition of $r_0$, we have $p-2\leqslant a_0\leqslant p-1$.
Thus we can choose any integer $r_0\in [1,p-2]$ and $(r_0+1)$-sets $A_2=g-A_1$, and then let $A_0$ be obtained from $\Z p$ by removing $g$ and at most one further element of $N_{r_0}(A_1,A_2)$. Here, $A_0$ is always an AP (as a subset of $\Z p$ of size $a_0\geqslant p-2$) but $A_1$ and $A_2$ need not be.
Finally, let us show that if $A_1$ and $A_2$ are arithmetic progressions with the same difference $d$ and we are not in Case~1 nor~2 of Theorem~\ref{th:NazarewiczObrienOneillStaples07}, then $A_0$ is also an arithmetic progression whose difference is~$d$. By~\eqref{eq:equiv}, it is enough to prove this when $A_1=[a_1]$ and $A_2=[a_2]$ (and $d=1$). Since $a_1+a_2\leqslant p-1+r_0$ and $r_0+1\leqslant a_1\leqslant a_2$, we have that
\begin{eqnarray*} N_{r_0}(A_1,A_2)&=&[r_0-1,a_1+a_2-r_0-1]\\
N_{r_0+1}(A_1,A_2)&=& [r_0,a_1+a_2-r_0-2]
\end{eqnarray*}
have sizes respectively $a_1+a_2-2r_0+1<p$ and $a_1+a_2-2r_0-1>0$. We see that $N_{r_0+1}(A_1,A_2)$ is obtained from the proper interval $N_{r_0}(A_1,A_2)$ by removing its two endpoints. Thus $A_0$, which is sandwiched between the complements of these two intervals by~\eqref{eq:empty}--\eqref{eq:whole}, must be an interval too. (And, conversely, every such triple of intervals is extremal.)
\section{The proof of Theorems~\ref{th:knot1} and~\ref{th:k1}}
Let us recall the basic definitions and facts of Fourier analysis on $\Z p$. For a more detailed treatment of this case, see e.g.~\cite[Chapter~2]{Terras99faofg}. Write $\omega := e^{2\pi i /p}$ for the $p^{\mathrm{th}}$ root of unity. Given a function $f : \Z p \rightarrow \mathbb{C}$, we define its \emph{Fourier transform} to be the function $\fourier{f}:\Z p\to \mathbb{C}$ given by $$ \fourier{f}(\gamma) := \sum_{x=0}^{p-1} f(x)\, \omega^{-x\gamma}, \qquad\text{for } \gamma \in \Z p. $$ Parseval's identity states that
\begin{equation}\label{eq:Parseval} \sum_{x=0}^{p-1} f(x)\,\overline{g(x)} = \frac{1}{p}\sum_{\gamma=0}^{p-1} \fourier{f}(\gamma)\,\overline{\fourier{g}(\gamma)}. \end{equation} The \emph{convolution} of two functions $f,g : \Z p \rightarrow \mathbb{C}$ is given by $$ (f * g)(x) := \sum_{y=0}^{p-1}f(y)\,g(x-y). $$ It is not hard to show that the Fourier transform of a convolution equals the product of Fourier transforms, i.e. \begin{equation}\label{convolution} \fourier{f_1 * \dots * f_k} = \fourier{f_1} \cdots \fourier{f_k}. \end{equation} We write $f^{*k}$ for the convolution of $f$ with itself $k$ times. (So, for example, $f^{* 2} = f * f$.)
Denote by $\mathbbm{1}_A$ the \emph{indicator function} of $A \subseteq \Z p$ which assumes value 1 on $A$ and $0$ on $\Z p\setminus A$. We will call $\fourier{{\mathbbm{1}}}_A(0)=|A|$ the \emph{trivial Fourier coefficient of $A$}. Since the Fourier transform behaves very nicely with respect to convolution, it is not surprising that our parameter of interest, $s_k(A)$, can be written as a simple function of the Fourier coefficients of $\mathbbm{1}_A$. Indeed, let $ A \subseteq \Z p$ and $x \in \Z p$. Then the number of tuples $(a_1,\dots,a_k) \in A^k$ such that $a_1+\dots+a_k=x$ (which is $\sigma(x;A,\dots,A)$ in the notation of Section~\ref{knot=1}) is precisely $\mathbbm{1}^{* k}_A(x)$. The function $s_k(A)$ counts such a tuple if and only if its sum $x$ also lies in $A$. Thus, \begin{equation}\label{eq:sk(A)} s_k(A) = \sum_{x = 0}^{p-1}\mathbbm{1}^{* k}_A(x)\, \mathbbm{1}_A(x) \stackrel{(\ref{eq:Parseval})}= \frac{1}{p}\sum_{\gamma=0}^{p-1}\fourier{\mathbbm{1}^{* k}_A}(\gamma)\,\overline{\fourier{\mathbbm{1}_A}(\gamma)} \stackrel{(\ref{convolution})}{=} \frac{1}{p} \sum_{\gamma=0}^{p-1} \left(\fourier{\mathbbm{1}_A}(\gamma)\right)^k\, \overline{\fourier{\mathbbm{1}_A}(\gamma)}. \end{equation} Since every set $A \subseteq \Z p$ of size $a$ has the same trivial Fourier coefficient (namely $\fourier{\mathbbm{1}_A}(0)=a$), let us re-write~\eqref{eq:sk(A)} as \beq{eq:SkF1} p s_k(A)-a^{k+1}= \sum_{\gamma=1}^{p-1} (\fourier{\mathbbm{1}_A}(\gamma))^k\,
\O{\fourier{\mathbbm{1}_A}(\gamma)} =: F(A).
\end{equation} Thus we need to minimise $F(A)$ (which is a real number for any $A$) over $a$-subsets $A\subseteq\Z p$.
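For intuition, the identity~\eqref{eq:sk(A)} is easy to verify numerically; the following sketch (assuming Python~3) checks it on an arbitrary small example.
\begin{verbatim}
import cmath
from itertools import product

p, k = 13, 3
A = [0, 1, 2, 5, 7]                       # an arbitrary 5-subset of Z_13

def fhat(g):                              # \hat{1_A}(g) = sum_x w^{-x g}
    return sum(cmath.exp(-2j * cmath.pi * x * g / p) for x in A)

lhs = sum(fhat(g) ** k * fhat(g).conjugate() for g in range(p)) / p
rhs = sum(sum(t) % p in A for t in product(A, repeat=k))
print(round(lhs.real), rhs)               # the two counts agree
\end{verbatim}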
To do this when $k$ is sufficiently large, we will consider the largest in absolute value non-trivial Fourier coefficient $\fourier{\mathbbm{1}_{A}}(\gamma)$ of an $a$-subset $A$. Indeed, the term $(\fourier{\mathbbm{1}_A}(\gamma))^k\overline{\fourier{\mathbbm{1}_A}(\gamma)}$ will dominate $F(A)$, so if it has strictly negative real part, then $F(A)<F(B)$ for all $a$-subsets~$B\subseteq \Z p$ with $\max_{\delta\not=0}|\fourier{\mathbbm{1}_B}(\delta)|<|\fourier{\mathbbm{1}_A}(\gamma)|$.
Given $a \in [p-1]$, let $$ I := [a]=\{0,\dots,a-1\}\quad\text{and}\quad I' := [a-1]\cup\{a\} = \{0,\dots,a-2,a\}. $$ In order to prove Theorems~\ref{th:knot1} and~\ref{th:k1}, we will make some preliminary observations about these special sets. The set of $a$-subsets which are affine equivalent to $I$ is precisely the set of $a$-APs.
Next we will show that \begin{equation}\label{skI}
F(I) = 2\sum_{\gamma=1}^{(p-1)/2} (-1)^{\gamma(a-1)(k-1)} \left|\fourier{\mathbbm{1}_I}(\gamma)\right|^{k+1}\quad\text{if }k \equiv 1 \pmod p. \end{equation} Note that $(-1)^{\gamma(a-1)(k-1)}$ equals $(-1)^\gamma$ if both $a,k$ are even and 1 otherwise. To see~\eqref{skI}, let $\gamma \in \{1,\dots,\frac{p-1}{2}\}$ and write $\fourier{\mathbbm{1}_I}(\gamma) = re^{\theta i}$ for some $r >0$ and $0 \leq \theta < 2\pi$. Then $\theta$ is the midpoint of $0,-2\pi \gamma/p,\dots, -2(a-1)\gamma\pi/p$,~i.e. $ \theta = -\pi(a-1)\gamma/p $. Choose $s \in \mathbb{N}$ such that $k = sp+1$. Then \begin{equation}\label{termx} (\fourier{\mathbbm{1}_I}(\gamma))^k \overline{\fourier{\mathbbm{1}_I}(\gamma)} = \left(r e^{-\pi i (a-1)\gamma/p}\right)^k r e^{\pi i (a-1)\gamma/p} = r^{k+1} e^{-\pi i(a-1)\gamma s}, \end{equation} and $e^{-\pi i(a-1)s}$ equals $1$ if $(a-1)s$ is even, and $-1$ if $(a-1)s$ is odd. Note that, since $p$ is an odd prime, $(a-1)s$ is odd if and only if $a$ and $k$ are both even. So~(\ref{termx}) is real, and the fact that $\fourier{\mathbbm{1}_I}(p-\gamma) = \overline{\fourier{\mathbbm{1}_I}(\gamma)}$ implies that the corresponding term for $p-\gamma$ is the same as for $\gamma$. This gives~\eqref{skI}. A very similar calculation to~(\ref{termx}) shows that \begin{equation}\label{skI2}
F(I+t) = \sum_{\gamma=1}^{p-1} e^{-\pi i (2t+a-1)(k-1)\gamma/p}|\fourier{\mathbbm{1}_{I+t}}(\gamma)|^{k+1}\quad\text{for all }k \geq 3. \end{equation}
Given $r>0$ and $0 \leq \theta < 2\pi$, we write $\arg(re^{\theta i}) := \theta$.
\begin{proposition}\label{angleprop} Suppose that $p \geq 7$ is prime and $a \in [3,p-3]$. Then $\arg\left(\fourier{\mathbbm{1}_{I'}}(1)\right)$ is not an integer multiple of $\pi/p$. \end{proposition}
\bpf Since $\fourier{\mathbbm{1}_A}(\gamma)=-\fourier{\mathbbm{1}_{\Z p\setminus A}}(\gamma)$ for all $A \subseteq \Z p$ and non-zero $\gamma \in \Z p$, we may assume without loss of generality that $a \leq p-a$. Since $p$ is odd, we have $a \leq (p-1)/2$.
Suppose first that $a$ is odd. Let $m := (a-1)/2$. Then $m \in [1,\frac{p-3}{4}]$. Observe that translating any $A \subseteq \Z p$ changes the arguments of its Fourier coefficients by an integer multiple of $2\pi/p$. So, for convenience of angle calculations, here we may redefine $I := [-m,m]$ and $I' := \lbrace -m-1\rbrace\cup[-m+1,m]$. Also let $I^- := [-m+1,m-1]$, which is non-empty. The argument of $\fourier{\mathbbm{1}_{I^-}}(1)$ is $0$. Further, $\fourier{\mathbbm{1}_{I'}}(1) = \fourier{\mathbbm{1}_{I^-}}(1) + \omega^{m+1}+\omega^{-m}$. Since $\omega^{m+1},\omega^{-m}$ lie on the unit circle, the argument of $\omega^{m+1}+\omega^{-m}$ is either $\pi/p$ or $\pi+\pi/p$. But the bounds on $m$ imply that it has positive real part, so $\arg(\omega^{m+1}+\omega^{-m})=\pi/p$. By looking at the non-degenerate parallelogram in the complex plane with vertices $0,\fourier{\mathbbm{1}_{I^-}}(1),\omega^{m+1}+\omega^{-m},\fourier{\mathbbm{1}_{I'}}(1)$, we see that the argument of $\fourier{\mathbbm{1}_{I'}}(1)$ lies strictly between that of $\fourier{\mathbbm{1}_{I^-}}(1)$ and $\omega^{m+1}+\omega^{-m}$, i.e.~strictly between $0$ and $\pi/p$, giving the required.
Suppose now that $a$ is even and let $m := (a-2)/2 \in [1,\frac{p-5}{4}]$. Again without loss of generality we may redefine $I := [-m,m+1]$ and $I' := \lbrace -m-1\rbrace \cup [ -m+1,m+1]$. Let also $I^- := [-m+1,m]$, which is non-empty. The argument of $\fourier{\mathbbm{1}_{I^-}}(1)$ is $-\pi/p$. Further, $\fourier{\mathbbm{1}_{I'}}(1) = \fourier{\mathbbm{1}_{I^-}}(1) + \omega^{m+1}+\omega^{-(m+1)}$. The argument of $\omega^{m+1}+\omega^{-(m+1)}$ is $0$, so as before the argument of $\fourier{\mathbbm{1}_{I'}}(1)$ is strictly between $-\pi/p$ and $0$, as required. \qed
We say that an $a$-subset $A$ is a \emph{punctured interval} if $A=I'+t$ or $A = -I'+t$ for some $t \in \Z p$. That is, $A$ can be obtained from an interval of length $a+1$ by removing a penultimate point.
\begin{lemma}\label{int-equivalence} Let $p \geq 7$ be prime and let $a \in \lbrace 3,\ldots,p-3\rbrace$. Then the sets $I,I'\subseteq \Z p$ are not affine equivalent. Thus no punctured interval is affine equivalent to an interval. \end{lemma} \bpf Suppose on the contrary that there is $\alpha \in \mathcal{A}$ with $\alpha(I')=I$. Let a \emph{reflection} mean an affine map $R_c$ with $c\in\Z p$ that maps $x$ to $-x+c$. Clearly, $I=[a]$ is invariant under the reflection $R:=R_{a-1}$. Thus $I'$ is invariant under the map $R':=\alpha^{-1}\circ R\circ \alpha$. As is easy to see, $R'$ is also some reflection and thus preserves the cyclic distances in $\Z p$. So $R'$ has to fix $a$, the unique element of $I'$ with both distance-1 neighbours lying outside of $I'$. Furthermore, $R'$ has to fix $a-2$, the unique element of $I'$ at distance 2 from $a$. However, no reflection can fix two distinct elements of $\Z p$, a contradiction. \qed
We remark that the previous lemma can also be deduced from Proposition~\ref{angleprop}. Indeed, for any $A \subseteq \Z p$, the multiset of Fourier coefficients of $A$ is the same as that of $x\cdot A$ for $x \in \Z p\setminus\lbrace 0 \rbrace$, and translating a subset changes the argument of Fourier coefficients by an integer multiple of $2\pi/p$. Thus for every subset which is affine equivalent to $I$, the argument of each of its Fourier coefficients is an integer multiple of $\pi/p$.
Let $$
\rho(A) := \max_{\gamma \in \Z p \setminus \lbrace 0 \rbrace}|\fourier{\mathbbm{1}_A}(\gamma)|\quad\text{and}\quad R(a) := \left\lbrace \rho(A) : A \in \binom{\Z p}{a}\right\rbrace = \lbrace m_1(a) > m_2(a) > \ldots \rbrace. $$
Given $j \geq 1$, we say that $A$ \emph{attains} $m_j(a)$, and specifically that \emph{$A$ attains $m_j(a)$ at $\gamma$} if $m_j(a) = \rho(A)=|\fourier{\mathbbm{1}_A}(\gamma)|$. Notice that, since $\fourier{\mathbbm{1}_A}(-\gamma)=\overline{\fourier{\mathbbm{1}_A}(\gamma)}$, the set $A$ attains $m_j(a)$ at $\gamma$ if and only if $A$ attains $m_j(a)$ at $-\gamma$ (and $\gamma,-\gamma \neq 0$ are distinct values).
As we show in the next lemma, the $a$-subsets which attain $m_1(a)$ are precisely the affine images of $I$ (i.e.~arithmetic progressions), and the $a$-subsets which attain $m_2(a)$ are the affine images of the punctured interval~$I'$.
\begin{lemma}\label{lm:MaxNontriv} Let $p\geq 7$ be prime and let $a\in [3,p-3]$. Then $|R(a)| \geq 2$ and \begin{itemize} \item[(i)] $A \in \binom{\Z p}{a}$ attains $m_1(a)$ if and only if $A$ is affine equivalent to $I$, and every interval attains $m_1(a)$ at $1$ and $-1$ only; \item[(ii)] $B\in\binom{\Z p}{a}$ attains $m_2(a)$ if and only if $B$ is affine equivalent to $I'$, and every punctured interval attains $m_2(a)$ at $1$ and $-1$ only. \end{itemize} \end{lemma}
\bpf Given $D \in \binom{\Z p}{a}$, we claim that there is some $D_{\rm{pri}} \in \binom{\Z p}{a}$ with the following properties: \begin{itemize} \item $D_{\rm{pri}}$ is affine equivalent to $D$;
\item $\rho(D) = |\fourier{\mathbbm{1}_{D_{\rm{pri}}}}(1)|$; and \item $-\pi/p < \arg\left(\fourier{\mathbbm{1}_{D_{\rm{pri}}}}(1)\right) \leq \pi/p$. \end{itemize}
Call such a $D_{\rm{pri}}$ a \emph{primary image} of $D$. Indeed, suppose that $\rho(D) = |\fourier{\mathbbm{1}_D}(\gamma)|$ for some non-zero $\gamma \in \Z p$, and let $\fourier{\mathbbm{1}_D}(\gamma) = r'e^{\theta' i}$ for some $r' > 0$ and $0 \leq \theta' < 2\pi$. (Note that we have $r'>0$ since $p$ is prime.) Choose $\ell \in \lbrace 0,\hspace{0.9pt}.\hspace{0.3pt}.\hspace{0.3pt}.\hspace{1.5pt},p-1\rbrace$ and $-\pi/p < \phi \leq \pi/p$ such that $\theta' = 2\pi \ell/p + \phi$. Let $D_{\rm{pri}} := \gamma\cdot D + \ell$. Then $$
|\fourier{\mathbbm{1}_{D_{\rm{pri}}}}(1)| = \left|\sum_{x \in D}\omega^{-\gamma x - \ell}\right| = |\omega^{-\ell} \fourier{\mathbbm{1}_D}(\gamma)| = |\fourier{\mathbbm{1}_D}(\gamma)| = \rho(D), $$ and $$ \arg\left(\fourier{\mathbbm{1}_{D_{\rm{pri}}}}(1)\right) = \arg(e^{\theta' i}\omega^{-\ell}) = 2\pi \ell/p + \phi - 2\pi \ell/p = \phi, $$ as required.
Let $D \subseteq \Z p$ have size $a$ and write $\fourier{\mathbbm{1}_D}(1) = re^{\theta i}$. Assume by the above that $-\pi/p < \theta \leq \pi/p$. For all $j \in \Z p$, let $$ h(j) := \Re(\omega^{-j}e^{-\theta i}) = \cos\left(\frac{2\pi j}{p}+\theta\right), $$
where $\Re(z)$ denotes the real part of $z\in\mathbb{C}$. Given any $a$-subset $E$ of $\Z p$, we have \begin{equation}\label{hmbound}
H_D(E) := \sum_{j \in E}h(j) = \Re\left(e^{-\theta i}\sum_{j \in E}\omega^{-j}\right) = \Re\left(e^{-\theta i} \fourier{\mathbbm{1}_E}(1)\right) \leq |\fourier{\mathbbm{1}_E}(1)|. \end{equation} Then \begin{equation}\label{HAA}
H_D(D) = \sum_{j \in D}h(j) = \Re(e^{-\theta i} \fourier{\mathbbm{1}_D}(1)) = r = |\fourier{\mathbbm{1}_D}(1)|. \end{equation}
Note that $H_D(E)$ is the (signed) length of the orthogonal projection of $\fourier{\mathbbm{1}_E}(1)\in\mathbb{C}$ on the 1-dimensional
line $\{xe^{i\theta}: x\in\I R\}$. As stated in~\eqref{hmbound} and~\eqref{HAA}, $H_D(E)\leqslant |\fourier{\mathbbm{1}_E}(1)|$
and this is equality for $E=D$. (Both of these facts are geometrically obvious.) If $|\fourier{\mathbbm{1}_D}(1)|=m_1(a)$ is maximum, then no $H_D(E)$ for an $a$-set $E$ can exceed $m_1(a)=H_D(D)$. Informally speaking, the main idea of the proof is that if we fix the direction $e^{i\theta}$, then the projection length is maximised if we take $a$ distinct elements $j\in \I Z_p$ with the $a$ largest values of $h(j)$, that is, if we take some interval (with the runner-up being a punctured interval).
Let us provide a formal statement and proof of this now.
\begin{claim}\label{claim} Let $\mathcal{I}_a$ be the set of length-$a$ intervals in $\Z p$. \begin{itemize} \item[(i)] Let $M_1(D) \subseteq \binom{\Z p}{a}$ consist of $a$-sets $E\subseteq \Z p$ such that $H_D(E) \geq H_D(C)$ for all $C \in \binom{\Z p}{a}$. Then $M_1(D) \subseteq \mathcal{I}_a$. \item[(ii)] Let $M_2(D) \subseteq \binom{\Z p}{a}$ be the set of $E \notin \mathcal{I}_a$ for which $H_D(E) \geq H_D(C)$ for all $C \in \binom{\Z p}{a} \setminus \mathcal{I}_a$. Then every $E \in M_2(A)$ is a punctured interval. \end{itemize} \end{claim}
\bpf Suppose that $-\pi/p < \theta < 0$. Then $h(0) > h(1) > h(-1) > h(2) > h(-2) > \ldots > h(\frac{p-1}{2}) > h(-\frac{p-1}{2})$. In other words, $h(j_\ell) > h(j_k)$ if and only if $\ell < k$, where $j_m := (-1)^{m-1}\lceil m/2\rceil$.
Letting $J_{a-1} := \lbrace j_0,\ldots,j_{a-2}\rbrace$, we see that $$ H_D(J_{a-1} \cup \lbrace j_{a-1}\rbrace) > H_D(J_{a-1} \cup \lbrace j_{a}\rbrace) > H_D(J_{a-1} \cup \lbrace j_{a+1}\rbrace), H_D(J_{a-2} \cup \lbrace j_{a-1},j_a\rbrace) > H_D(J) $$ for all other $a$-subsets $J$. But $J_{a-1} \cup \lbrace j_{a-1}\rbrace$ and $J_{a-1} \cup \lbrace j_a\rbrace$ are both intervals, and $J_{a-1} \cup \lbrace j_{a+1}\rbrace$ and $J_{a-2} \cup \lbrace j_{a-1},j_a\rbrace$ are both punctured intervals. So in this case $M_1(D) = \lbrace J_{a-1}\cup\lbrace j_{a-1}\rbrace\rbrace$ and $M_2(D) \subseteq \lbrace J_{a-1}\cup\lbrace j_{a+1}\rbrace, J_{a-2} \cup \lbrace j_{a-1},j_a\rbrace\rbrace$, as required.
The case when $0 < \theta < \pi/p$ is almost identical except now $j_\ell := (-1)^\ell\lceil \ell/2\rceil$ for all $0 \leq \ell \leq p-1$. If $\theta=0$ then $h(0) > h(1) = h(-1) > h(2) = h(-2) > \ldots > h(\frac{p-1}{2}) = h(-\frac{p-1}{2})$. If $\theta=\pi/p$ then $h(0)=h(-1) > h(1)=h(-2) > \ldots > h(\frac{p-3}{2})=h(-\frac{p-1}{2}) > h(\frac{p-1}{2})$. In these two boundary cases the same analysis of the largest values of $h$ shows that every maximiser is still an interval, and every maximiser among non-intervals is still a punctured interval, so the claim holds here too. \qed
\noindent We can now prove part~(i) of the lemma. Suppose $A \in \binom{\Z p}{a}$ attains $m_1(a)$ at $\gamma \in \Z p \setminus \lbrace 0 \rbrace$. Then the primary image $D$ of $A$ satisfies $|\fourier{\mathbbm{1}_D}(1)|=m_1(a)=|\fourier{\mathbbm{1}_A}(\gamma)|$. So, for any $E \in M_1(D)$, $$
|\fourier{\mathbbm{1}_A}(\gamma)| = |\fourier{\mathbbm{1}_D}(1)| \stackrel{(\ref{HAA})}{=} H_D(D) \leq H_D(E) \stackrel{(\ref{hmbound})}{\leq} |\fourier{\mathbbm{1}_E}(1)|, $$ with equality in the first inequality if and only if $D \in M_1(D)$. Since $|\fourier{\mathbbm{1}_E}(1)| \leq m_1(a) = |\fourier{\mathbbm{1}_A}(\gamma)|$, equality holds throughout, and in particular $D \in M_1(D)$. Thus, by Claim~\ref{claim}(i), $D$ is an interval, and so $A$ is affine equivalent to an interval, as required. Further, if $A$ is an interval then $D$ is an interval if and only if $\gamma=\pm 1$. This completes the proof of (i).
\noindent For~(ii), note that $m_2(a)$ exists since by Lemma~\ref{int-equivalence}, there is a subset (namely $I'$) which is not affine equivalent to $I$. By~(i), it does not attain $m_1(a)$, so $\rho(I') \leq m_2(a)$. Suppose now that $B$ is an $a$-subset of $\Z p$ which attains $m_2(a)$ at $\gamma \in \Z p \setminus \lbrace 0 \rbrace$. Let $D$ be the primary image of $B$. Then $D$ is not an interval. This together with Claim~\ref{claim}(i) implies that $H_D(D) < H_D(E)$ for any $E \in M_1(D)$. Thus, for any $C \in M_2(D)$, we have $$
m_2(a) = |\fourier{\mathbbm{1}_B}(\gamma)| = |\fourier{\mathbbm{1}_D}(1)| = H_D(D) \leq H_D(C) \leq |\fourier{\mathbbm{1}_C}(1)|, $$
with equality in the first inequality if and only if $D \in M_2(D)$. Since $C$ is a punctured interval, it is not affine equivalent to an interval. So the first part of the lemma implies that $|\fourier{\mathbbm{1}_C}(1)|\leqslant m_2(a)$. Thus we have equality everywhere and so $D \in M_2(D)$. Therefore $B$ is the affine image of a punctured interval, as required. Further, if $B$ is a punctured interval, then $D$ is a punctured interval if and only if $\gamma=\pm 1$. This completes the proof of (ii). \qed
We will now prove Theorem~\ref{th:knot1}.
\bpf[Proof of Theorem~\ref{th:knot1}.] Recall that $p\geqslant 7$, $a\in [3,p-3]$ and $k > k_0(a,p)$ is sufficiently large with $k\not\equiv 1\pmod p$. Let $I = [a]$. Given $t \in \Z p$, write $\rho_t := (\fourier{\mathbbm{1}_{I+t}}(1))^k\overline{\fourier{\mathbbm{1}_{I+t}}(1)}$ as $r_te^{\theta_t i}$, where $\theta_t \in [0,2\pi)$ and $r_t > 0$. Then~(\ref{skI2}) says that $\theta_t$ equals $-\pi(2t+a-1)(k-1)/p$ modulo $2\pi$. Increasing $t$ by $1$ rotates $\rho_t$ by $-2\pi(k-1)/p$. Using the fact that $k-1$ is invertible modulo $p$, we have the following. If $(a-1)(k-1)$ is even, then the set of $\theta_t$ for $t \in \Z p$ is precisely $0,2\pi/p,\ldots,(2p-2)\pi/p$, so there is a unique $t$ (resp.\ a unique $t'$) in $\Z p$ for which $\theta_t=\pi+\pi/p$ (resp.\ $\theta_{t'} = \pi-\pi/p$). Furthermore, $t' = -(a-1)-t$ and $I+t' = -(I+t)$; thus $I+t$ and $I+t'$ have the same set of dilations. If $(a-1)(k-1)$ is odd, then the set of $\theta_t$ for $t \in \Z p$ is precisely $\pi/p,3\pi/p,\ldots,(2p-1)\pi/p$, so there is a unique $t \in \Z p$ for which $\theta_t = \pi$. We call $t$ (and $t'$, if it exists) \emph{optimal}.
Let $t$ be optimal. To prove the theorem, we will show that $F(\xi\cdot(I+t)) < F(A)$ (and so $s_k(\xi\cdot(I+t))<s_k(A)$) for any $a$-subset $A\subseteq\Z p$ which is not a dilation of $I+t$.
We will first show that $F(I+t)<F(A)$ for any $a$-subset $A$ which is not affine equivalent to an interval. By Lemma~\ref{lm:MaxNontriv}(i), we have that $|\fourier{\mathbbm{1}_{I+t}}(\pm 1)|=m_1(a)$ and $\rho(A) \leq m_2(a)$. Let $m_2'(a)$ be the maximum of $|\fourier{\mathbbm{1}_J}(\gamma)|$ over all length-$a$ intervals $J$ and $\gamma \in [2,p-2]$. Lemma~\ref{lm:MaxNontriv}(i) implies that $m_2'(a)<m_1(a)$. Thus \begin{eqnarray}\label{knot1eq2}
\left|F(I+t)-2(m_1(a))^{k+1}\cos(\theta_t) - F(A)\right| \leq (p-1)(m_2(a))^{k+1} + (p-3)\left(m_2'(a)\right)^{k+1}. \end{eqnarray} Now $\cos(\theta_t) \leq \cos(\pi-\pi/p) < -0.9$ since $p \geq 7$. This together with the fact that $k\geqslant k_0(a,p)$ and Lemma~\ref{lm:MaxNontriv} implies that $2(m_1(a))^{k+1}\cos(\theta_t)$ is negative, and that its absolute value is greater than the right-hand side of~(\ref{knot1eq2}). Thus $F(I+t) < F(A)$, as required.
The remaining case is when $A=\zeta\cdot(I+v)$ for some non-optimal $v \in \Z p$ and non-zero $\zeta \in \Z p$. Since $s_k(A)=s_k(I+v)$, we may assume that $\zeta=1$. Note that $\cos(\theta_t) \leq \cos(\pi-\pi/p) < \cos(\pi-2\pi/p) \leq \cos(\theta_v)$. Thus \begin{align*} F(I+t)-F(I+v) &\leq 2(m_1(a))^{k+1}(\cos(\theta_t)-\cos(\theta_v)) + (2p-4)(m_2'(a))^{k+1}\\ &\leq 2(m_1(a))^{k+1}(\cos(\pi-\pi/p)-\cos(\pi-2\pi/p)) + (2p-4)(m_2'(a))^{k+1} <0 \end{align*} where the last inequality uses the fact that $k$ is sufficiently large. Thus $F(I+t)<F(I+v)$, as required. \qed
Finally, using similar techniques, we prove Theorem~\ref{th:k1}.
\bpf[Proof of Theorem~\ref{th:k1}.] Recall that $p\geqslant 7$, $a\in [3,p-3]$ and $k > k_0(a,p)$ is sufficiently large with $k\equiv1\pmod p$. Let $I:=[a]$ and $I'=[a-1]\cup\{a\}$.
Suppose first that $a$ and $k$ are both even. Let $A\subseteq\Z p$ be an arbitrary $a$-set not affine equivalent to the interval~$I$. By Lemma~\ref{lm:MaxNontriv}, $I$ attains $m_1(a)$ (exactly at $x=\pm 1$), while $\rho(A)<m_1(a)$. Also, $m_2'(a)<m_1(a)$, where $
m_2'(a):=\max_{\gamma\in [2,p-2]}|\fourier{\mathbbm{1}_I}(\gamma)|$. Thus
\begin{eqnarray*}
F(I) - F(A) &\stackrel{(\ref{eq:SkF1}),(\ref{skI})}{\leq}& 2\sum_{\gamma=1}^{\frac{p-1}{2}} (-1)^\gamma\left|\fourier{\mathbbm{1}_I}(\gamma)\right|^{k+1} + \sum_{\gamma=1}^{p-1} \left|\fourier{\mathbbm{1}_A}(\gamma)\right|^{k+1}\\
&\leq& -2(m_1(a))^{k+1} + (2p-4) (\max\{m_2(a),m_2'(a)\})^{k+1}\ <\ 0,
\end{eqnarray*}
where the last inequality uses the fact that $k$ is sufficiently large. So $s_k(a)=s_k(I)$. Using Lemma~\ref{lm:MaxNontriv}, the same argument shows that, for all $B \in \binom{\Z p}{a}$, we have $s_k(B)=s_k(a)$ if and only if $B$ is an affine image of $I$. This completes the proof of Part 1 of the theorem.
Suppose now that at least one of $a,k$ is odd. Let $A$ be an $a$-set not affine equivalent to~$I$. Again by Lemma~\ref{lm:MaxNontriv}, we have
\begin{eqnarray*}
F(I) - F(A) &\geq& \sum_{\gamma=1}^{p-1} \left|\fourier{\mathbbm{1}_I}(\gamma)\right|^{k+1} - \sum_{\gamma=1}^{p-1} \left|\fourier{\mathbbm{1}_A}(\gamma)\right|^{k+1}\\ &\geq& 2(m_1(a))^{k+1} - (p-1)(m_2(a))^{k+1} \ >\ 0. \end{eqnarray*} So the interval $I$ and its affine images have in fact the largest number of additive $(k+1)$-tuples among all $a$-subsets of $\Z p$. In particular, $s_k(a) < s_k(I)$.
Suppose that there is some $A \in \binom{\Z p}{a}$ which is not affine equivalent to $I$ or~$I'$. (If there is no such $A$, then the unique extremal sets are affine images of $I'$ for all $k > k_0(a,p)$, giving the required.) Write $\rho := re^{\theta i} = \fourier{\mathbbm{1}_{I'}}(1)$. Then by Lemma~\ref{lm:MaxNontriv}(ii), we have $r=m_2(a)$, and $\rho(A) \leq m_3(a)$. Given $k \geq 2$, let $s \in \mathbb{N}$ be such that $k=sp+1$. Then \begin{equation}\label{FI'}
\Big|F(I') - 2m_2(a)^{k+1}\cos (sp\theta)-F(A)\Big|\leqslant (p-1)m_3(a)^{k+1}+(p-3)\left(m_2'(a)\right)^{k+1}. \end{equation} Proposition~\ref{angleprop} implies that there is an even integer $\ell \in \I N$
for which $c := p\theta - \ell\pi \in (-\pi,\pi)\setminus\{0\}$. Let $\varepsilon := \frac{1}{3}\min\lbrace |c|,\pi-|c|\rbrace > 0$. Given an integer $t$, say that $s\in\I N$ is \emph{$t$-good} if $sc \in ((t-\frac{1}{2})\pi+\varepsilon,(t+\frac{1}{2})\pi-\varepsilon)$. This real interval has length $\pi-2\varepsilon > |c|>0$, so it must contain at least one integer multiple of $c$. In other words, for all $t \in \mathbb{Z}\setminus\{0\}$ with the same sign as $c$, there exists a $t$-good integer $s> 0$. As $sp\theta\equiv sc\pmod{2\pi}$, the sign of $\cos(sp\theta)$ is $(-1)^{t}$. Moreover, Lemma~\ref{lm:MaxNontriv} implies that
$m_2(a)> m_3(a), m'_2(a)$. Thus, when $k=sp+1>k_0(a,p)$, the absolute value of $2m_2(a)^{k+1}\cos(sp\theta)$ is greater than the right-hand side of~(\ref{FI'}). Thus, for large $|t|$, we have $F(A) < F(I')$ if $t$ is even and $F(A)>F(I')$ if $t$ is odd, implying the theorem by~\eqref{eq:SkF1}. \qed
\end{document} | arXiv |
Weighting co-authorship networks
This note explains how to implement two edge weighting schemes that are relevant to co-authorship networks, based on my work on legislative cosponsorship networks.
For the rest of this note, the networks under consideration are directed one-mode networks that connect the first author of a text to his or her co-authors. Since co-authorship can occur more than once, and since the number of co-authors can vary, weighting the ties should be considered.
Newman-Fowler weights
In his research on legislative cosponsorship in the U.S. Congress, Fowler uses the same weighting scheme as Newman used on co-authorship networks. The only difference is that he applies the weights to directed graphs, which means that the weight of the edge from first author $i$ to coauthor $j$ is not necessarily (and in practice, rarely) equal to the reverse edge weight.
Newman and Fowler take the number of coauthors $c$ on each text into account by taking the inverse of that quantity to represent the intensity of the tie. The overall intensity of the tie between authors $i$ and $j$ is the sum of these fractions,
$$ w_{ij} = \sum_{k} \frac{ a_{k} }{ c_{k} } $$
where $a_{k} = 1$ if $i$ and $j$ are coauthors of text $k$ and $0$ otherwise.
This weighting scheme produces strictly positive weights with no upper boundary. It is easy to compute by hand: if coauthor $j$ is the sole coauthor of first author $i$ on three texts, then the intensity of the tie between them will be $3$. If each of these three texts has two coauthors, then the intensity of the tie between $i$ and each coauthor drops to $3 \cdot \frac{1}{2} = 1.5$.
See Fowler's Political Analysis paper, pages 468-469, for further details and examples.
Gross-Kirkland-Shalizi weights
In their paper on cosponsorship in the U.S. Senate, Gross, Kirkland and Shalizi suggest normalizing Newman-Fowler weights by the maximum possible value that these weights might take if $j$ appears on every text authored by $i$, i.e. if $a_{k} = 1$ for every text $k$. The resulting weights,
$$ w_{ij} = \sum_{k} \frac{ a_{k} }{ c_{k} } \cdot \Big( \sum_{k} \frac{ 1 }{ c_{k} } \Big) ^ {-1} $$
are bounded between $0$ and $1$, and stand for the weighted propensity that $j$ is a coauthor of $i$.
The Gross, Kirkland and Shalizi paper, initially written by Gross alone, has not yet been published, and the online versions are not always dated. My code uses the gsw acronym to designate these weights because the first version of the paper that I encountered was signed only by Gross and Shalizi.
Implementation in R
Let's find a way to implement the two edge weighting schemes outlined above, while also computing the "raw" edge weights equal to the total number of co-authorship ties between two authors.
The example data look like this:
A A 2
A B 2
A C 2
A A 1
A C 1
B B 1
B A 1
B B 3
B C 3
B D 3
B E 3
This is the edge list that you get when
author A has written two texts, the first one co-authored by B and C (number of co-authors: 2), the second one co-authored by C alone (number of co-authors: 1)
author B has also written two texts, the first one co-authored by A alone (number of co-authors: 1), the second one co-authored by C, D and E (number of co-authors: 3)
There are two first authors, A and B, who are also co-authors, and three more co-authors, C, D and E.
Note that the edge list identifies the first authors through the self-loops, which also tell you where each text "starts" in the edge list: the first three rows correspond to the first text, the next two rows correspond to the second text, and so on. The order of the co-authors might or might not be relevant.
Here's how to get the example data in R, using the dplyr package to generate it from bound data frames. The i column contains the first author of each text, the j column contains the co-authors, and the w column holds the number of co-authors, which is just the number of rows in the data frame, minus 1 (i.e. minus the first author):
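(The original snippet is not preserved in this copy, so here is a close reconstruction; the text_df helper is my own naming, not from the note.)

```r
library(dplyr)

# one data frame per text: first author i, all authors j (including
# the self-loop), and w = number of co-authors = number of rows minus one
text_df <- function(i, coauthors) {
  d <- data.frame(i = i, j = c(i, coauthors), stringsAsFactors = FALSE)
  d$w <- nrow(d) - 1
  d
}

edges <- bind_rows(
  text_df("A", c("B", "C")),      # text 1: A, co-authored by B and C
  text_df("A", "C"),              # text 2: A, co-authored by C alone
  text_df("B", "A"),              # text 3: B, co-authored by A alone
  text_df("B", c("C", "D", "E"))  # text 4: B, co-authored by C, D and E
)
```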
Let's extract the self-loops and create a table object, n_au, which contains the number of texts by each first author. In this example, both A and B have authored two texts:
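(Again, a sketch of the missing snippet, continuing from the edges object above:)

```r
self <- edges[edges$i == edges$j, ]  # self-loops mark the first authors
n_au <- table(self$i)
n_au
#> A B
#> 2 2
```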
Going back to the main edge list, we drop the self-loops and count how many texts were co-authored by each author, storing the result in the n_co table object. In this example, the most active co-author is C, who co-authored three texts:
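(A sketch of that step:)

```r
edges <- edges[edges$i != edges$j, ]  # drop the self-loops
n_co <- table(edges$j)                # texts co-authored by each author
n_co
#> A B C D E
#> 1 1 3 1 1
```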
At that stage, remember that we also want to compute the "raw" edge weights, i.e. the number of times a tie exists between two authors. In order to get that quantity, we collapse the (directed) edge list to the character vector ij of the form X->Y, where X is the first author and Y the co-author:
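(One way to build that vector and its tabulation:)

```r
# collapse each directed tie to a single "X->Y" key
edges$ij <- paste(edges$i, edges$j, sep = "->")
raw <- table(edges$ij)
raw
#> A->B A->C B->A B->C B->D B->E
#>    1    2    1    1    1    1
```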
The raw object, which contains the tabulation of the ij vector, correctly indicates that A and C have co-authored two texts together, while all other co-authorship ties are unique.
Let's now compute the Newman-Fowler weights. Since we have a column with the number of co-authors per text, these are pretty easy to get: all it takes is to apply an inverse sum function to each tie.
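(A minimal sketch of that operation, using base R's aggregate:)

```r
# Newman-Fowler weight of each tie: the sum of 1 / (number of
# co-authors) over all texts where the tie appears
edges <- aggregate(w ~ ij, data = edges, FUN = function(x) sum(1 / x))
```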
The operation above has collapsed the edge list into the following object, where the w column now holds the Newman-Fowler weights of each unique tie in the network. As expected, the strongest edge weight is that of the tie between authors A and C.
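(Printed, the collapsed object should look roughly like this; the exact column formatting may differ:)

```r
edges
#>     ij         w
#> 1 A->B 0.5000000
#> 2 A->C 1.5000000
#> 3 B->A 1.0000000
#> 4 B->C 0.3333333
#> 5 B->D 0.3333333
#> 6 B->E 0.3333333
```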
Let's now re-expand the edges object into a proper edge list by cutting the ij column into its two parts, while adding the raw number of co-authorship ties as the raw column, and renaming the w column to nfw, for "Newman-Fowler weights":
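(A reconstruction of that step:)

```r
# split "X->Y" back into i and j, attach raw counts, rename w to nfw
parts <- strsplit(edges$ij, "->", fixed = TRUE)
edges$i <- sapply(parts, `[`, 1)
edges$j <- sapply(parts, `[`, 2)
edges$raw <- as.integer(raw[edges$ij])
names(edges)[names(edges) == "w"] <- "nfw"
```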
Note that the code above requires that none of the authors featured in the network contain the character string -> in their names.
We are left with the Gross-Kirkland-Shalizi weights to compute. The denominator of these weights is the maximum value that the Newman-Fowler weights might take, which can be computed from the self object that we created by extracting the self-loops. Here's the complete trick:
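(A sketch of the trick, matching the three steps described just below:)

```r
# denominator: maximum possible Newman-Fowler weight of ties sent by
# each first author i, i.e. the sum of 1 / c over all of i's texts
denom <- aggregate(w ~ i, data = self, FUN = function(x) sum(1 / x))
edges <- merge(edges, denom, by = "i")
edges$gsw <- edges$nfw / edges$w
```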
What did we do here?
The aggregate function computed the maximum possible value of the Newman-Fowler weight involving each first author i, storing the result into the w column.
The result was merged into the edge list, which now has Newman-Fowler weights in the nfw column and the denominator of the Gross-Kirkland-Shalizi weights in the w column.
The gsw column holds the ratio of the two columns, which will vary between $0$ and $1$. We can actually check that this is the case by adding one line of control flow:
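```r
stopifnot(all(edges$gsw > 0 & edges$gsw <= 1))
```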
Last, we finalize the edge list by dropping the denominator of the Gross-Kirkland-Shalizi weights:
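(A sketch of the final step, with the values implied by the example data:)

```r
edges$w <- NULL  # drop the denominator column
edges
#>   i   ij       nfw j raw       gsw
#> 1 A A->B 0.5000000 B   1 0.3333333
#> 2 A A->C 1.5000000 C   2 1.0000000
#> 3 B B->A 1.0000000 A   1 0.7500000
#> 4 B B->C 0.3333333 C   1 0.2500000
#> 5 B B->D 0.3333333 D   1 0.2500000
#> 6 B B->E 0.3333333 E   1 0.2500000
```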
Creating the weighted network is fairly straightforward from there on, and the n_au and n_co objects can further be used to create vertex attributes indicating how many texts were first-authored or co-authored by each author.
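(The note does not say which network package it uses, so the following igraph sketch is an assumption on my part:)

```r
library(igraph)

g <- graph_from_data_frame(edges[, c("i", "j", "raw", "nfw", "gsw")],
                           directed = TRUE)
V(g)$n_au <- as.integer(n_au[V(g)$name])  # NA for pure co-authors
V(g)$n_co <- as.integer(n_co[V(g)$name])  # NA for authors who never co-author
```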
This Gist contains all the code shown in this note. The dependency on the dplyr package can be easily removed if necessary.
First published on September 18th, 2015 | CommonCrawl |
\begin{document}
\title{Moduli spaces of stable quotients and wall-crossing phenomena}
\begin{abstract} The moduli space of holomorphic maps from Riemann surfaces to the Grassmannian is known to have two kinds of compactifications: Kontsevich's stable map compactification and Marian-Oprea-Pandharipande's stable quotient compactification. Over a non-singular curve, the latter moduli space is Grothendieck's Quot scheme. In this paper, we give the notion of `$\epsilon$-stable quotients' for a positive real number $\epsilon$, and show that stable maps and stable quotients are related by wall-crossing phenomena. We will also discuss Gromov-Witten type invariants associated to $\epsilon$-stable quotients, and investigate them under wall-crossing. \end{abstract} \section{Introduction} The purpose of this paper is to investigate wall-crossing phenomena of several compactifications of the moduli spaces of holomorphic maps from Riemann surfaces to the Grassmannian. So far, two kinds of compactifications are known: Kontsevich's stable map compactification~\cite{Ktor} and Marian-Oprea-Pandharipande's stable quotient compactification~\cite{MOP}. The latter moduli space was introduced rather recently, and it is Grothendieck's Quot scheme over a non-singular curve. In this paper, we will introduce the notion of \textit{$\epsilon$-stable quotients} for a positive real number $\epsilon \in \mathbb{R}_{>0}$, and show that the moduli space of $\epsilon$-stable quotients is a proper Deligne-Mumford stack over $\mathbb{C}$ with a perfect obstruction theory. It will turn out that there is a wall and chamber structure on the space of stability conditions $\epsilon \in \mathbb{R}_{>0}$, and the moduli spaces are constant on chambers but jump at walls, i.e. wall-crossing phenomena occur.
We will see that stable maps and stable quotients are related by the above wall-crossing phenomena. We will also consider the virtual fundamental classes on the moduli spaces of $\epsilon$-stable quotients, the associated enumerative invariants, and investigate them under the change of $\epsilon \in \mathbb{R}_{>0}$. This is interpreted as a wall-crossing formula of Gromov-Witten (GW) type invariants.
\subsection{Stable maps and stable quotients} Let $C$ be a smooth projective curve
over $\mathbb{C}$ of genus $g$, and $\mathbb{G}(r, n)$ the Grassmannian which parameterizes $r$-dimensional $\mathbb{C}$-vector subspaces in $\mathbb{C}^n$. Let us consider a holomorphic map \begin{align}\label{map} f\colon C \to \mathbb{G}(r, n), \end{align} satisfying the following, \begin{align*} f_{\ast}[C]=d \in H_2(\mathbb{G}(r, n), \mathbb{Z})\cong \mathbb{Z}. \end{align*} By the universal property of $\mathbb{G}(r, n)$, giving a map (\ref{map}) is equivalent to giving a quotient, \begin{align}\label{exseq} \mathcal{O}_C^{\oplus n} \twoheadrightarrow Q, \end{align} where $Q$ is a locally free sheaf of rank $n-r$ and degree $d$. The moduli space of maps (\ref{map}) is not compact, and two kinds of compactifications are known: compactification as maps (\ref{map}) or compactification as quotients (\ref{exseq}). \begin{itemize} \item {\bf Stable map compactification:} We attach trees of rational curves to $C$, and consider moduli space of maps from the attached nodal curves to $\mathbb{G}(r, n)$ with finite automorphisms. \item {\bf Quot scheme compactification:} We consider the moduli space of quotients (\ref{exseq}), allowing torsion subsheaves in $Q$. The resulting moduli space is Grothendieck's Quot scheme on $C$. \end{itemize} In the above compactifications, the (stabilization of the) source curve $C$ is fixed in the moduli. If we vary the curve $C$ as a nodal curve and give $m$-marked points on it, we obtain two kinds of compact moduli spaces, \begin{align}\label{Stmap} &\overline{M}_{g, m}(\mathbb{G}(r, n), d), \\ \label{Stquo} &\overline{Q}_{g, m}(\mathbb{G}(r, n), d). \end{align} The space (\ref{Stmap}) is a moduli space of Kontsevich's \textit{stable maps}~\cite{Ktor}. Namely this is the moduli space of data, \begin{align*} (C, p_1, \cdots, p_m, f\colon C\to \mathbb{G}(r, n)), \end{align*} where $C$ is a genus $g$, $m$-pointed nodal curve and $f$ is a morphism with finite automorphisms.
The space (\ref{Stquo}) is a moduli space of Marian-Oprea-Pandharipande's stable quotients~\cite{MOP},
which we call \textit{MOP-stable quotients}.
By definition a
MOP-stable quotient consists of data,
\begin{align}\label{data}
(C, p_1, \cdots, p_m, \mathcal{O}_C^{\oplus n} \stackrel{q}{\twoheadrightarrow} Q),
\end{align}
for an $m$-pointed nodal curve $C$ and a quotient sheaf $Q$ on it, satisfying the following stability condition. \begin{itemize} \item The coherent sheaf $Q$ is locally free near nodes and markings. In particular, the determinant line bundle $\det (Q)$ is well-defined. \item The $\mathbb{R}$-line bundle \begin{align}\label{Rline} \omega_C(p_1 +\cdots +p_m)\otimes \det(Q)^{\otimes \epsilon}, \end{align} is ample for \textit{every} $\epsilon>0$. \end{itemize}
The space (\ref{Stquo}) is the moduli space of
MOP-stable quotients (\ref{data}) with $C$ genus $g$,
$\mathop{\rm rank}\nolimits(Q)=n-r$ and $\deg (Q)=d$. Both moduli spaces (\ref{Stmap}) and (\ref{Stquo}) have the following properties. \begin{itemize} \item The moduli spaces (\ref{Stmap}) and (\ref{Stquo}) are proper
Deligne-Mumford stacks over $\mathbb{C}$ with perfect obstruction theories~\cite{BGW}, \cite{MOP}.
\item The moduli spaces (\ref{Stmap}), (\ref{Stquo}) carry proper morphisms, \begin{align}\label{MQdia} \xymatrix{ \overline{M}_{g, m}(\mathbb{G}(r, n), d) \ar[dr] & & \overline{Q}_{g, m}(\mathbb{G}(r, n), d) \ar[dl] \\ & \overline{M}_{g, m}.& } \end{align} \end{itemize} Here $\overline{M}_{g, m}$ is the moduli space of genus $g$, $m$-pointed stable curves. Taking the fibers of the diagram (\ref{MQdia}) over a non-singular curve $[C] \in \overline{M}_{g, 0}$, we obtain the compactifications as maps (\ref{map}), quotients (\ref{exseq}) respectively. Also the associated virtual fundamental classes
on the moduli spaces (\ref{Stmap}), (\ref{Stquo}) are compared in~\cite[Section~7]{MOP}. \subsection{$\epsilon$-stable quotients} The purpose of this paper is to introduce
a variant of stable quotient theory, depending on a positive real number, \begin{align}\label{fixed} \epsilon \in \mathbb{R}_{>0}. \end{align} We define an \textit{$\epsilon$-stable quotient} to be data (\ref{data}), which has the same property to MOP-stable quotients except the following. \begin{itemize} \item The $\mathbb{R}$-line bundle (\ref{Rline}) is only ample with respect to the \textit{fixed} stability parameter $\epsilon\in \mathbb{R}_{>0}$. \item For any $p\in C$, the torsion subsheaf $\tau(Q)\subset Q$ satisfies \begin{align*} \epsilon \cdot \mathop{\rm length}\nolimits \tau(Q)_{p} \le 1. \end{align*} \end{itemize} The idea of $\epsilon$-stable quotients originates from Hassett's weighted pointed stable curves. In~\cite{BH}, Hassett introduces the notion of weighted pointed stable curves $(C, p_1, \cdots, p_m)$, where $C$ is a nodal curve and $p_i \in C$ are marked points. The stability condition depends on a choice of a weight, \begin{align}\label{intro:weight} (a_1, a_2, \cdots, a_m) \in (0, 1]^m, \end{align} which put a similar constraint for the pointed curve $(C, p_1, \cdots, p_m)$ to our $\epsilon$-stability. (See Definition~\ref{def:weighted}.) A choice of $\epsilon$ in our situation corresponds to a choice of a weight (\ref{intro:weight})
for weighted pointed stable curves.
The moduli space of
$\epsilon$-stable quotients (\ref{data}) with $C$ genus $g$,
$\mathop{\rm rank}\nolimits(Q)=n-r$ and $\deg (Q)=d$ is denoted
by
\begin{align}\label{eStquo}
\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d).
\end{align} We show the following result. (cf.~Theorem~\ref{thm:rep}, Subsection~\ref{subsec:Pro}, Proposition~\ref{prop:wall}, Proposition~\ref{wall2}, Theorem~\ref{thm:cross}.) \begin{thm}\label{thm:main} (i) The moduli space $\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)$ is a proper Deligne-Mumford stack
over $\mathbb{C}$ with a perfect obstruction theory. Also there is a proper morphism, \begin{align}\label{QMp} \overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d) \to \overline{M}_{g, m}. \end{align}
(ii) There is a finite number of values \begin{align*} 0=\epsilon_0<\epsilon_1< \cdots < \epsilon_k<\epsilon_{k+1}=\infty, \end{align*} such that we have \begin{align*} \overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)= \overline{Q}_{g, m}^{\epsilon_i}(\mathbb{G}(r, n), d), \end{align*} for $\epsilon \in (\epsilon_{i-1}, \epsilon_{i}]$.
(iii) We have the following. \begin{align*} \overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d) &\cong \overline{M}_{g, m}(\mathbb{G}(r, n), d), \quad \epsilon >2, \\ \overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d) &\cong \overline{Q}_{g, m}(\mathbb{G}(r, n), d), \quad 0<\epsilon \le 1/d. \end{align*} \end{thm} By Theorem~\ref{thm:main} (i), there is the associated virtual fundamental class, \begin{align*} [\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)]^{\rm{vir}} \in A_{\ast} (\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d), \mathbb{Q}). \end{align*} A comparison of the above virtual fundamental classes under change of $\epsilon$ is obtained as follows. (cf.~Theorem~\ref{thm:wcf}.) \begin{thm} For $\epsilon \ge \epsilon' >0$ satisfying $2g-2+\epsilon' \cdot d>0$, there is a diagram, \begin{align*} \xymatrix{
\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d) \ar[r]^{\iota^{\epsilon}} & \overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(1,\dbinom{n}{r} ), d) \ar[d] _{c_{\epsilon, \epsilon'}}, \\
\overline{Q}_{g, m}^{\epsilon'}(\mathbb{G}(r, n), d) \ar[r]^{\iota^{\epsilon'}} & \overline{Q}_{g, m}^{\epsilon'}(\mathbb{G}(1, \dbinom{n}{r}), d), } \end{align*} such that we have \begin{align*} c_{\epsilon, \epsilon' \ast}\iota_{\ast}^{\epsilon} [\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)]^{\rm vir} =\iota_{\ast}^{\epsilon'} [\overline{Q}_{g, m}^{\epsilon'}(\mathbb{G}(r, n), d)]^{\rm vir} \end{align*} \end{thm}
The above theorem, which is a refinement of the result in~\cite[Section~7]{MOP},
is interpreted as a wall-crossing formula relevant to the GW theory. \subsection{Invariants on Calabi-Yau 3-folds} The idea of $\epsilon$-stable quotients is also applied to define new quantum invariants on some compact or non-compact Calabi-Yau 3-folds. One of the interesting examples is a system of invariants on a quintic Calabi-Yau 3-fold $X\subset \mathbb{P}^4$. In Section~\ref{sec:enu}, we associate the substack, \begin{align*} \overline{Q}_{0, m}^{\epsilon}(X, d) \subset \overline{Q}_{0, m}^{\epsilon}(\mathbb{P}^4, d), \end{align*} such that when $\epsilon>2$, it coincides with the moduli space of genus zero, degree $d$ stable maps to $X$. There is a perfect obstruction theory on the space $\overline{Q}_{0, m}^{\epsilon}(X, d)$, hence the virtual class, \begin{align*} [\overline{Q}_{0, m}^{\epsilon}(X, d)]^{\rm vir} \in A_{\ast}(\overline{Q}_{0, m}^{\epsilon}(X, d), \mathbb{Q}), \end{align*} with virtual dimension $m$. In particular, the zero-pointed moduli space yields the invariant, \begin{align*} N_{0, d}^{\epsilon}(X)= \int_{[\overline{Q}_{0, 0}^{\epsilon}(X, d)]^{\rm vir}}1 \in \mathbb{Q}. \end{align*} For $\epsilon>2$, the invariant $N_{0, d}^{\epsilon}(X)$ coincides with the GW invariant counting genus zero, degree $d$ stable maps to $X$. However, for smaller $\epsilon$, the above invariant may be different from the GW invariant of $X$. The understanding of the wall-crossing phenomena of such invariants seems relevant to the study of the GW theory. In Section~\ref{sec:enu}, we will also discuss such invariants in several other cases.
\subsection{Relation to other works} As pointed out in~\cite[Section~1]{MOP}, only a few proper moduli spaces carrying virtual classes are known, e.g. stable maps~\cite{BGW}, stable sheaves on surfaces or 3-folds~\cite{LTV},~\cite{Thom}, Grothendieck's Quot scheme on non-singular curves~\cite{MO} and MOP-stable quotients~\cite{MOP}. By the result of Theorem~\ref{thm:main}, we have constructed a new family of moduli spaces which have virtual classes.
Before the appearance of stable maps~\cite{Ktor}, the Quot scheme was used for an enumeration problem of curves on the Grassmannian~\cite{Bert1}, \cite{Bert2}, \cite{BDW}. Some relationship between compactifications as maps (\ref{map}) and quotients (\ref{exseq}) is discussed in~\cite{PR}. The fiber of the morphism (\ref{QMp}) over a non-singular curve is an intermediate moduli space between the above two compactifications. This fact seems to give a new insight into the work~\cite{PR}.
Wall-crossing phenomena for stable maps or GW type invariants are discussed in~\cite{BH}, \cite{BaMa}, \cite{GuAl}. In their works, a stability condition is a weight on the marked points, not on maps. In particular, there are no wall-crossing phenomena if there is no point insertion.
After the author finished the work of this paper, he was informed of the closely related work of Mustat$\check{\rm{a}}$-Mustat$\check{\rm{a}}$~\cite{MMu}.
They construct some compactifications
of the moduli space of maps from
Riemann surfaces to the projective space,
which are interpreted as moduli spaces of $\epsilon$-stable quotients of rank one. However, they do not address higher rank quotients, virtual classes, nor the wall-crossing formula. In this sense, the present work is interpreted as a combination of the works~\cite{MOP} and~\cite{MMu}.
Recently wall-crossing formula of Donaldson-Thomas (DT) type invariants have been developed by Kontsevich-Soibelman~\cite{K-S} and Joyce-Song~\cite{JS}. The DT invariant is a counting invariant of stable sheaves on a Calabi-Yau 3-fold, while GW invariant is a counting invariant of stable maps. The relationship between GW invariants and DT invariants is proposed by Maulik-Nekrasov-Okounkov-Pandharipande (MNOP)~\cite{MNOP}, called \textit{GW/DT correspondence}. In the DT side, a number of applications of wall-crossing formula
to the MNOP conjecture have been found recently, such as the \textit{DT/PT-correspondence} and the \textit{rationality conjecture}. (cf.~\cite{BrH}, \cite{StTh}, \cite{Tcurve1}, \cite{Tolim2}.) It seems worth trying to find similar wall-crossing phenomena on the GW side and give an application to the MNOP conjecture. The work of this paper grew out of such an attempt.
\section{Stable quotients} In this section we introduce the notion of $\epsilon$-stable quotients for a positive real number $\epsilon\in \mathbb{R}_{>0}$, study their properties, and give some examples. The $\epsilon$-stable quotients are an extension of the notion of stable quotients introduced by Marian-Oprea-Pandharipande~\cite{MOP}. \subsection{Definition of $\epsilon$-stable quotients} Let $C$ be a connected projective curve over $\mathbb{C}$ with at worst nodal singularities. Suppose that the arithmetic genus of $C$ is $g$, \begin{align*} g=\dim H^1(C, \mathcal{O}_C). \end{align*} Let $C^{ns}\subset C$ be the non-singular locus of $C$. We call the data \begin{align*} (C, p_1, \cdots, p_m), \end{align*} with distinct markings $p_i \in C^{ns}\subset C$ a genus $g$, $m$-pointed \textit{quasi-stable curve}. The notion of quasi-stable quotients is introduced in~\cite[Section~2]{MOP}. \begin{defi}\emph{ Let $C$ be a pointed quasi-stable curve and $q$ a quotient, \begin{align*} \mathcal{O}_C^{\oplus n} \stackrel{q}{\twoheadrightarrow} Q. \end{align*} We say that $q$ is a \textit{quasi-stable quotient} if $Q$ is locally free near nodes and markings. In particular, the torsion subsheaf $\tau(Q)\subset Q$ satisfies \begin{align*} \mathop{\rm Supp}\nolimits \tau(Q) \subset C^{ns} \setminus \{p_1, \cdots, p_m\}. \end{align*}} \end{defi} Let $\mathcal{O}_C^{\oplus n} \stackrel{q}{\twoheadrightarrow} Q$ be a quasi-stable quotient. The quasi-stability implies that the sheaf $Q$ is perfect, i.e. there is a finite locally free resolution $P^{\bullet}$ of $Q$. In particular, the determinant line bundle, \begin{align*} \det(Q)=\bigotimes_{i}(\bigwedge^{\mathop{\rm rk}\nolimits P^{i}}P^i )^{\otimes (-1)^i} \in \mathop{\rm Pic}\nolimits(C), \end{align*} makes sense. The degree of $Q$ is defined by the degree of $\det(Q)$.
We say that a quasi-stable quotient $\mathcal{O}_C^{\oplus n} \twoheadrightarrow Q$ is of \textit{type $(r, n, d)$}, if the following holds, \begin{align*} \mathop{\rm rank}\nolimits Q=n-r, \quad \deg Q=d. \end{align*} For a quasi-stable quotient $\mathcal{O}_C^{\oplus n}\stackrel{q}{\twoheadrightarrow}Q$ and $\epsilon \in \mathbb{R}_{>0}$, the $\mathbb{R}$-line bundle $\mathcal{L}(q, \epsilon)$ is defined by \begin{align}\label{def:L} \mathcal{L}(q, \epsilon)\cneq \omega_C(p_1 +\cdots +p_m) \otimes (\det Q)^{\otimes \epsilon}. \end{align} The notion of stable quotients introduced in~\cite{MOP}, which we call \textit{MOP-stable quotients}, is defined as follows. \begin{defi}\emph{{\bf \cite{MOP}} A quasi-stable quotient $\mathcal{O}_C^{\oplus n}\stackrel{q}{\twoheadrightarrow}Q$ is a \textit{MOP-stable quotient} if the $\mathbb{R}$-line bundle $\mathcal{L}(q, \epsilon)$ is ample for every $\epsilon>0$.} \end{defi} The idea of $\epsilon$-stable quotient is that, we only require the ampleness of $\mathcal{L}(q, \epsilon)$ for a fixed $\epsilon$, (not every $\epsilon>0$,) and put an additional condition on the length of the torsion subsheaf of the quotient sheaf. \begin{defi}\emph{ Let
$\mathcal{O}_C^{\oplus n}\stackrel{q}{\twoheadrightarrow} Q$ be a quasi-stable quotient and $\epsilon$ a positive real number. We say that $q$ is an \textit{$\epsilon$-stable quotient} if the following conditions are satisfied. } \begin{itemize} \item \emph{The $\mathbb{R}$-line bundle $\mathcal{L}(q, \epsilon)$ is ample.} \item \emph{For any point $p\in C$, the torsion subsheaf $\tau(Q)\subset Q$ satisfies the following inequality,} \begin{align}\label{ineq} \epsilon \cdot \mathop{\rm length}\nolimits \tau(Q)_{p} \le 1. \end{align} \end{itemize} \end{defi} Here we give some remarks. \begin{rmk} As we mentioned in the introduction, the definition of $\epsilon$-stable quotients is motivated by Hassett's weighted pointed stable curves~\cite{BH}. We will discuss the relationship between $\epsilon$-stable quotients and weighted pointed stable curves in Subsection~\ref{subsec:Hassett}. \end{rmk} \begin{rmk} The ampleness of $\mathcal{L}(q, \epsilon)$ for every $\epsilon>0$ is equivalent to the ampleness of $\mathcal{L}(q, \epsilon)$ for $0<\epsilon \ll 1$. If $\epsilon>0$ is sufficiently small, then the condition (\ref{ineq}) does not say anything, so MOP-stable quotients coincide with $\epsilon$-stable quotients for $0<\epsilon \ll 1$. \end{rmk} \begin{rmk} For a quasi-stable quotient $\mathcal{O}_C^{\oplus n}\twoheadrightarrow Q$, take the exact sequence, \begin{align*} 0 \to S \to \mathcal{O}_C^{\oplus n} \to Q \to 0. \end{align*} The quasi-stability implies that $S$ is locally free. By taking the dual of the above exact sequence, giving a quasi-stable quotient is equivalent to giving a locally free sheaf $S^{\vee}$ and a morphism \begin{align*} \mathcal{O}_C^{\oplus n} \stackrel{s}{\to} S^{\vee}, \end{align*} which is surjective on nodes and marked points. The $\epsilon$-stability is also defined in terms of data $(S^{\vee}, s)$. \end{rmk}
\begin{rmk}\label{e1} By definition, a
quasi-stable quotient $\mathcal{O}_{C}^{\oplus n}\stackrel{q}{\twoheadrightarrow} Q$ of type $(r, n, d)$ induces a rational map, \begin{align*} f\colon C \dashrightarrow \mathbb{G}(r, n), \end{align*} such that we have \begin{align}\label{eq:deg} \deg f_{\ast}[C] +\mathop{\rm length}\nolimits \tau(Q)=d. \end{align} If $\epsilon >1$, then the condition (\ref{ineq}) is equivalent to that $Q$ is a locally free sheaf. Hence $f$ is an actual map, and the quotient $q$ is isomorphic to the pull-back of the universal quotient on $\mathbb{G}(r, n)$. \end{rmk} Let $C$ be a marked quasi-stable curve. A point $p\in C$ is called \textit{special} if $p$ is a singular point of $C$ or a marked point. For an irreducible component $P\subset C$, we denote by $s(P)$ the number of special points in $P$. The following lemma is obvious. \begin{lem}\label{lem:ob} Let $\mathcal{O}_C^{\oplus n} \stackrel{q}{\twoheadrightarrow} Q$ be a quasi-stable quotient and take $\epsilon \in \mathbb{R}_{>0}$. Then the $\mathbb{R}$-line bundle $\mathcal{L}(q, \epsilon)$ is ample if and only if for any irreducible component $P\subset C$ with genus $g(P)$, the following condition holds. \begin{align}\label{ob1}
&\deg (Q|_{P}) >0, \quad (s(P), g(P))=(2, 0), (0, 1) \\ \label{ob2}
&\deg (Q|_{P})>1/\epsilon, \quad (s(P), g(P))=(1, 0), \\ \label{ob3}
& \deg (Q|_{P})>2/\epsilon, \quad (s(P), g(P))=(0, 0). \end{align} \end{lem} \begin{proof} For an irreducible component $P\subset C$, we have \begin{align*}
\deg (\mathcal{L}(q, \epsilon)|_{P})
=2g(P)-2+s(P)+\epsilon \cdot \deg(Q|_{P}). \end{align*}
Also since $q$ is surjective, we have $\deg(Q|_{P})\ge 0$. Therefore the lemma follows. \end{proof} Here we give some examples. We will discuss some more examples in Section~\ref{sec:ex}. \begin{exam} (i) Let $C$ be a smooth projective curve of genus $g$ and $f\colon C\to \mathbb{G}(r, n)$ a map. Suppose that $f$ is non-constant if $g\le 1$. By pulling back the universal quotient \begin{align*} \mathcal{O}_{\mathbb{G}(r, n)}^{\oplus n} \twoheadrightarrow \mathcal{Q}_{\mathbb{G}(r, n)}, \end{align*} on $\mathbb{G}(r, n)$, we obtain the quotient $\mathcal{O}_C^{\oplus n} \stackrel{q}{\twoheadrightarrow} Q$. It is easy to see that the quotient $q$ is an $\epsilon$-stable quotient for $\epsilon >2$.
(ii) Let $C$ be as in (i) and take distinct points $p_1, \cdots, p_m \in C$. For an effective divisor $D=a_1 p_1 +\cdots a_m p_m$ with $a_i>0$, the quotient \begin{align*} \mathcal{O}_C \stackrel{q}{\twoheadrightarrow} \mathcal{O}_D, \end{align*} is an $\epsilon$-stable quotient if and only if \begin{align*} 2g-2+\epsilon \cdot \sum_{i=1}^{m}a_i>0, \quad 0<\epsilon \le 1/a_i, \end{align*} for all $1\le i\le m$. In this case, the quotient $q$ is MOP-stable if
$g \ge 1$, but this is not the case in genus zero.
(iii) Let $\mathbb{P}^1 \cong C\subset \mathbb{P}^n$ be a line and take distinct points $p_1, p_2 \in C$. By restricting the Euler sequence to $C$, we obtain the exact sequence, \begin{align*} 0 \to \mathcal{O}_C(-1) \stackrel{s}{\to}
\mathcal{O}_C^{\oplus n+1} \to T_{\mathbb{P}^n}(-1)|_{C} \to 0. \end{align*} Composing the natural inclusion $\mathcal{O}_C(-p_1-p_2 -1)\subset \mathcal{O}_C(-1)$ with $s$, we obtain the exact sequence, \begin{align*} 0\to \mathcal{O}_C(-p_1-p_2-1)\to \mathcal{O}_C^{\oplus n+1} \stackrel{q}{\to} Q \to 0. \end{align*} It is easy to see that the
quotient $q$ is $\epsilon$-stable for $\epsilon=1$: indeed, $\det(Q)\cong \mathcal{O}_C(p_1+p_2+1)$ has degree $3>2/\epsilon$, and the torsion subsheaf $\tau(Q)$ has length one at each of $p_1$ and $p_2$. Note that $q$ is not a MOP-stable quotient nor a quotient corresponding to a stable map as in (i). \end{exam}
\subsection{Moduli spaces of $\epsilon$-stable quotients} Here we define the moduli functor of the family of $\epsilon$-stable quotients. We use the language of stacks, and readers can refer to~\cite{GL} for an introduction. First we recall the moduli stack of quasi-stable curves. For a $\mathbb{C}$-scheme $B$, a \textit{family of genus $g$, $m$-pointed quasi-stable curves over $B$} is defined to be data \begin{align*} (\pi \colon \mathcal{C} \to B, p_1, \cdots, p_m), \end{align*} which satisfies the following. \begin{itemize} \item The morphism $\pi \colon \mathcal{C} \to B$ is flat, proper and locally of finite presentation. Its relative dimension is one and $p_1, \cdots, p_m$ are sections of $\pi$. \item For each closed point $b\in B$, the data \begin{align*} (\mathcal{C}_b \cneq \pi^{-1}(b), p_1(b), \cdots, p_m(b)), \end{align*} is an $m$-pointed quasi-stable curve. \end{itemize} The families of genus $g$, $m$-pointed quasi-stable curves form a groupoid $\mathcal{M}_{g, m}(B)$ with the set of isomorphisms, \begin{align*} \mathop{\rm Isom}\nolimits_{\mathcal{M}_{g, m}(B)}((\mathcal{C}, p_1, \cdots, p_m), (\mathcal{C}', p_1', \cdots, p_m')), \end{align*} given by the isomorphisms of schemes over $B$, \begin{align*} \phi \colon \mathcal{C} \stackrel{\cong}{\to}\mathcal{C}', \end{align*} satisfying $\phi(p_i)=p_i'$ for each $1\le i\le m$. The assignment $B\mapsto \mathcal{M}_{g, m}(B)$ forms a 2-functor, \begin{align*} \mathcal{M}_{g, m}\colon \mathop{\rm Sch}\nolimits/\mathbb{C} \to (\mathrm{groupoid}), \end{align*} which is known to be an algebraic stack locally of finite type over $\mathbb{C}$. \begin{defi}\emph{ For given data \begin{align*} \epsilon \in \mathbb{R}_{>0}, \quad (r, n, d)\in \mathbb{Z}^{\oplus 3}, \end{align*} we define the \textit{stack of genus $g$, $m$-pointed $\epsilon$-stable quotient of type $(r, n, d)$} to be the 2-functor, \begin{align}\label{2func} \overline{\mathcal{Q}}^{\epsilon}_{g, m}(\mathbb{G}(r, n), d) \colon \mathop{\rm Sch}\nolimits/\mathbb{C} \to (\mathrm{groupoid}), \end{align} which sends a $\mathbb{C}$-scheme $B$ to the groupoid whose objects consist of data, \begin{align}\label{C} (\pi\colon \mathcal{C} \to B, p_1, \cdots, p_m, \mathcal{O}_{\mathcal{C}}^{\oplus n} \stackrel{q}{\twoheadrightarrow} \mathcal{Q}), \end{align} satisfying the following. } \begin{itemize} \item \emph{$(\pi \colon \mathcal{C} \to B, p_1, \cdots, p_m)$ is a family of genus $g$, $m$-pointed quasi-stable curves over $B$. } \item \emph{$\mathcal{Q}$ is flat over $B$ such that for any $b\in B$, the data \begin{align*} (\mathcal{C}_b, p_1(b), \cdots, p_m(b), \mathcal{O}_{\mathcal{C}_b}^{\oplus n} \stackrel{q_b}{\twoheadrightarrow} \mathcal{Q}_b), \end{align*} is an $\epsilon$-stable quotient of type $(r, n, d)$.} \end{itemize} \emph{For another object over $B$, \begin{align}\label{Can} (\pi'\colon \mathcal{C}' \to B, p_1', \cdots, p_m', \mathcal{O}_{\mathcal{C}'}^{\oplus n} \stackrel{q'}{\twoheadrightarrow} \mathcal{Q}'), \end{align} the set of isomorphisms between (\ref{C}) and (\ref{Can}) is given by \begin{align*} \{ \phi \in \mathop{\rm Isom}\nolimits_{\mathcal{M}_{g, m}(B)}((\mathcal{C}, p_1, \cdots, p_m), (\mathcal{C}', p_1', \cdots, p_m')) \colon \mathop{\rm ker}\nolimits(q)=\mathop{\rm ker}\nolimits(\phi^{\ast}(q'))\}. \end{align*}} \end{defi} By construction, there is an obvious forgetting 1-morphism, \begin{align}\label{forget} \overline{\mathcal{Q}}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d) \to \mathcal{M}_{g, m}.
\end{align}
\sharp \mathop{\rm Aut}\nolimits(\mathcal{O}_P^{\oplus n} \stackrel{q|_{P}}{\twoheadrightarrow}
Q|_{P})<\infty. \end{align*} Hence we may assume that $C$ is irreducible. The cases we need to consider are the following, \begin{align*} (s(C), g(C))=(0, 0), (0, 1), (1, 0), (2, 0). \end{align*} Here we have used the notation in Lemma~\ref{lem:ob}. For simplicity we treat the case of $(s(C), g(C))=(1, 0)$. The other cases are treated similarly.
Let $f$ be a rational map, \begin{align*} f\colon C \dashrightarrow \mathbb{G}(r, n), \end{align*} determined by the quotient $q$. (cf.~Remark~\ref{e1}.) If $f$ is non-constant, then (\ref{aut:fin}) is obviously satisfied. Hence we may assume that $f$ is a constant rational map. By the equality (\ref{eq:deg}), this implies that the torsion subsheaf $\tau(Q) \subset Q$ satisfies \begin{align*} \mathop{\rm length}\nolimits \tau(Q)=\deg Q. \end{align*} Also if $\sharp \mathop{\rm Supp}\nolimits \tau(Q) \ge 2$, then (\ref{aut:fin}) is satisfied, since any automorphism preserves torsion points and special points. Hence we may assume that there is a unique $p\in C$ such that \begin{align*} \mathop{\rm length}\nolimits \tau(Q)_{p}=\mathop{\rm length}\nolimits \tau(Q)=\deg Q. \end{align*} However this contradicts the condition (\ref{ineq}) together with Lemma~\ref{lem:ob}.
By Theorem~\ref{thm:rep}, the 2-functor (\ref{2func}) is interpreted as a geometric object, rather than an abstract 2-functor. In order to emphasize this, we slightly change the notation as follows. \begin{defi} \emph{We denote the Deligne-Mumford moduli stack of genus $g$, $m$-pointed $\epsilon$-stable quotients of type $(r, n, d)$ by \begin{align*} \overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d). \end{align*}} \end{defi} When $r=1$, we occasionally write \begin{align*} \overline{Q}_{g, m}^{\epsilon}(\mathbb{P}^{n-1}, d) \cneq \overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(1, n), d). \end{align*} The universal curve is denoted by \begin{align}\label{univ1} \pi^{\epsilon} \colon U^{\epsilon} \to \overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d), \end{align} and we have the universal quotient, \begin{align}\label{univ2} 0\to S_{U^{\epsilon}} \to \mathcal{O}_{U^{\epsilon}}^{\oplus n} \stackrel{q_{U^{\epsilon}}}{\to} Q_{U^{\epsilon}} \to 0. \end{align} \subsection{Structures of the moduli spaces of $\epsilon$-stable quotients} \label{subsec:Pro}
Below we discuss some structures on
the moduli spaces of $\epsilon$-stable
quotients. Similar structures for MOP-stable
quotients are discussed in~\cite[Section~3]{MOP}.
Let $\overline{M}_{g, m}$ be the moduli stack of genus $g$, $m$-pointed stable curves. By composing (\ref{forget}) with the stabilization morphism, we obtain the proper morphism between Deligne-Mumford stacks, \begin{align*} \nu^{\epsilon} \colon \overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d) \to \overline{M}_{g, m}. \end{align*} For an $\epsilon$-stable quotient $\mathcal{O}_C^{\oplus n} \twoheadrightarrow Q$ with markings $p_1, \cdots, p_m$, the sheaf $Q$ is locally free at $p_i$. Hence it determines an evaluation map, \begin{align}\label{mor:ev} \mathop{\rm ev}\nolimits_i \colon \overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d) \to \mathbb{G}(r, n). \end{align} Taking the fiber product, \begin{align}\label{fib:pro} \xymatrix{ \overline{Q}_{g_1, m_1+1}^{\epsilon}(\mathbb{G}(r, n), d_1) \times_{\mathop{\rm ev}\nolimits}\overline{Q}_{g_2, m_2+1}^{\epsilon}(\mathbb{G}(r, n), d_2) \ar[r]\ar[d] & \overline{Q}_{g_1, m_1+1}^{\epsilon}(\mathbb{G}(r, n), d_1) \ar[d]^{\mathop{\rm ev}\nolimits_{m_1+1}}, \\ \overline{Q}_{g_2, m_2+1}^{\epsilon}(\mathbb{G}(r, n), d_2) \ar[r]^{\mathop{\rm ev}\nolimits_1} & \mathbb{G}(r, n), } \end{align} we have the natural morphism, \begin{align}\notag \overline{Q}_{g_1, m_1+1}^{\epsilon}(\mathbb{G}(r, n), d_1) \times_{\mathop{\rm ev}\nolimits}\overline{Q}_{g_2, m_2+1}^{\epsilon}&(\mathbb{G}(r, n), d_2) \\ \label{glue} &\to \overline{Q}_{g_1+g_2, m_1+m_2}^{\epsilon}(\mathbb{G}(r, n), d_1+d_2), \end{align} defined by gluing $\epsilon$-stable quotients at the marked points. The standard $\mathop{\rm GL}\nolimits_{n}(\mathbb{C})$-action on $\mathcal{O}_C^{\oplus n}$ induces an $\mathop{\rm GL}\nolimits_n(\mathbb{C})$-action on $\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)$, i.e. \begin{align*} g\cdot (\mathcal{O}_C^{\oplus n}\stackrel{q}{\twoheadrightarrow} Q)=(\mathcal{O}_C^{\oplus n}\stackrel{q\circ g}{\twoheadrightarrow} Q), \end{align*} for $g\in \mathop{\rm GL}\nolimits_n(\mathbb{C})$. The morphisms (\ref{mor:ev}), (\ref{glue}) are $\mathop{\rm GL}\nolimits_n(\mathbb{C})$-equivariant. \subsection{Virtual fundamental classes} The moduli space of $\epsilon$-stable quotients have the associated virtual fundamental class. The following is an analogue of \cite[Theorem~2, Lemma~4]{MOP} in our situation. \begin{thm}\label{thm:vir} There is a $\mathop{\rm GL}\nolimits_n(\mathbb{C})$-equivariant
$2$-term perfect obstruction theory on $\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)$. In particular there is a virtual fundamental class, \begin{align*} [\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)]^{\rm vir} \in A_{\ast}^{\mathop{\rm GL}\nolimits_n(\mathbb{C})} (\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d), \mathbb{Q}), \end{align*} in the $\mathop{\rm GL}\nolimits_n(\mathbb{C})$-equivariant Chow group. The virtual dimension is given by \begin{align*} nd +r(n-r)(1-g)+3g-3+m, \end{align*} which does not depend on a choice of $\epsilon$. \end{thm} \begin{proof} The same argument of~\cite[Theorem~2, Lemma~4]{MOP} works. For the reader's convenience, we provide the argument. For a fixed marked quasi-stable curve, \begin{align*} (C, p_1, \cdots, p_m)\in \mathcal{M}_{g, m}, \end{align*} the moduli space of $\epsilon$-stable quotients is an open set of the Quot scheme. On the other hand, the deformation theory of the Quot scheme on a non-singular curve is obtained in ~\cite{CK}, \cite{MO}. Noting that any quasi-stable quotient is locally free near nodes, the analogues construction yields the 2-term obstruction theory relative to the forgetting 1-morphism $\nu$, \begin{align*} \nu \colon \overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d) \to \mathcal{M}_{g, m}, \end{align*} given by $\mathbf{R} \pi^{\epsilon}_{\ast}
\mathcal{H} om(S_{U^{\epsilon}}, Q_{U^{\epsilon}})^{\ast}$. (See~(\ref{univ1}), (\ref{univ2}).) The absolute obstruction theory is given by the cone $E^{\bullet}$ of the morphism~\cite{BF}, \cite{GP}, \begin{align*} \mathbf{R} \pi^{\epsilon}_{\ast}
\mathcal{H} om(S_{U^{\epsilon}}, Q_{U^{\epsilon}})^{\ast}
\to \nu^{\ast}\mathbb{L}_{\mathcal{M}_{g, m}}[1], \end{align*} where $\mathbb{L}_{\mathcal{M}_{g, m}}$ is the cotangent complex of the algebraic stack $\mathcal{M}_{g, m}$. By Lemma~\ref{lem:aut}, the complex $E^{\bullet}$ is concentrated in $[-1, 0]$. Let $\mathcal{O}_C^{\oplus n} \twoheadrightarrow Q$ be an $\epsilon$-stable quotient with kernel $S$ and marked points $p_1, \cdots, p_m$. By the above description of the obstruction theory and the Riemann-Roch theorem, the virtual dimension is given by \begin{align*} \chi(S, Q)-\chi(T_{C}(-\sum_{i=1}^m p_i)) =nd+r(n-r)(1-g)+3g-3+m. \end{align*} \end{proof} By the proof of the above theorem, the tangent space $\mathop{\rm Tan}\nolimits_{q}$ and the obstruction space $\mathop{\rm Obs}\nolimits_{q}$ at the $\epsilon$-stable quotient $q\colon \mathcal{O}_C^{\oplus n}\twoheadrightarrow Q$ with kernel $S$ and marked points $p_1, \cdots, p_m$ fit into the exact sequence, \begin{align}\notag 0 & \to H^0(C, T_C(-\sum_{i=1}^{m}p_i)) \to \mathop{\rm Hom}\nolimits(S, Q) \to \mathop{\rm Tan}\nolimits_{q} \\ \label{tanob} & \to H^1(C, T_C(-\sum_{i=1}^{m}p_i)) \to \mathop{\rm Ext}\nolimits^1(S, Q) \to \mathop{\rm Obs}\nolimits_{q}\to 0. \end{align} In the genus zero case, the obstruction space vanishes, hence the moduli space is non-singular. \begin{lem}\label{lem:nonsing} The Deligne-Mumford stack $\overline{Q}_{0, m}^{\epsilon}(\mathbb{G}(r, n), d)$ is non-singular of expected dimension $nd+r(n-r)+m-3$. \end{lem} \begin{proof} In the notation of the exact sequence (\ref{tanob}), it is enough to see that \begin{align}\label{genus0} \mathop{\rm Ext}\nolimits^1(S, Q)=H^0(C, S\otimes \widetilde{Q}^{\vee}\otimes \omega_C)^{\ast}=0, \end{align} when the genus of $C$ is zero. Here $\widetilde{Q}$ is the free part of $Q$, i.e. $Q/\tau(Q)$ for the torsion subsheaf $\tau(Q)\subset Q$. For any irreducible component $P\subset C$ with $s(P)=1$, it is easy to see that \begin{align*}
\deg(S\otimes \widetilde{Q}^{\vee}\otimes \omega_C)|_{P}<0, \end{align*} by Lemma~\ref{lem:ob}. The vanishing (\ref{genus0}) then follows. \end{proof}
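As a quick consistency check of the dimension formula (the check is ours and is not used in the sequel), consider quotients of type $(1, 1, d)$ with $m=0$, i.e. the case $\mathbb{G}(r, n)=\mathbb{G}(1, 1)$. Lemma~\ref{lem:nonsing} predicts \begin{align*} \dim \overline{Q}_{0, 0}^{\epsilon}(\mathbb{G}(1, 1), d)=1\cdot d+1\cdot(1-1)+0-3=d-3, \end{align*} which agrees with the description of this moduli space as a quotient of a moduli space of genus zero, $d$-pointed weighted stable curves given in Section~\ref{sec:ex}.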
\subsection{Wall-crossing phenomena of $\epsilon$-stable quotients} Here we see that only finitely many values in $\mathbb{R}_{>0}$ occur as walls, so that the moduli spaces of $\epsilon$-stable quotients are constant on each interval between consecutive walls. First we treat the case of \begin{align}\label{gm} (g, m)\neq (0, 0). \end{align} We set \begin{align*} 0=\epsilon_{0}<\epsilon_1 <\cdots <\epsilon_d<\epsilon_{d+1}=\infty, \end{align*} as follows, \begin{align}\label{ei} \epsilon_{i}=\frac{1}{d-i+1}, \quad 1\le i\le d. \end{align} \begin{prop}\label{prop:wall} Under the condition (\ref{gm}), take $\epsilon \in (\epsilon_{i-1}, \epsilon_i]$ where $\epsilon_i$ is given by (\ref{ei}). Then we have \begin{align*} \overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)= \overline{Q}_{g, m}^{\epsilon_i}(\mathbb{G}(r, n), d). \end{align*} \end{prop} \begin{proof} Let us take a quasi-stable quotient of type $(r, n, d)$, \begin{align}\label{takest} (\mathcal{O}_C^{\oplus n} \stackrel{q}{\twoheadrightarrow} Q). \end{align} First we show that if (\ref{takest}) is $\epsilon$-stable, then it is also $\epsilon_i$-stable. Since $\epsilon\le \epsilon_i$, the ampleness of $\mathcal{L}(q, \epsilon)$ also implies the ampleness of $\mathcal{L}(q, \epsilon_i)$. For $p\in C$, let us denote by $l_p$ the length of $\tau(Q)$ at $p$. If $l_p\neq 0$, the condition (\ref{ineq}) implies \begin{align}\label{ineq2} 0<\epsilon \le \frac{1}{l_p}. \end{align} Since $l_p \le d$ and $\epsilon>\epsilon_{i-1}$, the inequality (\ref{ineq2}) also implies $\epsilon_i \le 1/l_p$, which in turn implies the condition (\ref{ineq}) for $\epsilon_i$.
Conversely suppose that the quasi-stable quotient (\ref{takest}) is $\epsilon_i$-stable. The inequality (\ref{ineq}) for $\epsilon_i$ also implies (\ref{ineq}) for $\epsilon$ since $\epsilon \le \epsilon_i$. In order to see that $\mathcal{L}(q, \epsilon)$ is ample, take an irreducible component $P\subset C$ and check (\ref{ob1}), (\ref{ob2}) and (\ref{ob3}). The condition (\ref{ob1}) does not depend on $\epsilon$, so (\ref{ob1}) is satisfied. Also the assumption (\ref{gm}) implies that the case (\ref{ob3}) does not occur, hence we only have to check (\ref{ob2}).
We denote by $d_P$ the degree of $Q|_{P}$. If $s(P)=1$ and $g(P)=0$, we have \begin{align}\label{ineqd} \epsilon_i>\frac{1}{d_P}, \end{align} by the condition (\ref{ob2}) for $\epsilon_i$. Since $d_P\le d$, (\ref{ineqd}) implies \begin{align*} \epsilon>\epsilon_{i-1} \ge \frac{1}{d_P}, \end{align*} which in turn implies the condition (\ref{ob2}) for $\epsilon$. Hence (\ref{takest}) is $\epsilon$-stable. \end{proof} Next we treat the case of $(g, m)=(0, 0)$. In this case, the moduli space is empty for small $\epsilon$. \begin{lem}\label{lem:empty} For $0<\epsilon \le 2/d$, we have \begin{align*} \overline{Q}_{0, 0}^{\epsilon}(\mathbb{G}(r, n), d)=\emptyset. \end{align*} \end{lem} \begin{proof} If $\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)$ is non-empty, the ampleness of $\mathcal{L}(q, \epsilon)$ yields, \begin{align*} 2g-2+m+\epsilon \cdot d>0. \end{align*} Hence if $g=m=0$, then $\epsilon$ must satisfy $\epsilon >2/d$. \end{proof} Let $d'\in \mathbb{Z}$ be the integer part of $d/2$. We set $0=\epsilon_0<\epsilon_1<\cdots$ in the following way. \begin{align}\notag &\epsilon_1=2, \ \epsilon_2=\infty, \quad (d=1) \\ \label{d:odd} &\epsilon_1=\frac{2}{d}, \ \epsilon_i=\frac{1}{d'-i+2}, \ (2\le i\le d'+1), \ \epsilon_{d'+2}=\infty, \quad ( d\ge 3 \mbox{ is odd.}) \\ \label{d:even} & \epsilon_i=\frac{1}{d'-i+1}, \ (1\le i\le d'), \ \epsilon_{d'+1}=\infty, \quad ( d \mbox{ is even.}) \end{align} We have the following. \begin{prop}\label{wall2} For $\epsilon_{\bullet}$ as above, we have \begin{align*} \overline{Q}_{0, 0}^{\epsilon}(\mathbb{G}(r, n), d)= \overline{Q}_{0, 0}^{\epsilon_i}(\mathbb{G}(r, n), d), \end{align*} for $\epsilon \in (\epsilon_{i-1}, \epsilon_i]$. \end{prop} \begin{proof} By Lemma~\ref{lem:empty}, we may assume that $\epsilon_{i-1}\ge 2/d$. Then we can follow essentially the same argument as in Proposition~\ref{prop:wall}. The argument is more subtle since we have to take
the condition (\ref{ob3}) into consideration,
but we leave the details to the reader. \end{proof} Let $\overline{M}_{g, m}(\mathbb{G}(r, n), d)$ be the moduli space of genus $g$, $m$-pointed stable maps $f\colon C \to \mathbb{G}(r, n)$, satisfying \begin{align*} f_{\ast}[C]=d \in H_2(\mathbb{G}(r, n), \mathbb{Z})\cong \mathbb{Z}. \end{align*} (cf.~\cite{Ktor}.) Also we denote by $\overline{Q}_{g, m}(\mathbb{G}(r, n), d)$ the moduli space of MOP-stable quotients of type $(r, n, d)$, constructed in~\cite{MOP}. By the following result, we see that both moduli spaces are related by wall-crossing phenomena of $\epsilon$-stable quotients. \begin{thm}\label{thm:cross} (i) For $\epsilon>2$, we have \begin{align}\label{isom1} \overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d) \cong \overline{M}_{g, m}(\mathbb{G}(r, n), d). \end{align} (ii) For $0<\epsilon \le 1/d$, we have \begin{align} \label{isom2} \overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d) \cong \overline{Q}_{g, m}(\mathbb{G}(r, n), d). \end{align} \end{thm} \begin{proof} (i) First take an $\epsilon$-stable quotient $\mathcal{O}_C^{\oplus n}\stackrel{q}{\twoheadrightarrow} Q$ for some $\epsilon>2$, with marked points $p_1, \cdots, p_m$.
By Proposition~\ref{prop:wall} and Proposition~\ref{wall2}, we may take $\epsilon=3$. The condition (\ref{ineq}) implies that $Q$ is locally free, hence $q$ determines a map, \begin{align}\label{stabmap} f\colon C \to \mathbb{G}(r, n). \end{align} Also the ampleness of $\mathcal{L}(q, 3)$ is equivalent to the ampleness of the line bundle \begin{align}\label{amp:imp} \omega_C(p_1+\cdots +p_m)\otimes f^{\ast}\mathcal{O}_{G}(3), \end{align} where $\mathcal{O}_G(1)$ is the restriction of $\mathcal{O}(1)$ to $\mathbb{G}(r, n)$ via the Pl\"{u}cker embedding. The ampleness of (\ref{amp:imp}) implies that the map $f$ is a stable map.
Conversely take an $m$-pointed stable map, \begin{align*} f\colon C\to \mathbb{G}(r, n), \quad p_1, \cdots, p_m \in C, \end{align*} and a quotient $\mathcal{O}_C^{\oplus n} \stackrel{q}{\twoheadrightarrow} Q$ by pulling back the universal quotient on $\mathbb{G}(r, n)$ via $f$. Then the stability of the map $f$ implies the ampleness of the line bundle (\ref{amp:imp}), hence the ampleness of $\mathcal{L}(q, 3)$. Also the condition (\ref{ineq}) is automatically satisfied for $\epsilon=3$ since $Q$ is locally free. Hence we obtain the isomorphism (\ref{isom1}).
(ii) If $(g, m)=(0, 0)$, then both sides of (\ref{isom2}) are empty, so we may assume that $(g, m)\neq (0, 0)$. Let us take an $\epsilon$-stable quotient $\mathcal{O}_C^{\oplus n}\stackrel{q}{\twoheadrightarrow} Q$ for $0<\epsilon \le 1/d$. For any irreducible component $P\subset C$, we have
$\deg(Q|_{P})\le d$. By Lemma~\ref{lem:ob}, this implies that there is no irreducible component $P\subset C$ with \begin{align*} (s(P), g(P))=(0, 0) \mbox{ or }(0, 1). \end{align*} Hence applying Lemma~\ref{lem:ob} again, we see that $q$ is MOP-stable.
Conversely take a MOP-stable quotient $\mathcal{O}_C^{\oplus n}\stackrel{q}{\twoheadrightarrow} Q$ and $0<\epsilon \le 1/d$. By the definition of MOP-stable quotient, the line bundle $\mathcal{L}(q, \epsilon)$ is ample. Also for any point $p\in C$, the length of the torsion part of $Q$ is less than or equal to $d$ (cf.~Remark~\ref{e1}). Hence the condition (\ref{ineq}) is satisfied and $q$ is $\epsilon$-stable. Therefore the desired isomorphism (\ref{isom2}) holds. \end{proof} \subsection{Morphisms between moduli spaces of $\epsilon$-stable quotients}\label{subsec:Mor} In this subsection, we construct some natural morphisms between moduli spaces of $\epsilon$-stable quotients. The first one is an analogue of the Pl\"{u}cker embedding. (See~\cite[Section~5]{MOP} for the corresponding morphism between MOP-stable quotients.) \begin{lem} There is a natural morphism, \begin{align}\label{iota} \iota^{\epsilon} \colon \overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d) \to \overline{Q}_{g, m}^{\epsilon}( \mathbb{G}(1, \dbinom{n}{r}), d). \end{align} \end{lem} \begin{proof} For a quasi-stable quotient $\mathcal{O}_C^{\oplus n}\stackrel{q}{\twoheadrightarrow} Q$ of type $(r, n, d)$ with kernel $S$, we associate the exact sequence, \begin{align*} 0\to \wedge^{r} S \to \wedge^{r}\mathcal{O}_C^{\oplus n} \stackrel{q'}{\to}Q' \to 0. \end{align*} It is easy to see that $q$ is $\epsilon$-stable if and only if $q'$ is $\epsilon$-stable. The map $q\mapsto q'$ gives the desired morphism. \end{proof} Next we treat the case of $r=1$. \begin{prop}\label{r=1} For $\epsilon \ge \epsilon'$, there is a natural morphism, \begin{align}\label{mor:c} c_{\epsilon, \epsilon'} \colon \overline{Q}_{g, m}^{\epsilon}(\mathbb{P}^{n-1}, d) \to \overline{Q}_{g, m}^{\epsilon'}(\mathbb{P}^{n-1}, d). \end{align} \end{prop} \begin{proof} For simplicity we deal with the case of $(g, m)\neq (0, 0)$. By Proposition~\ref{prop:wall}, it is enough to construct a morphism \begin{align}\label{ci} c_{i+1, i}\colon \overline{Q}_{g, m}^{\epsilon_{i+1}}(\mathbb{P}^{n-1}, d) \to \overline{Q}_{g, m}^{\epsilon_i}(\mathbb{P}^{n-1}, d), \end{align} where $\epsilon_i$ is given by (\ref{ei}). Let us take an $\epsilon_{i+1}$-stable quotient $\mathcal{O}_C^{\oplus n}\stackrel{q}{\twoheadrightarrow}Q$, and the set of irreducible components $T_1, \cdots, T_k$ of $C$ satisfying \begin{align}\label{T:sat}
(s(T_j), g(T_j))=(1, 0), \quad \deg(Q|_{T_j})=d-i+1. \end{align} Note that $T_j$ and $T_{j'}$ are disjoint for $j\neq j'$, by the assumption $(g, m)\neq (0, 0)$. We set $T$ and $C'$ to be \begin{align}\label{TC'} T=\amalg_{j=1}^{k}T_j, \quad C'=\overline{C\setminus T}. \end{align} The intersection $T_j \cap C'$ consists of one point $x_j$, unless $(g, m)=(0, 1)$, $k=1$ and $i=1$. In the latter case, the space $\overline{Q}_{0, 1}^{\epsilon_1}(\mathbb{P}^{n-1}, d)$ is empty, so there is nothing to prove. Let $S$ be the kernel of $q$. We have the sequence of inclusions, \begin{align*}
S'\cneq S|_{C'}(-\sum_{j=1}^k (d-i+1)x_j) \hookrightarrow
S|_{C'} \hookrightarrow \mathcal{O}_{C'}^{\oplus n}, \end{align*} and the exact sequence, \begin{align*} 0 \to S' \to \mathcal{O}_{C'}^{\oplus n} \stackrel{q'}{\to}Q' \to 0. \end{align*} It is easy to see that $q'$ is an $\epsilon_i$-stable quotient. Then the map $q\mapsto q'$ gives the desired morphism (\ref{ci}). \end{proof}
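As a small worked example of the chamber structure (added for illustration), take $(g, m)\neq (0, 0)$ and $d=3$. Then (\ref{ei}) gives the walls $\epsilon_1=1/3$, $\epsilon_2=1/2$, $\epsilon_3=1$, and the morphisms (\ref{ci}) form the chain \begin{align*} \overline{Q}_{g, m}^{\epsilon_3=1}(\mathbb{P}^{n-1}, 3) \to \overline{Q}_{g, m}^{\epsilon_2=1/2}(\mathbb{P}^{n-1}, 3) \to \overline{Q}_{g, m}^{\epsilon_1=1/3}(\mathbb{P}^{n-1}, 3). \end{align*} By (\ref{T:sat}), the first morphism contracts rational tails $T_j$ with $\deg(Q|_{T_j})=2$, and the second one contracts rational tails with $\deg(Q|_{T_j})=3$; in each case the contracted tail is replaced by torsion of the corresponding length at the attaching point.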
\begin{rmk}
Suppose that $(g, m)\neq (0, 0)$.
By Proposition~\ref{r=1}, Proposition~\ref{prop:wall}
and Theorem~\ref{thm:cross},
we have the sequence of morphisms,
\begin{align}\notag
\overline{M}_{g, m}(\mathbb{P}^{n-1}, d) =
\overline{Q}_{g, m}^{\epsilon_{d+1}}(\mathbb{P}^{n-1}, d)
\to \overline{Q}_{g, m}^{\epsilon_{d}}(\mathbb{P}^{n-1}, d)
\to \cdots \\ \label{mor:seq}
\cdots \to
\overline{Q}_{g, m}^{\epsilon_{2}}(\mathbb{P}^{n-1}, d)
\to \overline{Q}_{g, m}^{\epsilon_{1}}(\mathbb{P}^{n-1}, d)
=\overline{Q}_{g, m}(\mathbb{P}^{n-1}, d).
\end{align}
The composition of the above morphisms
\begin{align}\label{mor:cc}
c\colon \overline{M}_{g, m}(\mathbb{P}^{n-1}, d) \to
\overline{Q}_{g, m}(\mathbb{P}^{n-1}, d)
\end{align}
coincides with the morphism
constructed in~\cite[Section~5]{MOP}.
The morphism $c$
also appears for the Quot scheme of a fixed
non-singular curve in~\cite{PR}.
\end{rmk}
Let us investigate the morphism (\ref{ci})
more precisely.
For $k\in \mathbb{Z}_{\ge 0}$,
we consider a subspace,
\begin{align}\label{sub:k}
\overline{Q}_{g, m}^{\epsilon_{i+1}, k+}(\mathbb{P}^{n-1}, d)
\subset \overline{Q}_{g, m}^{\epsilon_{i+1}}(\mathbb{P}^{n-1}, d),
\end{align}
consisting of $\epsilon_{i+1}$-stable
quotients with
exactly $k$ irreducible
components $T_1, \cdots, T_k$
satisfying (\ref{T:sat}).
Setting $d_i=d-i+1$, the subspace (\ref{sub:k}) fits into the Cartesian diagram, \begin{align}\label{Car} \xymatrix{
\overline{Q}_{g, m}^{\epsilon_{i+1}, k+}(\mathbb{P}^{n-1}, d)
\ar[r]\ar[d] &
\overline{Q}_{0, 1}^{\epsilon_{i+1}}(\mathbb{P}^{n-1}, d_i)^{\times k}
\ar[d]^{(\mathop{\rm ev}\nolimits_1)^{\times k}}, \\
\overline{Q}_{g, k+m}^{\epsilon_{i}, \epsilon_{i+1}} (\mathbb{P}^{n-1}, d-kd_i)
\ar[r] & (\mathbb{P}^{n-1})^{\times k}. } \end{align} Here the bottom arrow is the evaluation map with respect to the first $k$ marked points, and
the space
\begin{align}\label{both}
\overline{Q}_{g, m}^{\epsilon_{i}, \epsilon_{i+1}}
(\mathbb{P}^{n-1}, d)
\end{align}
is the moduli space of genus $g$, $m$-marked quasi-stable
quotients of type $(1, n, d)$ which
are both $\epsilon_i$-stable and $\epsilon_{i+1}$-stable.
The space (\ref{both}) is an open Deligne-Mumford
substack of $\overline{Q}_{g, m}^{\epsilon}
(\mathbb{P}^{n-1}, d)$
for both $\epsilon=\epsilon_{i}$ and $\epsilon_{i+1}$. Note that the left arrow of (\ref{Car}) is surjective since the right arrow is surjective.
We also consider a subspace
\begin{align*}
\overline{Q}_{g, m}^{\epsilon_{i}, k-}(\mathbb{P}^{n-1}, d)
\subset \overline{Q}_{g, m}^{\epsilon_{i}}(\mathbb{P}^{n-1}, d),
\end{align*}
consisting of $\epsilon_i$-stable quotients
$\mathcal{O}_C^{\oplus n}\stackrel{q}{\twoheadrightarrow} Q$
with exactly $k$ distinct points $x_1, \cdots, x_k \in C$
satisfying
\begin{align*}
\mathop{\rm length}\nolimits \tau(Q)_{x_j}=d_i, \quad
1\le j\le k.
\end{align*}
Obviously we have the isomorphism,
\begin{align}\label{isom:ob}
\overline{Q}_{g, m}^{\epsilon_{i}, k-}(\mathbb{P}^{n-1}, d)
\cong
\overline{Q}_{g, k+m}^{\epsilon_{i}, \epsilon_{i+1}}
(\mathbb{P}^{n-1}, d-kd_i),
\end{align}
and the construction of (\ref{ci}) yields the Cartesian
diagram,
\begin{align}\label{dig:Car}
\xymatrix{
\overline{Q}_{g, m}^{\epsilon_{i+1}, k+}(\mathbb{P}^{n-1}, d)
\ar[r]\ar[d] &
\overline{Q}_{g, m}^{\epsilon_{i+1}}(\mathbb{P}^{n-1}, d)
\ar[d]^{c_{i+1, i}}, \\
\overline{Q}_{g, m}^{\epsilon_{i}, k-}(\mathbb{P}^{n-1}, d)
\ar[r] &
\overline{Q}_{g, m}^{\epsilon_{i}}(\mathbb{P}^{n-1}, d). }
\end{align}
The left arrow of the diagram (\ref{dig:Car})
coincides with the left arrow of (\ref{Car})
under the isomorphism (\ref{isom:ob}), and in particular it is surjective. The above argument implies the following. \begin{lem}\label{lem:surj} The morphism $c_{\epsilon, \epsilon'}$ constructed in Proposition~\ref{r=1} is surjective. \end{lem}
For $r>1$, it seems that there is no natural morphism between $\overline{M}_{g, m}(\mathbb{G}(r, n), d)$ and $\overline{Q}_{g, m}(\mathbb{G}(r, n), d)$, as pointed out in~\cite{MOP}, \cite{PR}. However for $\epsilon=1$, there is a natural morphism between moduli spaces of stable maps and those of $\epsilon$-stable quotients. The following lemma will be used in Lemma~\ref{lem:eqconn} below. \begin{lem}\label{lem:worth} There is a natural surjective morphism, \begin{align*} c'\colon \overline{M}_{g, m}(\mathbb{G}(r, n), d) \to \overline{Q}_{g, m}^{\epsilon=1}(\mathbb{G}(r, n), d). \end{align*} \end{lem} \begin{proof} For simplicity, we assume that $(g, m)\neq (0, 0)$. For a stable map $f\colon C \to \mathbb{G}(r, n)$ of degree $d$, pulling back the universal quotient yields the exact sequence, \begin{align}\label{asseq} 0 \to S \to \mathcal{O}_C^{\oplus n} \stackrel{q}{\to}Q \to 0. \end{align} Here $Q$ is a locally free sheaf on $C$ and the quotient $q$ is of type $(r, n, d)$. Let $T_1, \cdots, T_k$ be the set of irreducible components of $C$, satisfying the following, \begin{align*}
(s(T_j), g(T_j))=(1, 0), \quad \deg(Q|_{T_j})=1. \end{align*} By the exact sequence (\ref{asseq}) and degree reasons, the following isomorphisms exist, \begin{align}\label{isomS}
Q|_{T_j}\cong \mathcal{O}_{\mathbb{P}^1}(1)\oplus \mathcal{O}_{\mathbb{P}^1}^{\oplus n-r-1}, \quad
S|_{T_j}\cong \mathcal{O}_{\mathbb{P}^1}(-1)\oplus \mathcal{O}_{\mathbb{P}^1}^{\oplus r-1}. \end{align} We set $T$ and $C'$ as in (\ref{TC'}), and set $x_j=T_j \cap C'$. Let $\pi$ be the morphism \begin{align*} \pi \colon C \to C', \end{align*} which is identity outside $T$ and contracts $T_j$ to $x_j$. The exact sequences \begin{align}
0 \to Q|_{T}(-\sum_{j=1}^{k}x_j) \to Q \to Q|_{C'} \to 0, \\
0 \to S|_{C'}(-\sum_{j=1}^{k}x_j) \to S \to S|_{T} \to 0, \end{align} and the isomorphisms (\ref{isomS}) show that $\pi_{\ast}Q$ has torsion at $x_j$ with length one and $R^1 \pi_{\ast}S=0$. Therefore applying $\pi_{\ast}$ to (\ref{asseq}) yields the exact sequence, \begin{align*} 0 \to \pi_{\ast}S \to \mathcal{O}_{C'}^{\oplus n} \stackrel{q'}{\to}
\pi_{\ast}Q \to 0. \end{align*} It is easy to see that $q'$ is an $\epsilon$-stable quotient with $\epsilon=1$, and the map $f \mapsto q'$ gives the desired morphism $c'$. An argument similar to Lemma~\ref{lem:surj} shows that the morphism $c'$ is surjective. \end{proof} \subsection{Wall-crossing formula of virtual fundamental classes} In~\cite[Theorem~3, Theorem~4]{MOP}, the virtual fundamental classes on moduli spaces of stable maps and those of MOP-stable quotients are compared. Such a comparison result also holds for $\epsilon$-stable quotients. Note that the arguments in Subsections~\ref{subsec:Pro}, \ref{subsec:Mor} yield the following diagram: \begin{align*} \xymatrix{ & \ar[dl]_{\mathop{\rm ev}\nolimits_i} \overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d) \ar[r]^{\iota^{\epsilon}} & \overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(1,\dbinom{n}{r} ), d) \ar[dd] _{c_{\epsilon, \epsilon'}}, \\ \mathbb{G}(r, n) & & \\ & \ar[ul]^{\mathop{\rm ev}\nolimits_i} \overline{Q}_{g, m}^{\epsilon'}(\mathbb{G}(r, n), d) \ar[r]^{\iota^{\epsilon'}} & \overline{Q}_{g, m}^{\epsilon'}(\mathbb{G}(1, \dbinom{n}{r}), d). } \end{align*} The following theorem, which is a refinement of~\cite[Theorem~4]{MOP}, is interpreted as a wall-crossing formula of GW type invariants. The proof will be given in Section~\ref{sec:cycle}. \begin{thm}\label{thm:wcf} Take $\epsilon \ge \epsilon'>0$ satisfying $2g-2+\epsilon' \cdot d>0$. We have the formula, \begin{align}\label{WCF} c_{\epsilon, \epsilon' \ast}\iota^{\epsilon}_{\ast}
[\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)]^{\rm vir} = \iota^{\epsilon'}_{\ast}
[\overline{Q}_{g, m}^{\epsilon'}(\mathbb{G}(r, n), d)]^{\rm vir}. \end{align} In particular for classes $\gamma_i \in A_{\mathop{\rm GL}\nolimits_n(\mathbb{C})}^{\ast}(\mathbb{G}(r, n), \mathbb{Q})$, the following holds, \begin{align}\notag & c_{\epsilon, \epsilon' \ast}\iota^{\epsilon}_{\ast}\left( \prod_{i=1}^{m}\mathop{\rm ev}\nolimits_i^{\ast}(\gamma_i) \cap
[\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)]^{\rm vir} \right) = \\ \label{WCF2} & \qquad \qquad \qquad \qquad \qquad \iota^{\epsilon'}_{\ast}\left( \prod_{i=1}^{m}\mathop{\rm ev}\nolimits_i^{\ast}(\gamma_i) \cap
[\overline{Q}_{g, m}^{\epsilon'}(\mathbb{G}(r, n), d)]^{\rm vir} \right). \end{align} \end{thm} \begin{rmk} The formula (\ref{WCF}) in particular implies the formula, \begin{align}\label{WCF3} c_{\epsilon, \epsilon' \ast}
[\overline{Q}_{g, m}^{\epsilon}(\mathbb{P}^{n-1}, d)]^{\rm vir} = [\overline{Q}_{g, m}^{\epsilon'}(\mathbb{P}^{n-1}, d)]^{\rm vir}. \end{align} Here the morphism $c_{\epsilon, \epsilon'}$ is given by (\ref{mor:c}). Applying the formula (\ref{WCF3}) to the diagram (\ref{mor:seq}) repeatedly, we obtain the following formula, \begin{align*} c_{\ast}
[\overline{M}_{g, m}(\mathbb{P}^{n-1}, d)]^{\rm vir} = [\overline{Q}_{g, m}(\mathbb{P}^{n-1}, d)]^{\rm vir}, \end{align*} where $c$ is the morphism (\ref{mor:cc}); this reconstructs the result of~\cite[Theorem~3]{MOP}. \end{rmk} \section{Type $(1, 1, d)$-quotients}\label{sec:ex} In this section, we investigate the moduli spaces of
$\epsilon$-stable quotients of type $(1, 1, d)$ and relevant wall-crossing phenomena. \subsection{Relation to Hassett's weighted pointed stable curves}\label{subsec:Hassett} Here we see that $\epsilon$-stable quotients of type $(1, 1, d)$ are closely related to Hassett's weighted pointed stable curves~\cite{BH}, which we recall here. Let us take a sequence, \begin{align*} a=(a_1, a_2, \cdots, a_m)\in (0, 1]^{m}. \end{align*} \begin{defi}\label{def:weighted}\emph{ A tuple $(C, p_1, \cdots, p_m)$ of a nodal curve $C$ and (possibly non-distinct) marked points $p_i \in C^{ns}$ is called \textit{$a$-stable} if the following conditions hold. } \begin{itemize} \item \emph{The $\mathbb{R}$-divisor $K_C +\sum_{i=1}^{m}a_i p_i$ is ample. } \item \emph{For any $p\in C$, we have $\sum_{p_i=p}a_i \le 1$. } \end{itemize} \end{defi} Note that setting $a_i=1$ for all $i$ yields the usual $m$-pointed stable curves. The moduli space of genus $g$, $m$-pointed $a$-stable curves is constructed in~\cite{BH} as a proper smooth Deligne-Mumford stack over $\mathbb{C}$. Among these weights, we only use the following one for $\epsilon \in (0, 1]$, \begin{align}\label{amd} a(m, d, \epsilon)\cneq (\displaystyle\overbrace{1, \cdots, 1}^{m}, \displaystyle\overbrace{\epsilon, \cdots, \epsilon}^{d}). \end{align} The moduli space of genus $g$, $m+d$-pointed $a(m, d, \epsilon)$-stable curves is denoted by \begin{align}\label{denoted}
\overline{M}_{g, m|d}^{\epsilon}. \end{align} If $m=0$, we simply write (\ref{denoted}) as $\overline{M}_{g, d}^{\epsilon}$. For $\epsilon \ge \epsilon'$, there is a natural birational contraction~\cite[Theorem~4.3]{BH}, \begin{align}\label{nat:bir} c_{\epsilon, \epsilon'}\colon
\overline{M}_{g, m|d}^{\epsilon} \to
\overline{M}_{g, m|d}^{\epsilon'}. \end{align} Now we describe the moduli spaces of $\epsilon$-stable quotients of type $(1, 1, d)$, and relevant wall-crossing phenomena. In what follows, we set \begin{align*} \mathop{\rm pt}\nolimits \cneq \mathbb{P}^{0}=\mathbb{G}(1, 1) \cong \mathop{\rm Spec}\nolimits \mathbb{C}. \end{align*} We have the following proposition. (See~\cite[Proposition~3]{MOP} for the corresponding result for MOP-stable quotients.) \begin{prop}\label{prop:MOP} We have the isomorphism, \begin{align}\label{isom:phi} \phi \colon
\overline{M}_{g, m|d}^{\epsilon}/S_d \stackrel{\sim}{\to} \overline{Q}_{g, m}^{\epsilon}(\mathop{\rm pt}\nolimits, d), \end{align} where the symmetric group $S_d$ acts by permuting the last $d$-marked points. \end{prop} \begin{proof} Take a genus $g$, $m+d$-pointed $a(m, d, \epsilon)$-stable curve, \begin{align*} (C, p_1, \cdots, p_m, \widehat{p}_1, \cdots, \widehat{p}_d). \end{align*} We associate the genus $g$, $m$-pointed quasi-stable quotient of type $(1, 1, d)$ by the exact sequence, \begin{align*} 0 \to \mathcal{O}_C(-\sum_{j=1}^{d}\widehat{p}_j) \to \mathcal{O}_C \stackrel{q}{\to} Q \to 0, \end{align*} with $m$-marked points $p_1, \cdots, p_m$. The $a(m, d, \epsilon)$-stability immediately implies the $\epsilon$-stability for the quotient $q$. The map $(C, p_{\bullet}, \widehat{p}_{\bullet}) \mapsto q$ is $S_d$-equivariant, hence we obtain the map $\phi$. It is straightforward to check that $\phi$ is an isomorphism. \end{proof} \begin{rmk}\label{rmk:coin} The morphism (\ref{nat:bir}) is $S_d$-equivariant, hence it determines a morphism, \begin{align*}
c_{\epsilon, \epsilon'} \colon \overline{M}^{\epsilon}_{g, m|d}/S_d
\to \overline{M}^{\epsilon'}_{g, m|d}/S_d. \end{align*} It is easy to see that the above morphism coincides with (\ref{mor:c}) under the isomorphism (\ref{isom:phi}). \end{rmk} \subsection{The case of $(g, m)=(0, 0)$} Here we investigate $\epsilon$-stable quotients of type $(1, 1, d)$ with $(g, m)=(0, 0)$. First we take $d$ to be an odd integer with $d=2d'+1$, $d'\ge 1$. We take $\epsilon_{\bullet}$ as in (\ref{d:odd}). Applying the morphism (\ref{nat:bir}) repeatedly, we obtain the sequence of birational morphisms, \begin{align}\label{seq:bir} \overline{M}_{0, d}=\overline{M}_{0, d}^{\epsilon_{d'+1}=1} \to \overline{M}_{0, d}^{\epsilon_{d'}} \to \cdots \to \overline{M}_{0, d}^{\epsilon_3} \to \overline{M}_{0, d}^{\epsilon_2 =1/d'}.\end{align} It is easy to see that $\overline{M}_{0, d}^{1/d'}$ is the moduli space of configurations of $d$ points in $\mathbb{P}^1$ in which at most $d'$ points coincide. This space is well known to be isomorphic to the GIT quotient~\cite{Mum}, \begin{align}\label{GIT} \overline{M}_{0, d}^{1/d'} \cong (\mathbb{P}^1)^d /\hspace{-.3em}/ \mathop{\rm SL}\nolimits_2(\mathbb{C}). \end{align} Here $\mathop{\rm SL}\nolimits_{2}(\mathbb{C})$ acts on $(\mathbb{P}^1)^d$ diagonally, and we take the linearization on $\mathcal{O}(\overbrace{1, \cdots, 1}^{d})$ induced by the standard linearization on $\mathcal{O}_{\mathbb{P}^1}(1)$. Since the sequence (\ref{seq:bir}) is $S_d$-equivariant, taking the quotients of (\ref{seq:bir}) and combining with the isomorphism (\ref{isom:phi}) yields the sequence of birational morphisms, \begin{align}\notag &\overline{Q}_{0, 0}^{\epsilon_{d'+1}=1} (\mathop{\rm pt}\nolimits, d) \to \overline{Q}_{0, 0}^{\epsilon_{d'}} (\mathop{\rm pt}\nolimits, d) \to \cdots \\ \label{seq:Q}
& \qquad \qquad \cdots \to \overline{Q}_{0, 0}^{\epsilon_{3}} (\mathop{\rm pt}\nolimits, d) \to \overline{Q}_{0, 0}^{\epsilon_{2}=1/d'} (\mathop{\rm pt}\nolimits, d) \cong \mathbb{P}^d /\hspace{-.3em}/ \mathop{\rm SL}\nolimits_2(\mathbb{C}). \end{align} Here the last isomorphism is obtained by taking the quotient of (\ref{GIT}) by the $S_d$-action. By Remark~\ref{rmk:coin},
each morphism in (\ref{seq:Q}) coincides with the morphism (\ref{mor:c}). Recently Kiem-Moon~\cite{KIMO} showed that
each birational morphism in the sequence (\ref{seq:bir}) is a blow-up along a union of transversal smooth subvarieties of the same dimension. As pointed out in~\cite[Remark~4.5]{KIMO}, the sequence (\ref{seq:Q}) is a sequence of weighted blow-ups from $\mathbb{P}^d /\hspace{-.3em}/ \mathop{\rm SL}\nolimits_2(\mathbb{C})$.
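As a concrete illustration (ours), take $d=5$, so that $d'=2$. Then (\ref{d:odd}) gives $\epsilon_1=2/5$, $\epsilon_2=1/2$ and $\epsilon_3=1$, and the sequence (\ref{seq:Q}) consists of a single weighted blow-up, \begin{align*} \overline{Q}_{0, 0}^{\epsilon_3=1}(\mathop{\rm pt}\nolimits, 5) \to \overline{Q}_{0, 0}^{\epsilon_2=1/2}(\mathop{\rm pt}\nolimits, 5) \cong \mathbb{P}^5 /\hspace{-.3em}/ \mathop{\rm SL}\nolimits_2(\mathbb{C}), \end{align*} while $\overline{Q}_{0, 0}^{\epsilon}(\mathop{\rm pt}\nolimits, 5)=\emptyset$ for $0<\epsilon \le 2/5$ by Lemma~\ref{lem:empty}.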
When $d$ is even with $d=2d'$, let us take $\epsilon_{\bullet}$ as in (\ref{d:even}). We also have a sequence similar to (\ref{seq:bir}), \begin{align}\notag \overline{M}_{0, d}=\overline{M}_{0, d}^{\epsilon_{d'}=1} \to \overline{M}_{0, d}^{\epsilon_{d'-1}} \to \cdots \to \overline{M}_{0, d}^{\epsilon_3} \to \overline{M}_{0, d}^{\epsilon_2 =1/(d'-1)}, \end{align} which is a sequence of blow-ups~\cite{KIMO}. In this case, instead of the isomorphism (\ref{GIT}), there is a birational morphism (cf.~\cite[Theorem~1.1]{KIMO}), \begin{align*} \overline{M}_{0, d}^{1/(d'-1)} \to (\mathbb{P}^1)^{d}/\hspace{-.3em}/ \mathop{\rm SL}\nolimits_2(\mathbb{C}), \end{align*} obtained by the blow-up along the singular locus which consists of $\frac{1}{2}{d \atopwithdelims() d'}$ points in the RHS. As mentioned in~\cite{KIMO}, $\overline{M}_{0, d}^{1/(d'-1)}$ is Kirwan's partial desingularization~\cite{FCK} of the GIT quotient $(\mathbb{P}^1)^{d}/\hspace{-.3em}/ \mathop{\rm SL}\nolimits_2(\mathbb{C})$. By taking the quotients with respect to the $S_d$-actions, we obtain a sequence similar to (\ref{seq:Q}), \begin{align}\notag &\overline{Q}_{0, 0}^{\epsilon_{d'}=1} (\mathop{\rm pt}\nolimits, d) \to \overline{Q}_{0, 0}^{\epsilon_{d'-1}} (\mathop{\rm pt}\nolimits, d) \to \cdots \\ \label{seq:Q2}
& \qquad \qquad \qquad \cdots \to \overline{Q}_{0, 0}^{\epsilon_{2}=1/(d'-1)}(\mathop{\rm pt}\nolimits, d) \to \mathbb{P}^d /\hspace{-.3em}/ \mathop{\rm SL}\nolimits_2(\mathbb{C}), \end{align} which is again a sequence of weighted blow-ups. Finally Theorem~\ref{thm:cross} yields that \begin{align*} \overline{Q}_{0, 0}^{\epsilon}(\mathop{\rm pt}\nolimits, d) =\emptyset, \quad \epsilon>1 \mbox{ or }d=1. \end{align*} As a summary, we obtain the following. \begin{thm} The moduli space $\overline{Q}_{0, 0}^{\epsilon}(\mathop{\rm pt}\nolimits, d)$ is either empty or obtained by a sequence of weighted blow-ups starting from the GIT quotient $\mathbb{P}^{d}/\hspace{-.3em}/ \mathop{\rm SL}\nolimits_2(\mathbb{C})$. \end{thm} \subsection{The case of $(g, m)=(0, 1), (0, 2)$} In this subsection, we study moduli spaces of genus zero, $1$ or $2$-pointed $\epsilon$-stable quotients of type $(1, 1, d)$. Note that for small $\epsilon$, we have \begin{align*} \overline{Q}_{0, 1}^{\epsilon}(\mathop{\rm pt}\nolimits, d)=\emptyset, \quad 0<\epsilon \le 1/d. \end{align*} The first interesting situation happens at $\epsilon=1/(d-1)$ and $d\ge 2$. For an object \begin{align*} (C, p, \widehat{p}_1, \cdots, \widehat{p}_d) \in
\overline{M}^{1/(d-1)}_{0, 1|d}, \end{align*} applying Lemma~\ref{lem:ob} immediately shows that $C\cong \mathbb{P}^1$. We may assume that $p=\infty \in \mathbb{P}^1$, hence $\widehat{p}_i \in \mathbb{A}^1$. The stability condition is equivalent to the condition that at least two of the points $\widehat{p}_1, \cdots, \widehat{p}_d$ are distinct. Let $\Delta$ be the small diagonal,
Noting that the subgroup of automorphisms of $\mathbb{P}^1$ preserving $p\in \mathbb{P}^1$ is $\mathbb{A}^1 \rtimes \mathbb{G}_m$, we have \begin{align*}
\overline{M}^{1/(d-1)}_{0, 1|d} &\cong (\mathbb{A}^{d}\setminus \Delta)/\mathbb{A}^1\rtimes \mathbb{G}_m \\ &\cong \mathbb{P}^{d-2}. \end{align*} By Proposition~\ref{prop:MOP}, we obtain \begin{align}\label{gm01} \overline{Q}_{0, 1}^{1/(d-1)}(\mathop{\rm pt}\nolimits, d) \cong \mathbb{P}^{d-2}/S_d. \end{align} In particular for each $\epsilon \in \mathbb{R}_{>0}$, the moduli space
$\overline{Q}_{0, 1}^{\epsilon}(\mathop{\rm pt}\nolimits, d)$ is either empty or admits a birational morphism to $\mathbb{P}^{d-2}/S_d$.
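Note that the isomorphism (\ref{gm01}) is consistent with Lemma~\ref{lem:nonsing} (this check is ours): for type $(1, 1, d)$ quotients with $(g, m)=(0, 1)$, the expected dimension is \begin{align*} 1\cdot d+1\cdot(1-1)+1-3=d-2=\dim \mathbb{P}^{d-2}/S_d. \end{align*}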
Next we look at the case of $(g, m)=(0, 2)$. An $\epsilon$-stable quotient is a MOP-stable quotient for $0<\epsilon \le 1/d$, and in this case the moduli space is described in~\cite[Section~4]{MOP}. In fact for any MOP-stable quotient $\mathcal{O}_C^{\oplus n} \stackrel{q}{\twoheadrightarrow}Q$, the curve $C$ is a chain of rational curves, and the two marked points lie on distinct rational tails if $C$ is not irreducible. If $k$ is the number of irreducible components of $C$, then giving a MOP-stable quotient is equivalent to giving a partition $d_1+\cdots +d_k=d$ and a degree $d_j$ divisor on the $j$-th irreducible component, up to rotations. Therefore we have (set theoretically) \begin{align}\label{set} \overline{Q}_{0, 2}(\mathop{\rm pt}\nolimits, d) =\coprod_{\begin{subarray}{c}k\ge 1 \\ d_1+\cdots +d_k=d \end{subarray}} \prod_{j=1}^{k}\mathop{\rm Sym}\nolimits ^{d_j}(\mathbb{C}^{\ast})/\mathbb{C}^{\ast}. \end{align} For $1/d<\epsilon \le 1/(d-1)$, a MOP-stable quotient $\mathcal{O}_C^{\oplus n} \stackrel{q}{\twoheadrightarrow}Q$ is not $\epsilon$-stable if and only if $C\cong \mathbb{P}^1$ and the support of $\tau(Q)$ consists of one point. Such stable quotients correspond to exactly one point in the RHS of (\ref{set}). Noting the isomorphism (\ref{gm01}), the Cartesian diagram (\ref{dig:Car}) is described as follows, \begin{align*}
\xymatrix{
\mathbb{P}^{d-2}/S_d
\ar[r]\ar[d] &
\overline{Q}_{0, 2}^{\epsilon}(\mathop{\rm pt}\nolimits, d)
\ar[d], \\
\mathop{\rm Spec}\nolimits \mathbb{C}
\ar[r] &
\overline{Q}_{0, 2}(\mathop{\rm pt}\nolimits, d). }
\end{align*}
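For example (the computation is ours), for $d=2$ the decomposition (\ref{set}) reads \begin{align*} \overline{Q}_{0, 2}(\mathop{\rm pt}\nolimits, 2)=\mathop{\rm Sym}\nolimits^{2}(\mathbb{C}^{\ast})/\mathbb{C}^{\ast} \, \amalg \, (\mathbb{C}^{\ast}/\mathbb{C}^{\ast})\times(\mathbb{C}^{\ast}/\mathbb{C}^{\ast}), \end{align*} corresponding to the partitions $2=2$ and $2=1+1$; the second stratum is a single point, and the point removed when passing to $1/2<\epsilon \le 1$ is the point of the first stratum where the two divisor points coincide.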
\section{Proof of Theorem~\ref{thm:rep}}\label{sec:proof} In this section, we give a proof of Theorem~\ref{thm:rep}. We first show that $\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)$ is a Deligne-Mumford stack of finite type over $\mathbb{C}$, following the argument of~\cite{MOP}, \cite{BH}. Next we show the properness of $\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)$ using the valuative criterion. The argument showing the properness of MOP-stable quotients in~\cite[Section~6]{MOP} does not
apply to $\epsilon$-stable quotients. Instead we give an alternative argument, which also gives another proof of~\cite[Theorem~1]{MOP}. \subsection{Construction of the moduli space} The same arguments as in Proposition~\ref{prop:wall} and Proposition~\ref{wall2} show a similar result for the 2-functors (\ref{2func}). For $\epsilon >1$, the moduli space of $\epsilon$-stable quotients is either empty or isomorphic to the moduli space of stable maps to the Grassmannian. Therefore we assume that \begin{align*} \epsilon=\frac{1}{l}, \quad l=1, 2, \cdots, d, \end{align*} and construct the moduli space $\overline{Q}_{g, m}^{1/l}(\mathbb{G}(r, n), d)$ as a global quotient stack. If $\epsilon=1/d$, then the moduli space coincides with that of MOP-stable quotients (cf.~Theorem~\ref{thm:cross}),
and the construction is given in~\cite[Section~6]{MOP}. We need to slightly modify the argument to construct the moduli spaces for a general $\epsilon$, but the essential idea is the same. First we show the following lemma. \begin{lem}\label{veryample} Take an $\epsilon=1/l$-stable quotient $\mathcal{O}_C^{\oplus n}\stackrel{q}{\twoheadrightarrow}Q$ and an integer $k\ge 5$. Then the line bundle $\mathcal{L}(q, 1/l)^{\otimes lk}$ is very ample. Here $\mathcal{L}(q, 1/l)$ is defined in (\ref{def:L}). \end{lem} \begin{proof} It is enough to show that for $x_1, x_2 \in C$, we have \begin{align}\label{x1x2} H^1(C, \mathcal{L}(q, 1/l)^{\otimes lk}\otimes I_{x_1}I_{x_2})=0. \end{align} Here $I_{x_i}$ is the ideal sheaf of $x_i$. By the Serre duality, (\ref{x1x2}) is equivalent to \begin{align}\label{xx12} \mathop{\rm Hom}\nolimits(I_{x_1}I_{x_2}, \omega_{C}\otimes \mathcal{L}(q, 1/l)^{\otimes (-lk)})=0. \end{align} Suppose that $x_1, x_2 \in C^{ns}$. For an irreducible component $P\subset C$,
we set $d_P=\deg(Q|_{P})$. In the notation of Lemma~\ref{lem:ob}, we have \begin{align}\notag
&\deg(\omega_C(x_1+x_2)\otimes \mathcal{L}(q, 1/l)^{\otimes (-lk)}|_{P}) \\ \notag &\le 2g(P)-2+s(P)+2-lk(2g(P)-2+s(P)+d_{P}/l) \\ \label{deg:ineq} &=(2g(P)-2+s(P))(1-lk)+2-d_{P}k. \end{align} If \begin{align*} 2g(P)-2+s(P)>0, \end{align*} then (\ref{deg:ineq}) is obviously negative. Otherwise $(g(P), s(P))$ is one of the following, \begin{align*} (g(P), s(P))=(1, 0), (0, 2), (0, 1), (0, 0). \end{align*} In these cases, (\ref{deg:ineq}) is negative by Lemma~\ref{lem:ob}. Therefore (\ref{xx12}) holds.
When $x_1$ or $x_2$ or both of them are nodes, for instance when $x_1$ is a node and $x_2 \in C^{ns}$, we take the normalization at $x_1$, \begin{align*} \pi \colon \widetilde{C}\to C, \end{align*} with $\pi^{-1}(x_1)=\{x_1', x_1''\}$. Then (\ref{xx12}) is equivalent to \begin{align}\label{xxx12} H^0(\widetilde{C}, \omega_{\widetilde{C}}(x_1'+x_1''+x_2)\otimes \mathcal{L}(q, 1/l)^{\otimes (-lk)})=0, \end{align} and the same calculation as above shows (\ref{xxx12}). The other cases are discussed similarly. \end{proof} By Lemma~\ref{veryample}, we have \begin{align}\label{notdepend} h^{0}(C, \mathcal{L}(q, 1/l)^{\otimes kl}) =1-g+kl(2g-2+m)+kd, \end{align} which does not depend on a choice of $1/l$-stable quotient of type $(r, n, d)$. Let $V$ be a $\mathbb{C}$-vector space of dimension (\ref{notdepend}). The very ample line bundle $\mathcal{L}(q, 1/l)^{\otimes kl}$ on $C$ determines an embedding, \begin{align*} C\hookrightarrow \mathbb{P}(V), \end{align*} and marked points determine points in $\mathbb{P}(V)$. Therefore a $1/l$-stable quotient determines a point, \begin{align} (C, p_1, \cdots, p_m) \in \mathop{\rm Hilb}\nolimits(\mathbb{P}(V)) \times \mathbb{P}(V)^{\times m}. \end{align} Let \begin{align*} \mathcal{H} \subset \mathop{\rm Hilb}\nolimits(\mathbb{P}(V))\times \mathbb{P}(V)^{\times m} \end{align*} be the locally closed subscheme which parameterizes $(C, p_1, \cdots, p_m)$ satisfying the following. \begin{itemize} \item The subscheme $C\subset \mathbb{P}(V)$ is a connected nodal curve of genus $g$. \item We have $p_i \in C^{ns}$ and $p_i \neq p_j$ for $i\neq j$. \end{itemize} Let $\pi \colon \mathcal{C} \to \mathcal{H}$ be the universal curve and \begin{align*} \mathop{\rm Quot}\nolimits(n-r, d) \to \mathcal{H} \end{align*} the relative Quot scheme which parameterizes rank $n-r$, degree $d$ quotients $\mathcal{O}_C^{\oplus n} \twoheadrightarrow Q$ on the fibers of $\pi$. We define \begin{align*} \mathcal{Q} \subset \mathop{\rm Quot}\nolimits(n-r, d), \end{align*} to be the locally closed subscheme corresponding to quotients $\mathcal{O}_C^{\oplus n} \stackrel{q}{\twoheadrightarrow} Q$ satisfying the following. \begin{itemize} \item The coherent sheaf $Q$ is locally free near nodes and $p_i$. \item For any $p\in C$, we have $\mathop{\rm length}\nolimits \tau(Q)_{p} \le l$. \item The line bundle $\mathcal{L}(q, 1/l)^{\otimes lk}$
coincides with $\mathcal{O}_{\mathbb{P}(V)}(1)|_{C}$. \end{itemize} The natural $\mathop{\rm PGL}\nolimits(V)$-action on $\mathcal{H}$ lifts to an action on $\mathcal{Q}$, and the desired moduli space is the following quotient stack, \begin{align*} \overline{Q}_{g, m}^{1/l}(\mathbb{G}(r, n), d) =[\mathcal{Q}/\mathop{\rm PGL}\nolimits(V)]. \end{align*} By Lemma~\ref{lem:aut}, the stabilizer groups of closed points in $\overline{Q}_{g, m}^{1/l}(\mathbb{G}(r, n), d)$ are finite. Hence this is a Deligne-Mumford stack of finite type over $\mathbb{C}$. \subsection{Valuative criterion} In this subsection, we prove the properness of the moduli stack $\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)$. Before this, we introduce some notation. Let $X$ be a variety and $F$ a locally free sheaf of rank $r$ on $X$. For $n\ge r$ and a morphism, \begin{align*} s\colon \mathcal{O}_X^{\oplus n} \to F, \end{align*} we associate the degenerate locus, \begin{align*} Z(s)\subset X. \end{align*} Namely $Z(s)$ is defined by the ideal locally generated by $r\times r$-minors of the matrix given by $s$. For a point $g\in \mathbb{G}(r, n)$, let us choose a lift of $g$ to an embedding \begin{align}\label{lift} g\colon \mathbb{C}^{r} \hookrightarrow \mathbb{C}^n. \end{align} Here by abuse of notation, we have also denoted the above embedding by $g$. We have the sequence, \begin{align*}s_g \colon \mathcal{O}_{X}^{\oplus r}\stackrel{g}{\hookrightarrow} \mathcal{O}_X^{\oplus n} \stackrel{s}{\to} F. \end{align*} The morphism $s_g$ is determined by $g\in \mathbb{G}(r, n)$ up to the $\mathop{\rm GL}\nolimits_{r}(\mathbb{C})$-action on $\mathcal{O}_X^{\oplus r}$. Note that if $s_g$ is injective, then $Z(s_g)$ is a divisor on $X$ which does not depend on a choice of a lift (\ref{lift}). The divisor $Z(s_g)$ fits into the exact sequence, \begin{align*} 0 \to \bigwedge^{r}F^{\vee} \to \mathcal{O}_X \to \mathcal{O}_{Z(s_g)} \to 0. \end{align*} When $X=\mathbb{G}(n-r, n)$ and $s$ is a universal rank $r$ quotient, the morphism $s_g$ is injective and $H_g \cneq Z(s_g)$ is a divisor in $\mathbb{G}(n-r, n)$. \begin{lem}\label{lem:div} Let $\mathcal{O}_C^{\oplus n}\stackrel{q}{\twoheadrightarrow} Q$ be an $\epsilon$-stable quotient with kernel $S$ and marked points $p_1, \cdots, p_m$. Let $s\colon \mathcal{O}_{C}^{\oplus n}\to S^{\vee}$ be the dual of the inclusion $S\hookrightarrow \mathcal{O}_C^{\oplus n}$. Then for a general choice of $g\in \mathbb{G}(r, n)$, the degenerate locus $Z(s_g)\subset C$ is a divisor written as \begin{align*} Z(s_g)=Z(s)+D_g. \end{align*} Here $D_g$ is a reduced divisor on $C$ satisfying \begin{align*} D_g \cap \{Z(s)\cup \{p_1, \cdots, p_m\} \}=\emptyset. \end{align*} \end{lem} \begin{proof} Let $F\subset S^{\vee}$ be the image of $s$. Note that $F$ is a locally free sheaf of rank $r$, hence it determines a map, \begin{align*} \pi_{F}\colon C \to \mathbb{G}(n-r, n). \end{align*} It is easy to see that a general $g\in \mathbb{G}(r, n)$ satisfies the following. \begin{itemize} \item The divisor $H_g \subset \mathbb{G}(n-r, n)$ intersects the image of $\pi_{F}$ transversally. (Or the intersection is empty if $\pi_{F}(C)$ is a point.) \item For $p\in \mathop{\rm Supp}\nolimits \tau(Q) \cup \{p_1, \cdots, p_m\}$, we have $\pi_{F}(p)\notin H_g$. \end{itemize} Then we have \begin{align*} Z(s_g)=Z(s)+\pi_{F}^{\ast}H_g, \end{align*} and $D_g \cneq \pi_{F}^{\ast}H_g$ satisfies the desired property.
\end{proof} In the next proposition, we show that the moduli space of $\epsilon$-stable quotients is separated. Let $\Delta$ be a non-singular curve with a closed point $0\in \Delta$. We set \begin{align*} \Delta^{\ast}=\Delta \setminus\{0\}. \end{align*} \begin{prop}\label{qstable} For $i=1, 2$, let $\pi_i \colon \mathcal{X}_i \to \Delta$ be flat families of quasi-stable curves with disjoint sections $p_1^{(i)}, \cdots, p_m^{(i)} \colon \Delta \to \mathcal{X}_i$. Let $q_i \colon \mathcal{O}_{\mathcal{X}_i}^{\oplus n} \twoheadrightarrow \mathcal{Q}_i$ be flat families of $\epsilon$-stable quotients of type $(r, n, d)$ which are isomorphic over $\Delta^{\ast}$. Then possibly after base change ramified over $0$, there is an isomorphism $\phi \colon \mathcal{X}_1 \stackrel{\sim}{\to}\mathcal{X}_2$ over $\Delta$ and an isomorphism $\psi \colon \phi^{\ast}\mathcal{Q}_2 \stackrel{\sim}{\to} \mathcal{Q}_1$ such that the following diagram commutes, \begin{align*} \xymatrix{ \mathcal{O}_{\mathcal{X}_1}^{\oplus n} \ar[r]^{\phi^{\ast}q_2} \ar[d]_{\textrm{id}} & \phi^{\ast}\mathcal{Q}_2 \ar[d]^{\psi} \\ \mathcal{O}_{\mathcal{X}_1}^{\oplus n} \ar[r]^{q_1} & \mathcal{Q}_1.} \end{align*} \end{prop} \begin{proof} Since the relative Quot scheme is separated, it is enough to show that the isomorphism over $\Delta^{\ast}$ extends to the families of marked curves $\pi_i \colon \mathcal{X}_i \to \Delta$. By taking the base change and the normalization, we may assume that the general fibers of $\pi_i$ are non-singular irreducible curves, by adding the preimage of the nodes to the marked points. Let us take exact sequences,
Since $\mathcal{S}_i|_{\mathcal{X}_{i, t}}$ is locally free for any $t\in \Delta$, where $\mathcal{X}_{i, t}\cneq \pi_{i}^{-1}(t)$, the sheaf $\mathcal{S}_i$ is a locally free sheaf on $\mathcal{X}_i$. Taking the dual, we obtain the morphism, \begin{align*} s_i \colon \mathcal{O}_{\mathcal{X}_i}^{\oplus n}\to \mathcal{S}_i^{\vee}. \end{align*} Let us take a general point $g\in \mathbb{G}(r, n)$ and the degenerate locus, \begin{align*} D_i \cneq Z(s_{i, g})\subset \mathcal{X}_i. \end{align*} By Lemma~\ref{lem:div}, the divisor
$D_{i, t}\cneq D_i|_{\mathcal{X}_{i, t}}$ is written as \begin{align*} D_{i, t}=Z(s_{i, t})+D_{i, t}^{\circ}, \end{align*} where $D_{i, t}^{\circ}$ is a reduced divisor on $\mathcal{X}_{i, t}$, satisfying \begin{align*} D_{i, t}^{\circ} \cap \{ Z(s_{i, t}) \cup \{p_1(t), \cdots, p_m(t)\} \} =\emptyset. \end{align*} Then the $\epsilon$-stability of $\mathcal{O}_{\mathcal{X}_{i, t}}^{\oplus n} \stackrel{q_{i, t}}{\twoheadrightarrow}
\mathcal{Q}_{i}|_{\mathcal{X}_{i, t}}$ implies the following. \begin{itemize} \item The coefficients of the $\mathbb{R}$-divisor $\sum_{j=1}^{m}p_j^{(i)}(t) +\epsilon \cdot D_{i, t}$ are less than or equal to $1$. \item The $\mathbb{R}$-divisor $K_{\mathcal{X}_{i, t}}+\sum_{j=1}^{m} p_j^{(i)}(t) +\epsilon \cdot D_{i, t}$ is ample on $\mathcal{X}_{i, t}$. \end{itemize} The first condition implies that the pairs \begin{align}\label{quot:pair} (\mathcal{X}_i, \sum_{j=1}^{m}p_j^{(i)}+\epsilon \cdot D_i), \quad i=1, 2, \end{align} have only log canonical singularities. (cf.~\cite{KM}, \cite{KMM}.) Also since the divisors $\sum_{j=1}^{m}p_j^{(i)}+\epsilon \cdot D_i$ do not contain curves supported on the central fibers, we have \begin{align*} \phi_{\ast}\left(\sum_{j=1}^{m}p_j^{(1)}+\epsilon \cdot D_1\right) =\sum_{j=1}^{m}p_j^{(2)}+\epsilon \cdot D_2, \end{align*} where $\phi$ is the birational map $\phi \colon \mathcal{X}_1 \dashrightarrow \mathcal{X}_2$. Therefore the pairs (\ref{quot:pair}) are birational log canonical models over $\Delta$. Since two birational log canonical models are isomorphic, the birational map $\phi$ extends to an isomorphism $\phi \colon \mathcal{X}_1 \stackrel{\cong}{\to} \mathcal{X}_2$. \end{proof} Finally we show that the moduli space $\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)$ is complete. \begin{prop} Suppose that the following data is a flat family of $m$-pointed $\epsilon$-stable quotients of type $(r, n, d)$ over $\Delta^{\ast}$, \begin{align}\label{given} \pi^{\ast} \colon \mathcal{X}^{\ast} \to \Delta^{\ast}, \quad p_1^{\ast},
\cdots, p_m^{\ast} \colon \Delta^{\ast} \to \mathcal{X}^{\ast}, \quad q^{\ast}\colon \mathcal{O}_{\mathcal{X}^{\ast}}^{\oplus n} \twoheadrightarrow \mathcal{Q}^{\ast}. \end{align} Then possibly after base change ramified over $0\in \Delta$, there is a flat family of $m$-pointed $\epsilon$-stable quotients over $\Delta$, \begin{align}\label{ext-fami} \pi \colon \mathcal{X} \to \Delta, \quad p_1, \cdots, p_m \colon \Delta \to \mathcal{X}, \quad q\colon \mathcal{O}_{\mathcal{X}}^{\oplus n} \twoheadrightarrow \mathcal{Q}, \end{align} which is isomorphic to (\ref{given}) over $\Delta^{\ast}$. \end{prop} \begin{proof} As in the proof of Proposition~\ref{qstable}, we may assume that
the general fibers of $\pi^{\ast}$ are non-singular irreducible curves. Let $\mathcal{S}^{\ast}$ be the kernel of $q^{\ast}$. Taking the dual of the inclusion $\mathcal{S}^{\ast} \subset \mathcal{O}_{\mathcal{X}^{\ast}}^{\oplus n}$, we obtain the morphism \begin{align*} s^{\ast}\colon \mathcal{O}_{\mathcal{X}^{\ast}}^{\oplus n} \to \mathcal{S}^{\ast \vee}. \end{align*} We choose a general point, \begin{align}\label{choose} g\in \mathbb{G}(r, n), \end{align} and set $D^{\ast}\cneq Z(s^{\ast}_{g}) \subset \mathcal{X}^{\ast}$. As in the proof of Proposition~\ref{qstable}, the $\epsilon$-stability implies that the pair \begin{align}\label{indeed} (\mathcal{X}^{\ast}, \sum_{j=1}^{m}p_{j}^{\ast}+\epsilon \cdot D^{\ast}) \end{align} is a log canonical model over $\Delta^{\ast}$.
Indeed, the family (\ref{indeed}) can be interpreted as a family of Hassett's weighted pointed stable curves~\cite{BH}. Let us write \begin{align*} D^{\ast}=\sum_{j=1}^{k}m_j D_j^{\ast}, \end{align*} for distinct irreducible divisors $D_j^{\ast}$ and $m_j \ge 1$. Since the family (\ref{given}) is of type $(r, n, d)$, we have \begin{align*} m_1+ m_2+ \cdots +m_k=d. \end{align*} By shrinking $\Delta$ if necessary, we may assume that each $D_j^{\ast}$ is a section of $\pi^{\ast}$. Then the data \begin{align}\label{fam:wei} (\pi^{\ast} \colon \mathcal{X}^{\ast}\to \Delta^{\ast}, p_1^{\ast}, \cdots, p_m^{\ast}, \overbrace{D_1^{\ast}, \cdots, D_1^{\ast}}^{m_1}, \cdots, \overbrace{D_k^{\ast}, \cdots, D_k^{\ast}}^{m_k}), \end{align} is a family of $a(m, d, \epsilon)$-stable $m+d$-pointed curves~\cite{BH} over $\Delta^{\ast}$. (See Definition~\ref{def:weighted} and (\ref{amd}).)
By the properness of $\overline{M}_{g, m|d}^{\epsilon}$ (cf.~\cite{BH}, (\ref{denoted})), there is a family of $a(m, d, \epsilon)$-stable $m+d$-pointed curves over $\Delta$,
D|_{\mathcal{X}^{\ast}}=D^{\ast}. \end{align*}
By the properness of the relative Quot scheme, there is an exact sequence \begin{align}\label{Quot-ex} 0 \to \mathcal{S} \to \mathcal{O}_{\mathcal{X}}^{\oplus n}\stackrel{q}{\to} \mathcal{Q} \to 0, \end{align} such that $q$ is isomorphic to $q^{\ast}$ over $\Delta^{\ast}$. Restricting to $\mathcal{X}_0$, we obtain the exact sequence, \begin{align}\label{seq:res} 0 \to \mathcal{S}_0 \to \mathcal{O}_{\mathcal{X}_0}^{\oplus n} \stackrel{q_0}{\to} \mathcal{Q}_0 \to 0. \end{align} We claim that the quotient $q_0$ is an $\epsilon$-stable quotient, hence that the family $(\mathcal{X}, p_1, \cdots, p_m)$ and $q$ give the desired extension (\ref{ext-fami}). We prove the following lemma. \begin{lem} The sheaf $\mathcal{S}$ is a locally free sheaf on $\mathcal{X}$. \end{lem} \begin{proof} First we see that the sheaf $\mathcal{S}$ is reflexive, i.e. $\mathcal{S}^{\vee \vee}\cong \mathcal{S}$. We have the following morphism of exact sequences of sheaves on $\mathcal{X}$, \begin{align*} \xymatrix{0 \ar[r] & \mathcal{S} \ar[r]\ar[d] & \mathcal{O}_{\mathcal{X}}^{\oplus n} \ar[r]\ar[d] & \mathcal{Q} \ar[r]\ar[d] & 0 \\ 0 \ar[r] & \mathcal{S}^{\vee \vee} \ar[r] & \mathcal{O}_{\mathcal{X}}^{\oplus n} \ar[r] & \mathcal{Q}' \ar[r] & 0, } \end{align*} where the left arrow is an injection. By the snake lemma, there is an inclusion, \begin{align*} \mathcal{S}^{\vee \vee}/\mathcal{S} \hookrightarrow \mathcal{Q}, \end{align*} and $\mathcal{S}^{\vee \vee}/\mathcal{S}$ is supported on $\mathcal{X}_0$, which contradicts the flatness of $\mathcal{Q}$ over $\Delta$ unless $\mathcal{S}^{\vee \vee}/\mathcal{S}=0$. In particular, setting \begin{align*} U=\mathcal{X} \setminus (\mbox{nodes of }\mathcal{X}_0), \end{align*} the sheaf $\mathcal{S}$ is the push-forward to $\mathcal{X}$ of a locally free sheaf on $U$. We only need to check that $\mathcal{S}$ is locally free at the nodes of $\mathcal{X}_0$.
Taking the dual of the inclusion $\mathcal{S} \hookrightarrow \mathcal{O}_{\mathcal{X}}^{\oplus n}$ and composing with $g\colon \mathcal{O}_{\mathcal{X}}^{\oplus r} \hookrightarrow \mathcal{O}_{\mathcal{X}}^{\oplus n}$, where $g$ is taken in (\ref{choose}), we obtain a morphism \begin{align*} s_g \colon \mathcal{O}_{\mathcal{X}}^{\oplus r} \stackrel{g}{\hookrightarrow} \mathcal{O}_{\mathcal{X}}^{\oplus n} \to \mathcal{S}^{\vee}. \end{align*} Restricting to $U$, we obtain the divisor in $U$, \begin{align*}
D^{\dag}_{U}\cneq Z(s_g|_{U}) \subset U, \end{align*} and the closure of $D^{\dag}_{U}$ in $\mathcal{X}$ is denoted by $D^{\dag}$. We have the following. \begin{itemize} \item By the construction, we have
$D|_{\mathcal{X}^{\ast}}=D^{\dag}|_{\mathcal{X}^{\ast}}$. \item Replacing $g$ by another general point in $\mathbb{G}(r, n)$ if necessary, the divisors $D^{\dag}$ and $D$ do not contain any irreducible component of $\mathcal{X}_0$. \end{itemize} These properties imply that $D^{\dag}=D$. Noting that the divisor $D$ has support away from nodes of $\mathcal{X}_0$, the support of the cokernel of $s_g$ is written as \begin{align}\label{supp:cok} \mathop{\rm Supp}\nolimits \mathop{\rm Cok}\nolimits(s_g)=\mathop{\rm Supp}\nolimits(D) \amalg V, \end{align} where $V$ is a finite set of points contained in the nodes of $\mathcal{X}_0$. However if $V$ is non-empty, then there is a nodal point $x\in \mathcal{X}_{0}$ and an injection $\mathcal{O}_x \hookrightarrow \mathcal{S}^{\vee}$, which contradicts the fact that $\mathcal{S}^{\vee}$ is torsion free. Therefore $V$ is empty, and the morphism $s_g$ is an isomorphism at the nodes of $\mathcal{X}_0$. Hence $\mathcal{S}^{\vee}$ is a locally free sheaf on $\mathcal{X}_{0}$,
and the sheaf $\mathcal{S}$ is also locally free since $\mathcal{S} \cong \mathcal{S}^{\vee \vee}$. \end{proof} Note that the locally freeness of $\mathcal{S}$ implies that the divisor $Z(s_g)$ is well-defined, and the proof of the above lemma immediately implies that \begin{align}\label{Z=D} Z(s_g)=D^{\dag}=D. \end{align}
Next let us see that $q_0$ is a quasi-stable quotient. Applying $\mathcal{H} om(-, \mathcal{O}_{\mathcal{X}_0})$ to the exact sequence (\ref{seq:res}), we obtain the exact sequence, \begin{align*} 0 \to \mathcal{Q}_0^{\vee} \to \mathcal{O}_{\mathcal{X}_0}^{\oplus n} \stackrel{s_0}{\to} \mathcal{S}_0^{\vee} \to \mathcal{E} xt^1_{\mathcal{X}_0}(\mathcal{Q}_0, \mathcal{O}_{\mathcal{X}_0}) \to 0, \end{align*} and the vanishing $\mathcal{E} xt^{i}_{\mathcal{X}_0}(\mathcal{Q}_0, \mathcal{O}_{\mathcal{X}_0})=0$ for $i\ge 2$. We have the surjection, \begin{align}\label{RHSsur} \mathop{\rm Cok}\nolimits(s_{0, g}) \twoheadrightarrow \mathcal{E} xt^{1}_{\mathcal{X}_0}(\mathcal{Q}_0, \mathcal{O}_{\mathcal{X}_0}), \end{align} and the LHS of (\ref{RHSsur}) has support away from nodes and markings by (\ref{Z=D}). Therefore for a nodal point or marked point $p\in \mathcal{X}_{0}$, we have \begin{align*} \mathcal{E} xt^{i}_{\mathcal{X}_0}(\mathcal{Q}_0, \mathcal{O}_{\mathcal{X}_0})_{p}=0, \quad i\ge 1, \end{align*} which implies that $\mathcal{Q}_0$ is locally free at $p$, i.e. $q_0 \colon \mathcal{O}_{\mathcal{X}_0}^{\oplus n} \twoheadrightarrow \mathcal{Q}_0$ is a quasi-stable quotient.
Finally we check the $\epsilon$-stability of $q_0$. The ampleness of $\mathcal{L}(q_{0}, \epsilon)$ is equivalent to the ampleness of the divisor, \begin{align}\label{eq:div} K_{\mathcal{X}_0}+p_1(0)+\cdots +p_m(0)+\epsilon \cdot Z(s_{g, 0}). \end{align} Noting the equality (\ref{Z=D}), we have
$Z(s_{g, 0})=D|_{\mathcal{X}_0}$. Since the data (\ref{a(mde)}) is a family of $a(m, d, \epsilon)$-stable curves, the divisor (\ref{eq:div}) on $\mathcal{X}_0$ is ample. Also the surjection (\ref{RHSsur})
and the fact $Z(s_{g, 0})=D|_{\mathcal{X}_0}$ imply that \begin{align}\notag \epsilon \cdot \mathop{\rm length}\nolimits \tau(\mathcal{Q}_0)_{p} &\le \epsilon \cdot \mathop{\rm length}\nolimits \mathop{\rm Cok}\nolimits(s_{g, 0})_{p} \\ \notag &= \epsilon \cdot \mathop{\rm length}\nolimits \mathcal{O}_{Z(s_{g, 0}), p} \\ \label{length}
&= \epsilon \cdot \mathop{\rm length}\nolimits \mathcal{O}_{D|_{\mathcal{X}_0}, p}, \end{align} for any $p\in \mathcal{X}_0$. Again noting that (\ref{a(mde)}) is a family of $a(m, d, \epsilon)$-stable curves, we conclude that $(\ref{length}) \le 1$. Therefore $q_0$ is an $\epsilon$-stable quotient. \end{proof}
\section{Wall-crossing formula}\label{sec:cycle} The purpose of this section is to give an argument to prove Theorem~\ref{thm:wcf}. Our strategy is to modify~\cite[Section~7]{MOP} so that $\epsilon$ is involved in the argument. Therefore we only focus on the arguments to be modified, and we leave several details to the reader. \subsection{Localization}\label{subsec:locl} Let $T$ be a torus $T=\mathbb{G}_m^{n}$ acting on $\mathbb{C}^n$ via \begin{align*} (t_1, \cdots, t_n) \cdot (x_1, \cdots, x_n) =(t_1x_1, \cdots, t_n x_n). \end{align*} The above $T$-action induces a $T$-action on $\mathbb{G}(r, n)$ and $\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)$. Over the moduli space of MOP-stable quotients, the $T$-fixed loci are obtained in~\cite[Section~7]{MOP} via certain combinatorial data. The $T$-fixed loci of $\epsilon$-stable quotients are similarly obtained, but we need to take the $\epsilon$-stability into consideration.
They are indexed by the following data, \begin{align}\label{ind:graph} \theta=(\Gamma, \iota, \gamma, s, \beta, \delta, \mu). \end{align} \begin{itemize} \item $\Gamma=(V, E)$ is a connected graph,
where $V$ is the vertex set and $E$ is the edge set with no self-edges. \item $\iota$ is an assignment of an inclusion, \begin{align*} \iota_{v}\colon \{1, \cdots, r\} \to \{1, \cdots, n\}, \end{align*} to each $v\in V$. In particular, the subspace $\mathbb{C}^{r} \hookrightarrow \mathbb{C}^n$ induced by $\iota_{v}$ determines a map, \begin{align*} \nu \colon V \to \mathbb{G}(r, n)^{T}. \end{align*} \item $\gamma$ is a genus assignment $\gamma \colon V\to \mathbb{Z}_{\ge 0}$, satisfying \begin{align*} \sum_{v\in V}\gamma(v)+h^{1}(\Gamma)=g. \end{align*} \item For each $v\in V$, $s(v)=(s_1(v), \cdots, s_r(v))$ with $s_i(v)\in \mathbb{Z}_{\ge 0}$. We set \begin{align*} {\bf s}(v)=\sum_{i=1}^{r}s_i(v). \end{align*} \item $\beta$ is an assignment to each $e\in E$ of a $T$-invariant curve $\beta(e)$ of $\mathbb{G}(r, n)$. The two vertices incident to $e\in E$ are mapped via $\nu$ to the two $T$-fixed points incident to $\beta(e)$. \item $\delta \colon E \to \mathbb{Z}_{\ge 1}$ is an assignment of a covering number, satisfying \begin{align*} \sum_{v\in V}{\bf s}(v)+\sum_{e\in E}\delta(e)=d. \end{align*} \item $\mu$ is a distribution of the $m$ markings to the vertices of $V$. \item For each $v\in V$, we set \begin{align*} w(v)=\min\{0, 2\gamma(v)-2+\epsilon \cdot {\bf s}(v)+\mathop{\rm val}\nolimits(v) \}. \end{align*} Then for each edge $e\in E$ with incident vertices $v_1, v_2 \in V$, we have \begin{align}\label{graph:ample} \epsilon \cdot \delta(e)+w(v_1)+w(v_2)>0. \end{align} \end{itemize} The condition (\ref{graph:ample}) corresponds to the ampleness of (\ref{def:L})
at the irreducible component determined by $e$. Given data $\theta$ as in (\ref{ind:graph}), the isomorphism classes of $T$-fixed $\epsilon$-stable quotients indexed by $\theta$ form a product of the quotients of the moduli spaces of weighted pointed stable curves, \begin{align}\label{wei:poi} Q^{T}(\theta)=
\prod_{v\in V}\left(\overline{M}_{\gamma(v), \mathop{\rm val}\nolimits(v)|{\bf s}(v)}^{\epsilon}/\Pi_{i=1}^{r}S_{s_i(v)}\right). \end{align} Here if $v\in V$ does not satisfy the condition, \begin{align}\label{pos} 2\gamma(v)-2+\epsilon \cdot {\bf s}(v)+\mathop{\rm val}\nolimits(v) >0, \end{align} we set \begin{align*}
\overline{M}_{\gamma(v), \mathop{\rm val}\nolimits(v)|{\bf s}(v)}^{\epsilon}= \left\{ \begin{array}{cc} \mathop{\rm Spec}\nolimits \mathbb{C}, & V \neq \{ v \}, \\ \emptyset, & V=\{v\}. \end{array} \right. \end{align*} The corresponding $T$-fixed $\epsilon$-stable quotients are described in the following way. \begin{itemize} \item For $v\in V$, suppose that the condition (\ref{pos}) holds. A point in the $v$-factor of (\ref{wei:poi}) determines a curve $C_v$ and an $r$-tuple of divisors $D_1, \cdots, D_r$ on it with $\deg(D_i)=s_i(v)$. Then an $\epsilon$-stable quotient is obtained by the exact sequence, \begin{align}\label{tuple} 0 \to \oplus_{i=1}^{r}\mathcal{O}_{C_v}(-D_i) \to \mathcal{O}_{C_v}^{\oplus n} \to Q \to 0. \end{align} Here the first inclusion is the composition of the natural inclusion \begin{align*} \oplus_{i=1}^{r}\mathcal{O}_{C_v}(-D_i) \hookrightarrow \mathcal{O}_{C_v}^{\oplus r}, \end{align*} and the inclusion $\mathcal{O}_{C_v}^{\oplus r} \hookrightarrow \mathcal{O}_{C_v}^{\oplus n}$ induced by $\iota_v$. \item For $e\in E$,
consider the degree $\delta(e)$-covering ramified over the two torus fixed points, \begin{align}\label{f_e} f_e \colon C_e \to \beta (e) \subset \mathbb{G}(r, n). \end{align} Note that $f_e$ is a finite map between projective lines. We obtain the exact sequence, \begin{align}\label{taut} 0\to S \to \mathcal{O}_{C_e}^{\oplus n} \stackrel{q}{\to} Q \to 0, \end{align} hence a quotient $q$,
by pulling back the tautological sequence on $\mathbb{G}(r, n)$ to $C_e$. Let $v$ and $v'$ be the two vertices incident to $e$, and $x, x' \in C_e$ the corresponding ramification points respectively. We have the following cases.
(i) Suppose that both $v$ and $v'$
satisfy (\ref{pos}). Then we take the quotient $q$.
(ii) Suppose that exactly one of $v$ or $v'$, say $v$,
does not satisfy (\ref{pos}). For simplicity, we assume that $\iota_{v}(j)=j$ for $1\le j \le r$, and \begin{align*} \iota_{v'}(j)=j, \quad 1\le j\le r-1, \quad \iota_{v'}(r)=r+1. \end{align*} Then the exact sequence (\ref{taut}) is identified with the sequence, \begin{align}\label{iden:seq} 0 \to \mathcal{O}_{C_e}^{\oplus r-1} \oplus \mathcal{O}_{C_e}(-\delta(e)) \to \mathcal{O}_{C_e}^{\oplus n} \to \mathcal{O}_{C_e}(\delta(e)) \oplus \mathcal{O}_{C_e}^{\oplus n-r-1} \to 0. \end{align} Here the embedding \begin{align*} \mathcal{O}_{C_e}^{\oplus r-1} \oplus \mathcal{O}_{C_e}(-\delta(e)) \subset \mathcal{O}_{C_e}^{\oplus n}, \end{align*} is the composition, \begin{align*} \mathcal{O}_{C_e}^{\oplus r-1} \oplus \mathcal{O}_{C_e}(-\delta(e)) \subset \mathcal{O}_{C_e}^{\oplus r-1}\oplus \mathcal{O}_{C_e}^{\oplus 2} \subset \mathcal{O}_{C_e}^{\oplus n}, \end{align*} where the first embedding is the direct sum of the identity and the pull-back of the tautological embedding via $f_e$, and the second one is the embedding into the first $(r+1)$ factors. Composing the embedding \begin{align*} 0 \to \bigoplus_{i=1}^{r-1} \mathcal{O}_{C_e}(-s_i(v)x) \oplus \mathcal{O}_{C_e}(-s_r(v)x-\delta(e)) \to \mathcal{O}_{C_e}^{\oplus r-1} \oplus \mathcal{O}_{C_e}(-\delta(e)), \end{align*} with the sequence (\ref{iden:seq}), we obtain the exact sequence, \begin{align*} 0 \to \bigoplus_{i=1}^{r-1} \mathcal{O}_{C_e}(-s_i(v)x) \oplus \mathcal{O}_{C_e}(-s_r(v)x-\delta(e)) \to \mathcal{O}_{C_e}^{\oplus n} \stackrel{q'}{\to} Q' \to 0. \end{align*} Then we take the quotient $q'$.
(iii) Suppose that neither $v$ nor $v'$ satisfies (\ref{pos}). Then as above, we take the exact sequence, \begin{align*} 0 \to \bigoplus_{i=1}^{r-1} \mathcal{O}_{C_e}(-s_i(v)x-s_i(v')x') \oplus & \mathcal{O}_{C_e}(-s_r(v)x-s_r(v')x'-\delta(e)) \\ & \to \mathcal{O}_{C_e}^{\oplus n} \stackrel{q''}{\to} Q'' \to 0, \end{align*} and we take the quotient $q''$. \end{itemize} By gluing the above quotients, we obtain a curve $C$ and a quotient from $\mathcal{O}_C^{\oplus n}$. The condition (\ref{graph:ample}) ensures that the resulting quotient is $\epsilon$-stable. \subsection{Virtual localization formula}\label{sub:vir} Let $Q^{T}(\theta)$ be the $T$-fixed locus (\ref{wei:poi}), and $i_{\theta}$ the inclusion, \begin{align*} i_{\theta}\colon Q^{T}(\theta) \hookrightarrow \overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d). \end{align*} We denote by $N^{\mathop{\rm vir}\nolimits}(\theta)$ the virtual normal bundle of $Q^{T}(\theta)$ in $\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)$. The virtual localization formula~\cite{GP} in this case is written as \begin{align}\label{vloc} [\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)]^{\mathop{\rm vir}\nolimits} &=\sum_{\theta}i_{\theta !}\left(\frac{[Q^{T}(\theta)]}{{\bf e} (N^{\mathop{\rm vir}\nolimits}(\theta))}\right) \\ \notag & \in A_{\ast}^{T}(\overline{Q}_{g, m}^{\epsilon} (\mathbb{G}(r, n), d), \mathbb{Q})\otimes_{R} \mathbb{Q}(\lambda_1, \cdots, \lambda_n). \end{align} Here $R$ is the equivariant Chow ring of a point with respect to the trivial $T$-action, \begin{align*} R=\mathbb{Q}[\lambda_1, \cdots, \lambda_n], \end{align*} with $\lambda_i$ the equivariant parameters.
Let $v$ be a vertex in the data (\ref{ind:graph}) which satisfies the condition (\ref{pos}). We describe the contribution of $v$ to the RHS of (\ref{vloc}). For simplicity, we assume that $\iota_v(j)=j$ for $1\le j\le r$. The vertex $v$ corresponds to the space \begin{align*}
\overline{M}_{\gamma(v), \mathop{\rm val}\nolimits(v)|{\bf s}(v)}^{\epsilon}/\Pi_{i=1}^{r} S_{s_i(v)}. \end{align*} Similarly to the sequence (\ref{tuple}), each point on the above space corresponds to an exact sequence, \begin{align*} 0 \to S=\oplus_{i=1}^{r}S_i \to \mathcal{O}_{C}^{\oplus n} \to Q \to 0, \end{align*} for $S_i=\mathcal{O}_{C_v}(-D_i)$ with $\deg(D_i)=s_i(v)$, and $\mathop{\rm val}\nolimits(v)$-marked points. The exact sequence (\ref{tanob}) and the argument of~\cite[Section~7]{MOP} show that the contribution of the vertex $v$ is \begin{align}\label{cont1} \mathop{\rm Cont}\nolimits(v)&= \frac{{\bf e}(\mathbb{E}^{\ast}\otimes T_{\nu(v)})}{{\bf e}(T_{\nu(v)})} \frac{1}{\prod_{e}\frac{\lambda(e)}{\delta(e)}-\psi_e} \\ \label{cont2}
& \quad \frac{1}{\prod_{i\neq j}{\bf e}(H^0(O_C(S_i)|_{S_j})\otimes [\lambda_j-\lambda_i])} \cdot \\ \label{cont3}
& \quad \frac{1}{\prod_{i\neq j^{\ast}}{\bf e}(H^0(O_C(S_i)|_{S_i})\otimes [\lambda_{j^{\ast}}-\lambda_i])}. \end{align} Here each factor is as follows. \begin{itemize} \item The symbol ${\bf e}$ denotes the Euler class, $T_{\nu(v)}$ is the $T$-representation on the tangent space of $\mathbb{G}(r, n)$ at $\nu(v)$, and $\mathbb{E}$ is the Hodge bundle, \begin{align}\label{Hodge}
\mathbb{E} \to \overline{M}_{\gamma(v), \mathop{\rm val}\nolimits(v)|{\bf s}(v)}^{\epsilon}. \end{align} \item The product in the denominator of (\ref{cont1}) is over all half-edges $e$ incident to $v$. The factor $\lambda(e)$ denotes the $T$-weight of the tangent representation along the corresponding $T$-fixed edge, and $\psi_e$ is the first Chern class of the cotangent line at the corresponding marking of
$\overline{M}_{\gamma(v), \mathop{\rm val}\nolimits(v)|{\bf s}(v)}^{\epsilon}$. (See (\ref{1stch}) below.) \item The products in (\ref{cont2}), (\ref{cont3}) satisfy the following conditions, \begin{align*} 1\le i\le r, \quad 1 \le j \le r, \quad r+1 \le j^{\ast} \le n. \end{align*} The brackets $[\lambda_j-\lambda_i]$ denote the trivial bundle with specified weights. \end{itemize}
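For instance, keeping the simplifying assumption $\iota_v(j)=j$ for $1\le j\le r$, the tangent space $T_{\nu(v)}$ is $\mathop{\rm Hom}\nolimits(\mathbb{C}^{r}, \mathbb{C}^{n}/\mathbb{C}^{r})$, whose $T$-weights are $\lambda_{j^{\ast}}-\lambda_i$ for $1\le i\le r$ and $r+1\le j^{\ast}\le n$, so that \begin{align*} {\bf e}(T_{\nu(v)})=\prod_{i=1}^{r}\prod_{j^{\ast}=r+1}^{n} (\lambda_{j^{\ast}}-\lambda_i). \end{align*} We recall this standard computation only to make the index conventions in (\ref{cont1}), (\ref{cont2}), (\ref{cont3}) explicit.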
The same argument used to describe $\mathop{\rm Cont}\nolimits(v)$ above also applies to the contribution of an edge $e$ to the formula (\ref{vloc}). However, we do not need its precise formula; it is enough to notice that \begin{align*} \mathop{\rm Cont}\nolimits(e) \in \mathbb{Q}(\lambda_1, \cdots, \lambda_n). \end{align*} This fact follows easily from the description of the $T$-fixed $\epsilon$-stable quotients in the last subsection. Then the RHS of (\ref{vloc}) is the sum of the products, \begin{align*} \sum_{\theta}i_{\theta !}\left( \prod_{e}\mathop{\rm Cont}\nolimits(e)\prod_{v}\mathop{\rm Cont}\nolimits(v)[Q^{T}(\theta)]\right). \end{align*}
\subsection{Classes on $\overline{M}_{g, m|d}^{\epsilon}$} \label{subsec:Class} As we have seen, each term of the virtual localization formula is a class on the moduli space of weighted
pointed stable curves $\overline{M}_{g, m|d}^{\epsilon}$.
The relevant classes on $\overline{M}_{g, m|d}^{\epsilon}$ for a sufficiently small $\epsilon$ are discussed in~\cite[Section~4]{MOP}. For arbitrary $0<\epsilon \le 1$, similar classes are also available, which we recall here.
For every subset $J\subset \{1, \cdots, d\}$ of size at least $2$, there is a diagonal class, \begin{align*}
D_{J} \in A^{|J|-1}(\overline{M}_{g, m|d}^{\epsilon}, \mathbb{Q}), \end{align*} corresponding to the weighted pointed stable curves \begin{align*} (C, p_1, \cdots, p_m, \widehat{p}_1, \cdots, \widehat{p}_d) \end{align*} satisfying \begin{align*} \widehat{p}_j=\widehat{p}_{j'}, \quad j, j' \in J. \end{align*} Note that $D_J=0$ if $\epsilon \cdot \lvert J \rvert >1$.
Next we have the cotangent line bundles, \begin{align*}
\mathbb{L}_i \to \overline{M}_{g, m|d}^{\epsilon}, \quad
\widehat{\mathbb{L}}_j \to \overline{M}_{g, m|d}^{\epsilon}, \end{align*} for $1\le i\le m$ and $1\le j \le d$, corresponding to the respective markings. We have the associated first Chern classes, \begin{align}\label{1stch} \psi_{i}=c_1(\mathbb{L}_i), \ \widehat{\psi}_{j}=c_1(\widehat{\mathbb{L}}_j) \in
A^1(\overline{M}_{g, m|d}^{\epsilon}, \mathbb{Q}). \end{align} The above classes are related as follows. For a subset $J\subset \{1, \cdots, d\}$, the class \begin{align}\label{depJ}
\widehat{\psi}_{J}\cneq \widehat{\psi}_{j}|_{D_J}, \end{align} does not depend on $j\in J$. If $J$ and $J'$ have a non-trivial intersection, it is easy to see that \begin{align}\label{psiD} D_{J}\cdot D_{J'}=(-\widehat{\psi}_{J\cup J'})^{\lvert J\cap J' \rvert -1} D_{J\cup J'}. \end{align} By the above properties, we obtain the notion of \textit{canonical forms} (cf.~\cite[Section~4]{MOP}) for any monomial $M(\widehat{\psi}_j, D_J)$ of $\widehat{\psi}_j$ and $D_J$. It is obtained as follows. \begin{itemize} \item We multiply the classes $D_J$ using the formula (\ref{psiD}) until we obtain the product of classes $\widehat{\psi}_j$ and $D_{J_1}D_{J_2}\cdots D_{J_l}$ with all $J_i$ disjoint. \item Using (\ref{depJ}), we collect the equal cotangent classes. \end{itemize} By extending the above operation linearly, we obtain the canonical form for any polynomial $P(\widehat{\psi}_j, D_{J})$.
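To illustrate the operation, take $J=\{1, 2\}$ and $J'=\{2, 3\}$. Since $\lvert J\cap J' \rvert=1$, the formula (\ref{psiD}) gives \begin{align*} D_{\{1, 2\}}\cdot D_{\{2, 3\}}=(-\widehat{\psi}_{\{1, 2, 3\}})^{0}D_{\{1, 2, 3\}}=D_{\{1, 2, 3\}}, \end{align*} while for $J=J'$ we obtain $D_{J}^{2}=(-\widehat{\psi}_{J})^{\lvert J \rvert-1}D_{J}$. Iterating such multiplications terminates in a product of cotangent classes and classes $D_{J_1}\cdots D_{J_l}$ with the $J_i$ disjoint.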
\subsection{Standard classes under change of $\epsilon$} For $\epsilon \ge \epsilon'$, recall that there is a birational morphism (cf.~(\ref{nat:bir}), Remark~\ref{rmk:coin}), \begin{align*}
c_{\epsilon, \epsilon'} \colon \overline{M}_{g, m|d}^{\epsilon}
\to \overline{M}_{g, m|d}^{\epsilon'}. \end{align*} For simplicity, we write $c_{\epsilon, \epsilon'}$ as $c$. Then we have the following, \begin{align}\label{form1} c^{\ast}\psi_i &=\psi_i, \quad 1\le i\le m, \\ \label{form2} c^{\ast}\widehat{\psi}_j &=\widehat{\psi}_j -\Delta_j, \quad 1\le j\le d. \end{align} Here $\Delta_{j}$ is given by \begin{align}\label{sum:delta} \Delta_{j}=\sum_{j\in J\subset \{1, \cdots, d\}}\Delta_{J}, \end{align}
where $\Delta_{J} \subset \overline{M}_{g, m|d}^{\epsilon}$ corresponds to curves \begin{align*} C=C_1\cup C_2, \quad g(C_1)=0, \quad g(C_2)=g, \end{align*} with a single node which separates $C_1$ and $C_2$, and the markings of $J$ are distributed to $C_1$. The subsets $J$ in the sum (\ref{sum:delta}) should satisfy \begin{align*} &\epsilon \cdot \lvert J \rvert -1>0, \\ &\epsilon' \cdot \lvert J \rvert -1 \le 0. \end{align*} Applying (\ref{form1}), (\ref{form2}) and the projection formula, we obtain the universal formula, \begin{align}\label{univ:form} c_{\ast}\left(\prod_{i=1}^{m}\psi_i^{m_i}\prod_{j=1}^{d}\widehat{\psi}_j^{n_j}
\right)=\prod_{i=1}^{m}\psi_i^{m_i}
\left(\prod_{j=1}^{d}\widehat{\psi}_j^{n_j}+ \cdots \right). \end{align} If $\epsilon=1$ and $0<\epsilon' \ll 1$, the above formula coincides with the formula obtained in~\cite[Lemma~3]{MOP}.
Also the Hodge bundle (\ref{Hodge}) satisfies \begin{align}\label{sat:Hodge} c^{\ast}\mathbb{E}\cong \mathbb{E}, \end{align} since $c$ contracts only rational tails. \subsection{The case of genus zero} In genus zero, note that the moduli space \begin{align*} \overline{Q}_{0, m}^{\epsilon}(\mathbb{G}(r, n), d) \end{align*} is non-singular by Lemma~\ref{lem:nonsing}.
If it is also connected, then it is irreducible and there is a birational map, \begin{align*} \overline{Q}_{0, m}^{\epsilon_1}(\mathbb{G}(r, n), d) \dashrightarrow \overline{Q}_{0, m}^{\epsilon_2}(\mathbb{G}(r, n), d), \end{align*} as long as $\epsilon_i>(2-m)/d$. In fact, we have the following. \begin{lem}\label{lem:eqconn} The moduli stack \begin{align}\label{eq:conn} \overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d) \end{align} is connected. \end{lem} \begin{proof} The connectedness of the stable map moduli spaces is proved in~\cite{KimPan}, and we reduce the connectedness of (\ref{eq:conn}) to that of the stable map moduli spaces. To do this, it is enough to see that any $\epsilon$-stable quotient $q\colon \mathcal{O}_C^{\oplus n} \twoheadrightarrow Q$ can be deformed to a quotient obtained by a stable map. By applying the $T$-action, we may assume that $q$ is a $T$-fixed quotient. Then $q$ fits into an exact sequence, \begin{align*} 0 \to \oplus_{i=1}^{r}\mathcal{O}_C(-D_i) \to \mathcal{O}_{C}^{\oplus n} \stackrel{q}{\to} Q \to 0, \end{align*} for an $r$-tuple of divisors $D_i$ on $C$. (See Subsection~\ref{subsec:locl}.) By deforming $D_i$ to reduced divisors $D_i'$, we can deform the quotient $q$ to $q' \colon \mathcal{O}_{C}^{\oplus n}\twoheadrightarrow Q'$ which is $\epsilon$-stable for $\epsilon=1$. Then by Lemma~\ref{lem:worth}, we can deform $q'$ to a quotient corresponding to a stable map. \end{proof} The smoothness of the genus zero moduli spaces
and the above lemma show the formula, \begin{align}\label{wcf:0} c_{\epsilon, \epsilon' \ast}\iota_{\ast}^{\epsilon} \left([\overline{Q}_{0, m}^{\epsilon}(\mathbb{G}(r, n), d)]^{\mathop{\rm vir}\nolimits} \right)= \iota_{\ast}^{\epsilon'}\left( [\overline{Q}_{0, m}^{\epsilon'}(\mathbb{G}(r, n), d)]^{\mathop{\rm vir}\nolimits}\right). \end{align} Hence Theorem~\ref{thm:wcf} in the genus zero case is proved. \subsection{Sketch of the proof of Theorem~\ref{thm:wcf}} Under the map to $\overline{Q}_{g, m}^{\epsilon'}(\mathbb{G}(1, \dbinom{n}{r}), d)$, several rational tails on $\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)$ with small degree collapse. Also the $T$-fixed loci of $\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)$ have many splitting types of the subbundle $S$ which are collapsed. For a non-collapsed edge, its contribution exactly coincides, and we just need to show the matching on each vertex.
The equality (\ref{wcf:0}) implies that both sides are equal after $T$-equivariant localization. For each vertex $v$ on $\overline{Q}_{0, m}^{\epsilon}(\mathbb{G}(r, n), d)$, the contribution \begin{align*}
\mathop{\rm Cont}\nolimits(v) \in A_{\ast}^{T}(\overline{M}^{\epsilon}_{0, \mathop{\rm val}\nolimits(v)| {\bf s}(v)}, \mathbb{Q})\otimes_{R}\mathbb{Q}(\lambda_1, \cdots, \lambda_n), \end{align*} is given in Subsection~\ref{sub:vir}. In genus zero the Hodge bundle is trivial, and the class $\mathop{\rm Cont}\nolimits(v)$ is easily seen to be written as an element, \begin{align*} \mathop{\rm Cont}\nolimits(v) \in \mathbb{Q}(\lambda_1, \cdots, \lambda_n)[\psi_i, \widehat{\psi}_j, D_J], \end{align*}
symmetric with respect to the variables $\widehat{\psi}_{j}$. Let us take the push forward to $\overline{Q}_{0, m}^{\epsilon'}(\mathbb{G}(1, \dbinom{n}{r}), d)$ using (\ref{univ:form}), and take the canonical form. (cf.~Subsection~\ref{subsec:Class}.) At each vertex on $\overline{Q}_{0, m}^{\epsilon'}(\mathbb{G}(1, \dbinom{n}{r}), d)$, the vertices and the collapsed edges on $\overline{Q}_{0, m}^{\epsilon}(\mathbb{G}(r, n), d)$ contribute to the LHS of (\ref{wcf:0}) by the polynomial, \begin{align*} L^{C}(\psi_i, \widehat{\psi}_j, D_J). \end{align*} Also the vertices on $\overline{Q}_{0, m}^{\epsilon'}(\mathbb{G}(r, n), d)$ with collapsed splitting types contribute to the RHS of (\ref{wcf:0}) by the polynomial, \begin{align*} R^{C}(\psi_i, \widehat{\psi}_j, D_J). \end{align*} The equality (\ref{wcf:0}) implies the equality, \begin{align}\label{match} L^{C}(\psi_i, \widehat{\psi}_j, D_J)= R^{C}(\psi_i, \widehat{\psi}_j, D_J), \end{align} as \textit{classes} in the equivariant Chow ring.
Although (\ref{match}) is an equality after taking classes, exactly the same argument as in~\cite[Lemma~5]{MOP} shows that the equality (\ref{match}) holds as \textit{abstract polynomials}. Also note that the genus-dependent part involving Hodge bundles (\ref{cont1}) in the virtual localization formula (\ref{vloc}) does not depend on $\epsilon$ by (\ref{sat:Hodge}). Therefore the above argument immediately implies \begin{align}\label{wcf:g} c_{\epsilon, \epsilon' \ast}\iota_{\ast}^{\epsilon} \left([\overline{Q}_{g, m}^{\epsilon}(\mathbb{G}(r, n), d)]^{\mathop{\rm vir}\nolimits} \right)= \iota_{\ast}^{\epsilon'}\left( [\overline{Q}_{g, m}^{\epsilon'}(\mathbb{G}(r, n), d)]^{\mathop{\rm vir}\nolimits}\right), \end{align} for any $g\ge 0$. Hence we obtain the formula (\ref{WCF}).
\section{Invariants on (local) Calabi-Yau 3-folds}\label{sec:enu} In this section, we introduce some enumerative invariants of curves on (local) Calabi-Yau 3-folds and propose related problems. Similar invariants for MOP-stable quotients are discussed in~\cite[Section~9, 10]{MOP}. In what follows, we use the notation (\ref{univ1}), (\ref{univ2}) for universal curves and quotients. \subsection{Invariants on a local $(-1, -1)$-curve} Let us consider a crepant small resolution of a conifold singularity, that is the total space of $\mathcal{O}_{\mathbb{P}^{1}}(-1)^{\oplus 2}$, \begin{align*} X=\mathcal{O}_{\mathbb{P}^1}(-1) \oplus \mathcal{O}_{\mathbb{P}^1}(-1) \to \mathbb{P}^1. \end{align*} In a similar way to~\cite[Section~9]{MOP}, we define the $\mathbb{Q}$-valued invariant by \begin{align}\label{inv1} N_{g, d}^{\epsilon}(X)\cneq \int_{[\overline{Q}_{g, 0}^{\epsilon}(\mathbb{P}^1, d)]^{\mathop{\rm vir}\nolimits}} {\bf e}(R^1 \pi^{\epsilon}_{\ast}(S_{U^{\epsilon}})\oplus R^1 \pi^{\epsilon}_{\ast}(S_{U^{\epsilon}})). \end{align} It is easy to see that \begin{align*}\pi^{\epsilon}_{\ast}(S_{U^{\epsilon}})=0, \end{align*}
hence $R^1 \pi^{\epsilon}_{\ast}(S_{U^{\epsilon}})$
is a vector bundle and (\ref{inv1}) is well-defined. By Theorem~\ref{thm:cross} (i) and Lemma~\ref{lem:empty}, we have \begin{align*} N_{g, d}^{\epsilon}(X)&=N_{g, d}^{\rm{GW}}(X), \quad \epsilon>2, \\ N_{0, d}^{\epsilon}(X)&=0, \quad 0<\epsilon \le 2/d. \end{align*} Here $N_{g, d}^{\rm{GW}}(X)$ is the genus $g$, degree $d$ local GW invariant of $X$. The following result is obtained by the same method as~\cite[Propositions~6, 7]{MOP}, using the localization with respect to the twisted $\mathbb{C}^{\ast}$-action on $X$ and a vanishing result similar to~\cite{FaPan}. We leave the details to the reader. \begin{thm}\label{thm:check} We have the following. \begin{align*} N_{g, d}^{\epsilon}(X)=\left\{ \begin{array}{cc} N_{g, d}^{\rm{GW}}(X), & 2g-2+\epsilon \cdot d>0, \\ 0, & 2g-2+\epsilon \cdot d \le 0. \end{array}\right. \end{align*} \end{thm} Let $F^{\rm{GW}}(X)$ be the generating series, \begin{align*} F^{\rm{GW}}(X)=\sum_{g\ge 0, \ d>0}N_{g, d}^{\rm{GW}}(X)\lambda^{2g-2}t^d. \end{align*} Recall that we have the
following Gopakumar-Vafa formula, \begin{align}\label{GV} F^{\rm{GW}}(X)=\sum_{d\ge 1}\frac{t^d}{4d \sin^2 (d\lambda/2)}. \end{align} By Theorem~\ref{thm:check} and the formula (\ref{GV}), the generating series of $N_{g, d}^{\epsilon}(X)$ satisfies the formula, \begin{align*} F^{\epsilon}(X)&\cneq \sum_{g\ge 0, \ d>0}N_{g, d}^{\epsilon}(X)\lambda^{2g-2}t^d \\ &=\sum_{d\ge 1}\frac{t^d}{4d \sin^2 (d\lambda/2)} -\sum_{0<d\le 2/\epsilon}\frac{1}{d^3}\lambda^{-2}t^d. \end{align*} \subsection{Invariants on a local projective plane} Let us consider the total space of the canonical line bundle of $\mathbb{P}^2$, \begin{align*} X=\mathcal{O}_{\mathbb{P}^2}(-3) \to \mathbb{P}^2. \end{align*} As in the case of a $(-1, -1)$-curve, we can define the invariant by \begin{align*} N_{g, d}^{\epsilon}(X)\cneq \int_{[\overline{Q}_{g, 0}^{\epsilon}(\mathbb{P}^2, d)]^{\mathop{\rm vir}\nolimits}} {\bf e}(R^1 \pi^{\epsilon}_{\ast}(S_{U^{\epsilon}}^{\otimes 3}))\in \mathbb{Q}, \end{align*} since we have the vanishing, \begin{align*} \pi^{\epsilon}_{\ast}(S_{U^{\epsilon}}^{\otimes 3})=0. \end{align*} Note that $N_{g, d}^{\epsilon}(X)$ is a local GW invariant of $X$ when $\epsilon>2$. However for a small $\epsilon$, the following example shows that $N_{g, d}^{\epsilon}(X)$ is different from the local GW invariant of $X$. \begin{exam} For $X=\mathcal{O}_{\mathbb{P}^2}(-3)$, an explicit computation shows that \begin{align*} N_{1, 1}^{\epsilon}(X)=\left\{ \begin{array}{cc} \frac{1}{4}, & \epsilon>1, \\ \frac{3}{4}, & 0<\epsilon \le 1. \end{array}
\right. \end{align*} In fact if $\epsilon>1$, then $N_{1, 1}^{\epsilon}(X)$ coincides with the local GW invariant of $X$, and it is already computed. A list is available in~\cite[Table~1]{AMV} in a Gopakumar-Vafa form.
Let us compute $N_{1, 1}^{\epsilon}(X)$ for
$0<\epsilon \le 1$. In this case, any $\epsilon$-stable quotient of type $(1, 3, 1)$ is MOP-stable, and the moduli space is described as \begin{align*} \overline{Q}_{1, 0}^{\epsilon}(\mathbb{P}^2, 1)\cong \overline{M}_{1, 1}\times \mathbb{P}^2. \end{align*} (cf.~\cite[Example~5.4]{MOP}.) Also there is no obstruction in this case, \begin{align*} [\overline{Q}_{1, 0}^{\epsilon}(\mathbb{P}^2, 1)]^{\mathop{\rm vir}\nolimits} =[\overline{Q}_{1, 0}^{\epsilon}(\mathbb{P}^2, 1)]. \end{align*} Let \begin{align*} \pi \colon U \to \overline{M}_{1, 1}, \end{align*} be the universal curve with a section $D\subset U$. Then \begin{align*} U^{\epsilon}=U\times \mathbb{P}^2 \to \overline{Q}_{1, 0}^{\epsilon}(\mathbb{P}^2, 1), \end{align*} is the universal curve, and the universal subsheaf $S_{U^{\epsilon}} \subset \mathcal{O}_{U^{\epsilon}}^{\oplus 3}$ is given by \begin{align*} S_{U^{\epsilon}}\cong \mathcal{O}_{U}(-D)\boxtimes \mathcal{O}_{\mathbb{P}^2}(-1). \end{align*} Therefore we have \begin{align*} R^1 \pi_{\ast}^{\epsilon}(S_{U^{\epsilon}}^{\otimes 3}) \cong R^1 \pi_{\ast}\mathcal{O}_{U}(-3D) \boxtimes \mathcal{O}_{\mathbb{P}^2}(-3). \end{align*} The vector bundle $R^1\pi_{\ast}\mathcal{O}_U(-3D)$ on $\overline{M}_{1, 1}$ admits a filtration whose subquotients are line bundles $\mathbb{E}^{\vee}$, $\mathbb{L}_1$ and $\mathbb{L}_1^{\otimes 2}$. Therefore the integration of the Euler class is given by \begin{align*} \int_{\overline{Q}_{1, 0}^{\epsilon}(\mathbb{P}^2, 1)}{\bf e}(R^1 \pi_{\ast}^{\epsilon}(S_{U^{\epsilon}}^{\otimes 3})) &=9\cdot \int_{\overline{M}_{1, 1}}(3\psi_1 -c_1(\mathbb{E})), \\ &=\frac{3}{4}. \end{align*} Here the last equality follows from the computation in~\cite{FaPan}, \begin{align*} \int_{\overline{M}_{1, 1}}c_1(\mathbb{E})= \int_{\overline{M}_{1, 1}}\psi_1=\frac{1}{24}. \end{align*} \end{exam} By the above example, the following problem seems to be interesting. \begin{prob}\label{prob1} How do the invariants $N_{g, d}^{\epsilon}(X)$ depend on $\epsilon$, when $X=\mathcal{O}_{\mathbb{P}^2}(-3)$? \end{prob}
\subsection{Generalized tree level GW systems on hypersurfaces} Let $X$ be a smooth projective variety, defined by a degree $N$ homogeneous polynomial $f$ in $n+1$ variables, \begin{align*} X=\{f=0\} \subset \mathbb{P}^{n}. \end{align*} Recall that in Lemma~\ref{lem:nonsing}, the moduli stack $\overline{Q}_{0, m}^{\epsilon}(\mathbb{P}^n, d)$ is shown to be smooth of the expected dimension. We construct the closed substack, \begin{align}\label{close} \overline{Q}_{0, m}^{\epsilon}(X, d)\subset \overline{Q}_{0, m}^{\epsilon}(\mathbb{P}^n, d), \end{align} as follows. For an $\epsilon$-stable quotient of type $(1, n+1, d)$, \begin{align*} 0 \to S \to \mathcal{O}_{C}^{\oplus n+1} \to Q \to 0, \end{align*} we take the dual of the first inclusion, \begin{align*} (s_0, s_1, \cdots, s_n) \colon \mathcal{O}_{C}^{\oplus n+1} \to S^{\vee}. \end{align*} Applying $f$, we obtain the section, \begin{align}\label{fs} f(s_0, s_1, \cdots, s_n) \in H^0(C, S^{\otimes -N}). \end{align} In genus zero, we have the vanishing \begin{align*} R^1 \pi^{\epsilon}_{\ast}(S^{\otimes -N}_{U^{\epsilon}})=0, \end{align*} hence (\ref{fs}) determines a section of the vector bundle $\pi_{\ast}^{\epsilon}(S^{\otimes -N}_{U^{\epsilon}})$, which we denote \begin{align*} s_f \in H^0(\overline{Q}_{0, m}^{\epsilon}(\mathbb{P}^n, d), \pi_{\ast}^{\epsilon}(S^{\otimes -N}_{U^{\epsilon}})). \end{align*} Then we define (scheme-theoretically) \begin{align}\label{QX} \overline{Q}_{0, m}^{\epsilon}(X, d)= \{s_{f}=0\}. \end{align} Note that if $\epsilon>2$, then the above space coincides with the moduli stack of genus zero, degree $d$ stable maps to $X$. Since (\ref{QX}) is a zero locus of a section of a vector bundle on a smooth stack, there is a perfect obstruction theory on it, determined by the two-term complex, \begin{align*} (\pi_{\ast}^{\epsilon}(S^{\otimes -N}))^{\vee} \to \Omega_{\overline{Q}_{0, m}^{\epsilon}(\mathbb{P}^n, d)
}|_{\overline{Q}_{0, m}^{\epsilon}(X, d)}. \end{align*} The associated virtual class is denoted by \begin{align*} [\overline{Q}_{0, m}^{\epsilon}(X, d)]^{\mathop{\rm vir}\nolimits} \in A_{\ast}(\overline{Q}_{0, m}^{\epsilon}(X, d), \mathbb{Q}). \end{align*} The evaluation map factors through $X$, \begin{align*} \mathop{\rm ev}\nolimits_i \colon \overline{Q}_{0, m}^{\epsilon}(X, d) \to X, \end{align*} for $1\le i \le m$. Hence we obtain the diagram, \begin{align*} \xymatrix{ \overline{Q}_{0, m}^{\epsilon}(X, d) \ar[r]^{\alpha} \ar[d]_{(\mathop{\rm ev}\nolimits_1, \cdots, \mathop{\rm ev}\nolimits_m)}& \overline{M}_{0, m} \\ X\times \cdots \times X, & } \end{align*} and a system of maps, \begin{align}\label{system} I_{0, m, d}^{\epsilon} =&\alpha_{\ast}(\mathop{\rm ev}\nolimits_1, \cdots, \mathop{\rm ev}\nolimits_m)^{\ast}: H^{\ast}(X, \mathbb{Q})^{\otimes m} \to H^{\ast}(\overline{M}_{0, m}, \mathbb{Q}). \end{align} It is straightforward to check that the above system of maps (\ref{system}) satisfies the axioms of a tree level GW system~\cite{KonMa}. In particular, we have the genus zero GW type invariants, \begin{align*} \langle I_{0, m, d}^{\epsilon}\rangle (\gamma_1 \otimes \cdots \otimes \gamma_m) =\int_{\overline{M}_{0, m}}I_{0, m, d}^{\epsilon} (\gamma_1 \otimes \cdots \otimes \gamma_m), \end{align*} for $\gamma_i \in H^{\ast}(X, \mathbb{Q})$. The formal function \begin{align*} \Phi^{\epsilon}(\gamma) =\sum_{m\ge 3, \ d\ge 0} \frac{1}{m!} \langle I_{0, m, d}^{\epsilon} \rangle (\gamma^{\otimes m})q^d, \end{align*} satisfies the WDVV equation~\cite{KonMa}, and induces the generalized big (small) quantum cohomology ring, \begin{align*} (H^{\ast}(X, \mathbb{Q})\db[ q \db], \circ^{\epsilon}), \end{align*} depending on $\epsilon\in \mathbb{R}_{>0}$. For $\epsilon>2$, the above ring coincides with the big (small) quantum cohomology ring defined by the GW theory on $X$. \begin{rmk} The above construction of the generalized tree level GW system can be easily generalized to any complete intersection of the Grassmannian $X\subset \mathbb{G}(r, n)$. \end{rmk} \begin{rmk} As discussed in~\cite[Section~10]{MOP} for MOP-stable quotients, it might be possible to define the substack (\ref{close}) and the virtual class on it in every genus. \end{rmk} \subsection{Enumerative invariants on projective Calabi-Yau 3-folds} The construction in the previous subsection enables us to construct genus zero GW type invariants without point insertions on several projective Calabi-Yau 3-folds. One of the interesting examples is a quintic 3-fold, \begin{align*} X \subset \mathbb{P}^4. \end{align*} We can define the invariant, \begin{align}\notag N_{0, d}^{\epsilon}(X)&= \int_{[\overline{Q}_{0, 0}^{\epsilon}(X, d)]^{\mathop{\rm vir}\nolimits}}1 \\ \label{quin:inv} &=\int_{\overline{Q}_{0, 0}^{\epsilon}(\mathbb{P}^4, d)} {\bf e} \left(\pi_{\ast}^{\epsilon}(S_{U^{\epsilon}}^{\vee \otimes 5})\right) \in \mathbb{Q}. \end{align} Another interesting example is a Calabi-Yau 3-fold obtained as a complete intersection of the Grassmannian $\mathbb{G}(2, 7)$. Let us consider the Pl\"{u}cker embedding, \begin{align*} \mathbb{G}(2, 7)\hookrightarrow \mathbb{P}^{20}, \end{align*} and take general hyperplanes \begin{align}\label{hype} H_1, \cdots, H_7 \subset \mathbb{P}^{20}. \end{align} Then the intersection \begin{align*} X =\mathbb{G}(2, 7)\cap H_1 \cap \cdots \cap H_7, \end{align*} is a projective Calabi-Yau 3-fold.
The hyperplanes (\ref{hype}) define the section, \begin{align*} s_{H} \in H^0(\overline{Q}_{0, m}^{\epsilon}(\mathbb{G}(2, 7), d), \pi_{\ast}^{\epsilon}(\wedge^2 S^{\vee}_{U^{\epsilon}})^{\oplus 7}), \end{align*} and we define \begin{align}\label{QXp} \overline{Q}_{0, m}^{\epsilon}(X, d) =\{s_H=0\}. \end{align} As in the previous subsection, there is a perfect obstruction theory and the virtual class on (\ref{QXp}). In particular, we can define \begin{align}\notag N_{0, d}^{\epsilon}(X)&= \int_{[\overline{Q}_{0, 0}^{\epsilon}(X, d)]^{\mathop{\rm vir}\nolimits}}1 \\ \label{grass:inv} &=\int_{\overline{Q}_{0, 0}^{\epsilon}(\mathbb{G}(2, 7), d)} {\bf e} \left(\pi_{\ast}^{\epsilon} (\wedge^{2}S_{U^{\epsilon}}^{\vee})^{\oplus 7}\right) \in \mathbb{Q}. \end{align} For $\epsilon>2$, both
invariants (\ref{quin:inv}), (\ref{grass:inv}) coincide with the GW invariants of $X$. As in Problem~\ref{prob1}, we can address the following problem. \begin{prob} How do the invariants $N_{0, d}^{\epsilon}(X)$ depend on $\epsilon$, when $X$ is a quintic 3-fold in $\mathbb{P}^4$ or a complete intersection of $\mathbb{G}(2, 7)$ of codimension $7$? \end{prob}
Institute for the Physics and Mathematics of the Universe, University of Tokyo
\textit{E-mail address}:[email protected], [email protected]
2000 Mathematics Subject Classification. 14N35 (Primary); 14H60 (Secondary)
\end{document}
\begin{document}
\begin{frontmatter}
\title{Aggregated 2D Range Queries on Clustered Points\tnoteref{t1,t2}} \tnotetext[t1]{Funded in part by European Union's Horizon 2020 research and innovation programme under the Marie Sk{\l}odowska-Curie grant agreement No 690941, by Millennium Nucleus Information and Coordination in Networks ICM/FIC P10-024F (Chile), by MINECO (PGE and FEDER) Projects TIN2013-46238-C4-3-R and TIN2013-46801-C4-3-R (Spain), and also by Xunta de Galicia (GRC2013/053) (Spain). A preliminary partial version of this article appeared in {\em Proc. SPIRE 2014}, pp.\ 215--226.} \tnotetext[t2]{This is an Author's Original Manuscript of an article whose final and definitive form, the Version of Record, has been published in Information Systems [copyright Elsevier], available online at: http://dx.doi.org/10.1016/j.is.2016.03.004.}
\author[udc]{Nieves R. Brisaboa} \author[enx]{Guillermo De Bernardo} \author[uchile]{Roberto Konow} \author[uchile]{Gonzalo Navarro} \author[udec]{Diego Seco\tnoteref{t3}} \tnotetext[t3]{Corresponding author: [email protected]. Tel.: +56 41 2204692; fax: +56 41 2221770}
\address[udc]{University of A Coru\~na, Campus de Elvi\~na, A Coru\~na, Spain} \address[enx]{Enxenio S.L., Ba\~nos de Arteixo, A Coru\~na, Spain} \address[uchile]{DCC, University of Chile, Beauchef 851, Santiago, Chile} \address[udec]{University of Concepci\'on, Edmundo Larenas 219, Concepci\'on, Chile}
\begin{abstract} Efficient processing of aggregated range queries on two-dimensional grids is a common requirement in information retrieval and data mining systems, for example in Geographic Information Systems and OLAP cubes. We introduce a technique to represent grids supporting aggregated range queries that requires little space when the data points in the grid are clustered, which is common in practice. We show how this general technique can be used to support two important types of aggregated queries, namely ranked range queries and counting range queries. Our experimental evaluation shows that this technique can speed up aggregated queries by up to more than an order of magnitude, with a small space overhead. \end{abstract}
\begin{keyword} Compact Data Structures \sep Grids \sep Query Processing \sep Aggregated queries \sep Clustered Points \end{keyword}
\end{frontmatter}
\section{Introduction}
Many problems in different domains can be interpreted geometrically by modeling data records as multidimensional points and transforming queries about the original records into queries on the point sets \cite[Chapter 5]{Berg}. In 2D, for example, orthogonal range queries on a grid can be used to solve queries of the form \emph{``report all employees born between $y_0$ and $y_1$ who earn between $s_1$ and $s_2$ dollars''}, which are very common in databases. In the same way, other aggregated range queries (e.g., top-$k$, counting, quantile, majority, etc.) have proved to be useful for data analysis in various domains, such as Geographic Information Systems (GIS), OLAP databases, Information Retrieval, and Data Mining, among others \cite{NNRtcs13}. In GIS, aggregated range queries can facilitate decision making \cite{Harvey09} by counting, for example, the number of locations within a specific area for which the values of pollution are above a threshold. Similarly, top-$k$ range queries on an OLAP database\footnote{The support of more than two dimensions is essential in OLAP databases. We discuss the extension to multi-dimensional structures in the conclusions.} of sales can be used to find the sellers with most sales in a time slice. In this example, the two dimensions of the grid are seller ids (arranged hierarchically in order to allow queries for sellers, stores, areas, etc.) and time (in periods of hours, days, weeks, etc.), and the weights associated to the data points are the amount of sales made by a seller during a time slice. Thus, the query asks for the $k$ heaviest points in some range $Q=[i_1,i_2]\times[t_1,t_2]$ of the grid.
The approach of modeling problems using a geometric formulation is well-known. There are many classical representations that support the queries required by the model and solve them efficiently. Range trees \cite{Bentley79} and $kd$-trees \cite{Ben75} are two paradigmatic examples. Some of these classical data structures are even optimal both in query time and space. However, such classical representations usually do not take advantage of the distribution of the data in order to reduce the space requirements. When dealing with massive data, which is the case of some of the aforementioned data mining applications, the use of space-efficient data structures can make the difference between maintaining the data in main memory or having to resort to (orders of magnitude slower) external memory.
In this work we consider the case where we have clustered points in a 2D grid, which is a common scenario in domains such as Geographic Information Systems, Web graphs, social networks, etc. There are some well-known principles that hold in most scenarios of that kind. Two examples are Tobler's first law of geography \cite{tobler}, which states that near things are more related than distant things, and the locality of reference for time-dependent data. This is also the case in Web graphs \cite{BoVWFI}, where clusters appear when the Web pages are sorted by URL. We take advantage of these clusters in order to reduce the space of the data structures for aggregated 2D range queries.
The $K^2$-tree \cite{ktree} (a space-efficient version of the classical Quadtree) is a good data structure to solve range queries on clustered points and it has been extensively evaluated in different domains \cite{Alvarez-GarciaB15,dBABNPspire13.3,CaroRB15}. We introduce a general technique to extend this data structure in order to support aggregated range queries. We then illustrate its potential by instantiating the technique in two emblematic cases: range counting and ranked (MAX/MIN) queries within a 2D range.
The paper is organized as follows. First, we introduce basic concepts and related work in Sections \ref{sec:basics} and \ref{sec:relwork}, respectively. In Section \ref{sec:generaltech} we describe the general technique to extend the $K^2$-tree to solve different aggregated range queries on grids. Two paradigmatic examples of such queries are described in Section \ref{sec:topk} (ranked range queries) and Section \ref{sec:rangecounting} (range counting queries). Section \ref{sec:experiments} presents an exhaustive empirical evaluation of the proposed solutions. Finally, Section \ref{sec:conclusions} concludes and sketches some interesting lines of future work.
\section{Basic Concepts}\label{sec:basics} \subsection{Aggregated queries on clustered points}
We consider two dimensional grids with $n$ columns and $m$ rows, where each cell $a_{ij}$ can either be empty or contain a weight in the range $[0,d-1]$ (see Fig. \ref{fig:matrix}). For some problems, we will omit the weights and just consider the non-empty cells, which can be represented as a binary matrix (Fig. \ref{fig:matrix_bin}) in which each cell contains a 1 (if there is a weighted point in the original matrix) or a 0 (in other case).
\begin{figure}
\caption{Running example: a grid of weighted points, the binary matrix representing its topology, and a range query on it. (Graphics omitted.)}
\label{fig:matrix}
\label{fig:matrix_bin}
\label{fig:matrix_qry}
\end{figure}
Let $t$ be the number of 1s in the binary matrix (i.e., the number of weighted points). If we can partition the $t$ points into $c$ clusters, not necessarily disjoint, with $c \ll t$, we will say that the points are clustered. This definition is used by Gagie et al.\ \cite{GHKNPSdcc15.2} to show that in such case a Quadtree needs only $O(c\log u + \sum_i t_i \log l_i)$ bits, where $u=\max(n,m)$, and $t_i$ and $l_i$ are the number of points and the diameter of cluster $i$, respectively.
A range query $Q=[x_1,x_2]\times[y_1,y_2]$ defines a rectangle with all the columns in the range $[x_1,x_2]$ and the rows in $[y_1,y_2]$ (see Fig. \ref{fig:matrix_qry}). An aggregated range query defines, in addition to the range, an aggregate function that must be applied to the data points in the query range. Examples of aggregated queries are $\mathcal{COUNT}(Q)$, which counts the number of data points in the query range, $\mathcal{MAX/MIN}(Q)$, which computes the maximum (alt. minimum) value in the query range, and its generalization top-$k$, which retrieves the $k$ lightest (alt. heaviest) points in the query range. These top-$k$ queries are also referred to in the literature as ranked range queries. For the range query $q$ in Fig. \ref{fig:matrix_qry} the result of $\mathcal{COUNT}(q)$ is 6, $\mathcal{MAX}(q)$ returns 7, $\mathcal{MIN}(q)$ returns 1, and the top-$3$ heaviest elements are 7, 4, and 3.
There are other interesting data-analysis queries on two-dimensional grids. For example, $\mathcal{QUANTILE}(Q,a)$ returns the $a$-th smallest value in $Q$, and $\mathcal{MAJORITY}(Q,\alpha)$ retrieves those values in $Q$ that appear with relative frequency larger than $\alpha$. These and other queries have been studied by Navarro et al.\ \cite{NNR13}, who introduce space-efficient data structures with good time performance. We restrict ourselves to an emblematic subset of these queries, and propose data structures that are even more space-efficient when the points in the set are clustered.
\subsection{Rank and select on bitmaps} Two basic primitives used by most space-efficient data structures are rank and select on bitmaps. Let $B[1,n]$ be a sequence of bits, or a bitmap. We define operation $rank_b(B,i)$ as the number of occurrences of $b \in \{0,1\}$ in $B[1,i]$, and $select_b(B,j)$ as the position in $B$ of the $j$-th occurrence of $b$. $B$ can be represented using $n+o(n)$ bits \cite{Jac89,Cla96}, so that both operations are solved in constant time. These operations have proved very efficient in practice \cite{CN08}. In addition, when the bitmaps are compressible, it is possible to reduce the space and still support these operations in constant time \cite{RRR02}.
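To fix ideas, the following C++ sketch (our own illustrative code, not the interface of any particular library) implements a plain bitmap with these two primitives. It uses naive scans and the popcount builtin available in GCC/Clang, so $rank$ costs $O(n/64)$ word operations and $select$ costs $O(n)$, instead of the constant time obtained with $o(n)$ extra bits of directories; positions are $0$-based here, while the text uses $1$-based positions.

\begin{verbatim}
#include <cstddef>
#include <cstdint>
#include <vector>

// Plain bitmap with naive rank/select (0-based positions).
// Real implementations add o(n) bits of directories for O(1) rank.
struct Bitmap {
    std::vector<uint64_t> words;
    std::size_t n;
    explicit Bitmap(std::size_t bits)
        : words((bits + 63) / 64, 0), n(bits) {}
    void set(std::size_t i) { words[i / 64] |= uint64_t(1) << (i % 64); }
    bool get(std::size_t i) const { return words[i / 64] >> (i % 64) & 1; }
    // rank1(i): number of 1s in positions [0, i].
    std::size_t rank1(std::size_t i) const {
        std::size_t r = 0, w = i / 64;
        for (std::size_t k = 0; k < w; k++)
            r += __builtin_popcountll(words[k]);
        uint64_t mask = (i % 64 == 63)
            ? ~uint64_t(0) : ((uint64_t(1) << (i % 64 + 1)) - 1);
        return r + __builtin_popcountll(words[w] & mask);
    }
    std::size_t rank0(std::size_t i) const { return (i + 1) - rank1(i); }
    // select1(j): position of the j-th 1 (j >= 1), or n if there is none.
    std::size_t select1(std::size_t j) const {
        for (std::size_t i = 0; i < n; i++)
            if (get(i) && --j == 0) return i;
        return n;
    }
};
\end{verbatim}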
\subsection{Wavelet tree and discrete grids}\label{subsec:wt}
An elegant generalization of rank and select queries to an arbitrary alphabet $\Sigma$ of size $\sigma$ is provided by the wavelet tree \cite{GGV03}. Given a sequence $S$ over the alphabet $\Sigma$, the wavelet tree supports rank, select and access in $O(\log \sigma)$ time with $n\log \sigma+o(n\log \sigma)$ bits. The wavelet tree is a complete binary tree, in which each node represents a range $R\subseteq [1,\sigma]$ of the alphabet $\Sigma$, its left child represents a subset $R_\ell \subset R$ and the right child the subset $R_r = R\setminus R_\ell$. Every node representing subset $R$ is associated with a subsequence $S'$ of the input sequence $S$ composed of the elements whose values are in $R$. The node only stores a bitmap of length $|S'|$ such that a $0$ bit at position $i$ means that $S'[i]$ belongs to $R_\ell$, and a $1$ bit means that it belongs to $R_r$. The three basic operations require to traverse the tree from the root to a leaf (for rank and access) or from a leaf to the root (for select) via rank and select operations on the bitmaps stored at the nodes of the wavelet tree (those bitmaps are then represented with the techniques cited above).
The wavelet tree can also be used to represent grids \cite{Cha88,MN06}. An $n \times m$ grid with $n$ points, exactly one per column (i.e., $x$ values are unique), can be represented using a {\em wavelet tree}. In this case, this is a perfect balanced binary tree of height $\lceil \log m\rceil$ where each node corresponds to a contiguous range of values $y \in [1,m]$ and represents the points falling in that $y$-range, sorted by increasing $x$-coordinate. The root represents $[1,m]$ and the two children of each node split its $y$-range by half. The leaves represent a single $y$-coordinate. Each internal node stores a bitmap, which tells whether each point corresponds to its left or right child. Using $rank$ and $select$ queries on the bitmaps, the wavelet tree uses $n\log m + o(n\log m)$ bits, and can count the number of points in a range in $O(\log m)$ time, because the query is decomposed into bitmap ranges on at most 2 nodes per wavelet tree level (see Section \ref{sec:relwork}). Any point can be tracked up (to find its $x$-coordinate) or down (to find its $y$-coordinate) in $O(\log m)$ time as well.
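The following sketch (again our own illustrative code) shows this counting procedure on a pointer-based wavelet tree over a grid with exactly one point per column, where \texttt{ys[i]} is the $y$-coordinate of the point in column $i$. To keep it short, each node stores plain prefix counts of left-going points instead of a rank-enabled compressed bitmap, so it is didactic rather than space-efficient.

\begin{verbatim}
#include <cstddef>
#include <memory>
#include <vector>

// Pointer-based wavelet tree on y-values in [lo, hi]; one point per column.
// Each node keeps zeros[i] = number of 0-bits (left-going points) among the
// first i positions, a didactic substitute for a rank-enabled bitmap.
struct WaveletTree {
    int lo, hi;
    std::vector<int> zeros;
    std::unique_ptr<WaveletTree> left, right;

    WaveletTree(std::vector<int> ys, int lo_, int hi_) : lo(lo_), hi(hi_) {
        if (lo == hi || ys.empty()) return;       // leaf or empty node
        int mid = (lo + hi) / 2;
        zeros.assign(ys.size() + 1, 0);
        std::vector<int> l, r;
        for (std::size_t i = 0; i < ys.size(); i++) {
            bool toLeft = ys[i] <= mid;
            zeros[i + 1] = zeros[i] + (toLeft ? 1 : 0);
            (toLeft ? l : r).push_back(ys[i]);
        }
        left  = std::make_unique<WaveletTree>(std::move(l), lo, mid);
        right = std::make_unique<WaveletTree>(std::move(r), mid + 1, hi);
    }
    // Number of points with x in [x1, x2] (0-based) and y in [y1, y2].
    int count(int x1, int x2, int y1, int y2) const {
        if (x1 > x2 || y2 < lo || hi < y1) return 0;
        if (y1 <= lo && hi <= y2) return x2 - x1 + 1;  // fully covered node
        int zl = zeros[x1], zr = zeros[x2 + 1];
        return left->count(zl, zr - 1, y1, y2)         // left-going points
             + right->count(x1 - zl, x2 - zr, y1, y2); // right-going points
    }
};
\end{verbatim}

Since the recursion visits at most two nodes per level, \texttt{count} runs in $O(\log m)$ time, matching the bound given above.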
When the grids may contain more than one point per column, an additional bitmap $B$ is used to map from the original domain to a new domain that has one point per column. This bitmap stores, for each column, a bit 1 followed by as many zeros as the number of points in such column. Then, a range of columns $[c_s,c_e]$ in the original domain can be mapped to a new range $[select_1(B,c_s)-c_s+1,select_1(B,c_e+1)-c_e-1]$. If the grid is very sparse and/or the distribution of the data points is very skewed, this bitmap can be compressed with RRR \cite{RRR02} or the sd-bitvector \cite{OkanoharaS07}.
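For example, if columns $1..3$ contain $2$, $1$, and $3$ points, respectively, then $B=100\,10\,1000$; we further append a final $1$ as a sentinel (an implementation detail we assume here, so that $select_1(B,c_e+1)$ is defined when $c_e$ is the last column), obtaining $B=1001010001$. The range of columns $[c_s,c_e]=[2,3]$ is then mapped to $[select_1(B,2)-2+1, select_1(B,4)-3-1]=[4-2+1, 10-3-1]=[3,6]$, i.e., exactly the four points stored in columns $2$ and $3$.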
\subsection{$K^2$-trees}\label{sec_k2tree} The $K^2$-tree~\cite{ktree} is a data structure designed to compactly represent sparse binary matrices (which can also be regarded as point grids). The $K^2$-tree subdivides the matrix into $K^2$ submatrices of equal size. In this regard, when $K=2$, the $K^2$-tree performs the same space partitioning as the traditional Quadtree. The submatrices are considered left-to-right and top-to-bottom (i.e., in Morton order), and each is represented with a bit, set to 1 if the submatrix contains at least one non-zero cell. Each node whose bit is 1 is recursively decomposed, subdividing its submatrix into $K^2$ children, and so on. The subdivision ends when a fully-zero submatrix is found or when we reach the individual cells. A $K^2$-tree for the running example is shown in Fig. \ref{fig:k2tree}.
\begin{figure}
\caption{On the left, the conceptual $K^2$-tree for the binary matrix in Fig. \ref{fig:matrix_bin} (highlighted edges are traversed when computing the range query in Fig. \ref{fig:matrix_qry}). On the right, the two bitmaps that are actually used to store the tree.}
\label{fig:k2tree}
\end{figure}
The $K^2$-tree is stored in two bitmaps: $T$ stores the bits of all the levels except the last one, in a level-order traversal, and $L$ stores the bits of the last level (corresponding to individual cells). Given a node whose bit is at position $p$ in $T$, its children nodes are located after position $rank_1(T,p) \cdot K^2$. This property enables $K^2$-tree traversals using just $T$ and $L$.
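The following C++ sketch (illustrative code of our own, with a naive linear-time $rank$ in place of a constant-time structure) uses this rule to check whether a single cell of the matrix contains a 1, traversing the conceptual tree from the root using just $T$ and $L$:

\begin{verbatim}
#include <vector>

// Minimal K^2-tree cell lookup over the bitmaps T (internal levels) and
// L (last level), both in level order; naive O(p) rank for brevity.
struct K2Tree {
    std::vector<bool> T, L;
    int K, n;                         // arity and matrix side (a power of K)

    int rank1(int p) const {          // number of 1s in T[0..p]
        int r = 0;
        for (int i = 0; i <= p; i++) r += T[i];
        return r;
    }
    // True iff cell (row, col) of the matrix contains a 1.
    bool get(int row, int col) const {
        int p = -1;                   // virtual root, children start at 0
        for (int len = n / K; len >= 1; len /= K) {
            int children = (p == -1) ? 0 : rank1(p) * K * K;
            p = children + (row / len) * K + (col / len);
            if (len == 1) return L[p - (int)T.size()];  // last level: in L
            if (!T[p]) return false;  // empty submatrix: prune the search
            row %= len; col %= len;   // move into the chosen submatrix
        }
        return false;                 // not reached when n is a power of K
    }
};
\end{verbatim}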
The worst-case space, if $t$ points are in an $n \times n$ matrix, is $K^2\,t \log_{K^2} \frac{n^2}{t} (1+o(1))$ bits. This can be reduced to $t \log \frac{n^2}{t} (1+o(1))$ if the bitmaps are compressed. This is similar to the space achieved by a wavelet tree, but in practice $K^2$-trees use much less space when the points are clustered. Gagie et al.\ \cite{GHKNPSdcc15.2} show that this quadtree-like partitioning results in $O(c\log n+\sum_i t_i \log l_i)$ bits, when the $t$ points can be partitioned into $c$ clusters with $t_1, \ldots, t_c$ points and diameters $l_1,\ldots,l_c$. Therefore, the $K^2$-tree is competitive in domains where such clusters arise, for example in Web graphs or social networks.
Among other types of queries (such as direct/reverse neighbors, check edge, etc.), the $K^2$-tree can answer range queries with multi-branch top-down traversal of the tree, following only the branches that overlap the query range. This is illustrated in Algorithm \ref{alg:range} (adapted from \cite[Alg.\ 1]{ktree}), which solves the query $Q= [x_1,x_2] \times [y_1,y_2]$ by invoking $Range(n,x_1,x_2,y_1,y_2,0,0,-1)$.
To show an example, in Fig. \ref{fig:k2tree} the edges traversed in the computation of the range query in Fig. \ref{fig:matrix_qry} are highlighted. As mentioned above, this traversal is computed via rank queries on $T$. While this algorithm has no good worst-case time guarantees, in practice times are competitive.
\begin{algorithm}[t]
\caption{{\bf Range}$(n,x_1,x_2,y_1,y_2,d_p,d_q,p)$ lists all non-empty cells in $[x_1,x_2] \times [y_1,y_2]$ with a $K^2$-tree}
\eIf (\tcc*[h]{leaf}){$p \ge |T|$}{
\lIf{$L[p-|T|]=1$}{output $(d_p,d_q)$}
}
(\tcc*[h]{internal node})
{
\If{$p = -1 ~\mathbf{or}~ T[p]=1$}{
$y \leftarrow rank_1(T,p) \cdot k^2$ \\
\For{$i\leftarrow\lfloor x_1 / (n/k) \rfloor \ldots \lfloor x_2 / (n/k) \rfloor$}{
\lIf{$i=\lfloor x_1 / (n/k) \rfloor$}{
{$x_1' \leftarrow x_1~\textrm{mod}~(n/k)$}
}
\lElse{$x_1' \leftarrow 0$}
\lIf{$i=\lfloor x_2 / (n/k) \rfloor$}{
{$x_2' \leftarrow x_2~\textrm{mod}~(n/k)$}
}
\lElse{$x_2' \leftarrow (n/k)-1$}
\For{$j\leftarrow\lfloor y_1 / (n/k) \rfloor \ldots \lfloor y_2 / (n/k) \rfloor$}{
\lIf{$j=\lfloor y_1 / (n/k) \rfloor$}{
{$y_1' \leftarrow y_1~\textrm{mod}~(n/k)$}
}
\lElse{$y_1' \leftarrow 0$}
\lIf{$j=\lfloor y_2 / (n/k) \rfloor$}{
{$y_2' \leftarrow y_2~\textrm{mod}~(n/k)$}
}
\lElse{$y_2' \leftarrow (n/k)-1$}
{{\bf Range}$(n/k,x_1',x_2',y_1',y_2',d_p+(n/k)\cdot i, d_q+(n/k)\cdot j,y+k\cdot i + j)$}
}
}
}
}
\label{alg:range} \end{algorithm}
\subsection{Treaps, priority search trees and ranked range queries}
A \emph{treap} \cite{SA96} is a binary search tree whose $n$ nodes have two attributes: \emph{key} and \emph{priority}. The treap maintains the binary search tree invariants for the keys and the heap invariants for the priorities, that is, the key of a node is larger than those in its left subtree and smaller than those in its right subtree, whereas its priority is not smaller than those in its subtree. The treap does not guarantee logarithmic height, except on expectation if priorities are independent of keys \cite{MS97}. A treap can also be regarded as the Cartesian tree \cite{Vui80} of the sequence of priorities once the values are sorted by keys. The succinct representations of the Cartesian tree topology \cite{FH11} are called range maximum query (RMQ) data structures, use just $2n+o(n)$ bits, and are sufficient to find the maximum in any range of the sequence. By also storing the priority data, they can answer top-$k$ queries in $O(k \log k)$ or $O(k \log\log n)$ time. The treap can also be used to compress the representation of keys and priorities \cite{KNCLOsigir13}. Similar data structures for two or more dimensions are convenient only for dense grids (full of points) \cite{GIKRR11}.
The \emph{priority search tree} \cite{mc85} is somewhat similar, but it is balanced. In this case, a node is not the one with highest priority in its subtree, but the highest-priority element is stored in addition to the element at the node, and removed from the subtree. Priority search trees can be used to solve 3-sided range queries on $t$-point grids, returning $k$ points in time $O(k+\log t)$. This has been used to add rank query capabilities to several index data structures such as suffix trees and range trees \cite{iwona05}.
\section{Related Work}\label{sec:relwork}
Navarro et al.\ \cite{NNR13} introduce compact data structures for various queries on two-dimensional weighted points, including range top-$k$ queries and range counting queries. Their solutions are based on wavelet trees. For range top-$k$ queries, the bitmap of each node of the wavelet tree is enhanced as follows: Let $x_1,\ldots,x_r$ be the points represented at a node, and $w(x)$ be the weight of point $x$. Then, a RMQ data structure built on $w(x_1),\ldots,w(x_r)$ is stored together with the bitmap. Such a structure uses $2r+o(r)$ bits and finds the maximum weight in any range $[w(x_i),\ldots,w(x_j)]$ in constant time \cite{FH11} and without accessing the weights themselves. Therefore, the total space becomes $3n\log m + o(n\log m)$ bits.
To solve top-$k$ queries on a grid range $Q= [x_1,x_2] \times [y_1,y_2]$, we first traverse the wavelet tree to identify the $O(\log m)$ bitmap intervals where the points in $Q$ lie. The heaviest point in $Q$ in each bitmap interval is obtained with an RMQ, but we need to obtain the actual priorities in order to find the heaviest among the $O(\log m)$ candidates. The priorities are stored sorted by $x$- or $y$-coordinate, so we obtain each one in $O(\log m)$ time by tracking the point with maximum weight in each interval. Thus a top-1 query is solved in $O(\log^2 m)$ time. For a top-$k$ query, we must maintain a priority queue of the candidate intervals and, each time the next heaviest element is found, we remove it from its interval and reinsert in the queue the two resulting subintervals. The total query time is $O((k+\log m)\log (km))$. It is possible to reduce the time to $O((k+\log m)\log^\epsilon m)$, using $O(\frac{1}{\epsilon}n\log m)$ bits, for any constant $\epsilon>0$ \cite{NNsoda12}, but the space usage is much higher, even if linear.
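The interval-splitting strategy is easy to illustrate in one dimension. The following C++ sketch (our own didactic code, not the compact structure of \cite{NNR13}) reports the $k$ largest values of an array range using a range-maximum query plus a priority queue of candidate intervals, exactly as described above; a naive linear-scan RMQ stands in for the $2n+o(n)$-bit succinct structures:

\begin{verbatim}
#include <queue>
#include <tuple>
#include <vector>

// Report the k largest values in w[l..r] by the candidate-interval method:
// pop the best interval, report its maximum, and reinsert the two pieces
// around the maximum's position.
std::vector<int> topK(const std::vector<int>& w, int l, int r, int k) {
    auto rmq = [&](int a, int b) {          // position of a maximum in w[a..b]
        int best = a;
        for (int i = a + 1; i <= b; i++) if (w[i] > w[best]) best = i;
        return best;
    };
    using Cand = std::tuple<int, int, int, int>; // (value, pos, left, right)
    std::priority_queue<Cand> pq;                // max-heap on value
    int p = rmq(l, r);
    pq.emplace(w[p], p, l, r);
    std::vector<int> out;
    while (!pq.empty() && (int)out.size() < k) {
        auto [val, m, a, b] = pq.top(); pq.pop();
        out.push_back(val);                      // report the next heaviest
        if (m > a) { int q = rmq(a, m - 1); pq.emplace(w[q], q, a, m - 1); }
        if (m < b) { int q = rmq(m + 1, b); pq.emplace(w[q], q, m + 1, b); }
    }
    return out;
}
\end{verbatim}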
Wavelet trees can also compute range counting queries in $O(\log m)$ time with $n\log m+o(n \log m)$ bits. The algorithm to solve range counting queries on a grid range $Q= [x_1,x_2] \times [y_1,y_2]$ also starts by traversing the wavelet tree to identify the $O(\log m)$ bitmap intervals where the points in $Q$ lie, but then it just adds up all the bitmap interval lengths.
A better result, using multi-ary wavelet trees, was introduced by Bose et al.\ \cite{BoseHMM09}. They match the optimal $O(\log n / \log\log n)$ time using just $n\log n +o(n\log n)$ bits on an $n \times n$ grid. Barbay et al.\ \cite{BCNic13} extended the results to $n \times m$ grids. This query time is optimal within space $O(n~polylog(n))$ \cite{Patrascu07}.
\section{Augmenting the $K^2$-tree}\label{sec:generaltech} In this section we describe a general technique that can be used to solve aggregated range queries on clustered points. We then present two applications of this technique to answer two paradigmatic examples, ranked range queries and range counting queries. These examples illustrate how to adjust and tune the general technique for particular operations.
Let $M[n\times n]$ be a matrix in which cells can be empty or contain a weight in the range $[0, d-1]$ and let $BM[n\times n]$ be a binary matrix in which each cell contains a zero or a one. Matrix $BM$ represents the topology of $M$, that is, $BM[i][j]=1$ iff $M[i][j]$ is not empty.
We store separately the topology of the matrix and the weights associated with the non-empty cells. For the topology, we use a $K^2$-tree representation of $BM$ (recall Section \ref{sec_k2tree}), which will take advantage of clustering.
A level-wise traversal of the $K^2$-tree can be used to map each node to a position in an array of aggregated values, which stores a {\em summary} of the weights in the submatrix of the node. Thus the position where the aggregated value of a node is stored is easily computed from the node position in $T$.
The specific value of this summary depends on the operation. For example, for ranked range queries (Section \ref{sec:topk}) the summary represents the maximum weight in the corresponding submatrix, whereas for counting queries (Section \ref{sec:rangecounting}), it represents the number of non-empty cells in the submatrix. However, a common property is that the value associated with a node aggregates information about its $K^2$ children. Therefore, we use a sort of differential encoding \cite{CNS14} to encode the values of each node with respect to the value of its parent. In other words, the information of a node (such as its summary and number of children) is used to represent the information of its children in a more compact way. In order to access the original (non-compressed) information of a node we need to first access its parent (i.e., the operations in this technique are restricted to root-to-leaf traversals).
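For example, suppose the aggregated value of a node is $8$ (say, the maximum weight in its submatrix, as in the $K^2$-treap of Section \ref{sec:topk}) and its $K^2=4$ children have aggregated values $7$, $8$, $5$, and $3$. The children are stored as $8-7=1$, $8-8=0$, $8-5=3$, and $8-3=5$; during a top-down traversal, the original value of each node is recovered by subtracting the stored value from the (already decoded) value of its parent.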
To summarize, we use a $K^2$-tree to represent the topology of the data and augment each node of such a tree with additional values that represent the aggregated information related with the operation to be supported. These aggregated values are differentially encoded with respect to information of the parent node in order to store them in reduced space. In the following sections we show how both the $K^2$-tree and the differentially encoded values can be tuned to efficiently solve two types of queries.
Finally, note that we present our results for matrices of size $n \times n$. This does not lose generality, as we can extend a matrix $M'[n \times m]$ with zeros to complete a square matrix $M[n \times n]$ (w.l.o.g. we assume $m \leq n$). As the topology of the matrix is represented with a $K^2$-tree, this does not cause a significant overhead because the $K^2$-tree is efficient to handle large areas of zeros. Actually, we round $n$ up to the next power of $K$ \cite{ktree}.
\section{Answering Ranked (Max/Min) Range Queries}\label{sec:topk}
We present a first application of the general technique described in the previous section, to solve ranked range queries. We present the case of $\mathcal{MAX}$ queries, but the results are analogous for $\mathcal{MIN}$ queries. We name this data-structure $K^2$-treap, as it conceptually combines a $K^2$-tree with a treap data structure.
Consider a matrix $M[n \times n]$ where each cell can either be empty or contain a weight in the range $[0,d-1]$. We consider a quadtree-like recursive partition of $M$ into $K^2$ submatrices, the same performed in the $K^2$-tree with binary matrices. We build a conceptual $K^2$-ary tree similar to the $K^2$-tree, as follows: the root of the tree will store the coordinates of the cell with the maximum weight of the matrix, and the corresponding weight. Then the cell just added to the tree is marked as \emph{empty},
deleting it from the matrix. If many cells share the maximum weight, we pick any of them. Then, the matrix is conceptually decomposed into $K^2$ equal-sized submatrices, and we add $K^2$ children nodes to the root of the tree, each representing one of the submatrices. We repeat the assignment process recursively for each child, assigning to each of them the coordinates and value of the heaviest cell in the corresponding submatrix and removing the chosen point. The procedure continues recursively on each branch until we reach the cells of the matrix, or we find a completely empty submatrix (either because the submatrix was initially empty or because we emptied it by successively extracting heaviest points).
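The construction can be sketched as follows (our own illustrative C++ for $K=2$; it builds the conceptual tree with explicit pointers, whereas the actual data structure serializes it into the compact arrays described later in this section):

\begin{verbatim}
#include <memory>
#include <optional>
#include <vector>

// Conceptual K^2-treap construction for K = 2: each node records the
// heaviest remaining cell of its submatrix, which is removed before the
// submatrix is split into K^2 quadrants; empty submatrices are pruned.
struct Node {
    int x, y, w;                      // local maximum and its weight
    std::unique_ptr<Node> child[4];
};
using Matrix = std::vector<std::vector<std::optional<int>>>;

std::unique_ptr<Node> build(Matrix& M, int x0, int y0, int len) {
    int bx = -1, by = -1;             // locate the heaviest non-empty cell
    for (int i = x0; i < x0 + len; i++)
        for (int j = y0; j < y0 + len; j++)
            if (M[i][j] && (bx < 0 || *M[i][j] > *M[bx][by])) {
                bx = i; by = j;
            }
    if (bx < 0) return nullptr;       // empty submatrix: prune this branch
    auto node = std::make_unique<Node>();
    node->x = bx; node->y = by; node->w = *M[bx][by];
    M[bx][by].reset();                // mark the chosen maximum as empty
    if (len > 1) {                    // recurse on the four quadrants
        int h = len / 2, c = 0;
        for (int dx : {0, h})
            for (int dy : {0, h})
                node->child[c++] = build(M, x0 + dx, y0 + dy, h);
    }
    return node;
}
\end{verbatim}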
Fig.~\ref{fig:ck2treap} shows an example of $K^2$-treap construction, for $K=2$. On the top of the image we show the state of the matrix at each level of the decomposition. $M0$ represents the original matrix, where the maximum value is highlighted. The coordinates and value of this cell are stored in the root of the tree. In the next level of the decomposition (matrix $M1$) we find the maximum values in each quadrant (notice that the cell assigned to the root has already been removed from the matrix) and assign them to the children of the root node. The process continues recursively, subdividing each matrix into $K^2$ submatrices. The cells chosen as local maxima are highlighted in the matrices corresponding to each level, except in the last level where all the cells are local maxima. Empty submatrices are marked in the tree with the symbol ``\texttt{-}''.
\begin{figure}
\caption{Example of a $K^2$-treap construction for the matrix in Fig. \ref{fig:matrix}. At the top, $M_i$ represents the state of the matrix at level $i$ of the decomposition. On the bottom, the conceptual $K^2$-treap.}
\label{fig:ck2treap}
\end{figure}
\subsection{Local maximum coordinates}
The data structure is represented in three parts: The coordinates of the local maxima, the weights of the local maxima, and the tree topology.
The conceptual $K^2$-treap is traversed level-wise, reading the sequence of cell coordinates from left to right in each level. The sequence of coordinates at each level $\ell$ is stored in a different sequence $coord[\ell]$. The coordinates at each level $\ell$ of the tree are transformed into an offset in the corresponding submatrix, representing each $c_i$ as $c_i~\mathrm{mod}~(n / K^\ell)$ using $\lceil \log(n) - \ell\log K\rceil$ bits. For example, in Fig.~\ref{k2treapReal} (top) the coordinates of node $N1$ have been transformed from the global value $(4,4)$ to a local offset $(0,0)$. In the bottom of Fig.~\ref{k2treapReal} we highlight the coordinates of nodes $N0$, $N1$ and $N2$ in the corresponding $coord$ arrays. In the last level all nodes represent single cells, so there is no $coord$ array in this level. With this representation, the worst-case space for storing $t$ points is $\sum_{\ell=0}^{\log_{K^2}(t)} 2K^{2\ell}\log\frac{n}{K^\ell} = t\log\frac{n^2}{t} (1+O(1/K^2))$, that is, the same as if we stored the points using the $K^2$-tree.
\begin{figure}
\caption{Storage of the conceptual tree in our data structures. On the top, the differentially encoded conceptual $K^2$-treap. On the bottom left, the conceptual $K^2$-tree that stores the topology of the matrix, and its bitmap implementation $T$. On the bottom right, the local maximum values.}
\label{k2treapReal}
\end{figure}
\subsection{Local maximum values} The maximum value in each node is encoded differentially with respect to the maximum of its parent node~\cite{CNS14}. The result of the differential encoding is a new sequence of non-negative values, smaller than the original. Now the $K^2$-treap is traversed level-wise and the complete sequence of values is stored in a single sequence named $values$. To exploit the small values while allowing efficient direct access to the array, we represent $values$ with Direct Access Codes (DACs)~\cite{BLN12}. Following the example in Fig.~\ref{k2treapReal}, the value
of node $N1$ has been transformed from 7 to $8-7=1$. The bottom of the figure depicts the
complete sequence $values$. We also store a small array $\mathit{first}[0,\log_{K^2} n]$ that stores the offset in $values$ where each level starts.
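Continuing the construction sketch above, the level-wise differential encoding of the weights (before the DAC compression, which we omit) can be sketched as follows:

\begin{verbatim}
def encode_values(root):
    # root: conceptual node ((x, y), weight, children) built above.
    # Produces the differential sequence `values` and the per-level
    # offsets `first`.
    values, first = [], []
    level = [(root, None)]              # pairs (node, parent weight)
    while level:
        first.append(len(values))
        nxt = []
        for node, pw in level:
            if node is None:
                continue                # empty submatrix: nothing stored
            _, w, children = node
            values.append(w if pw is None else pw - w)
            nxt.extend((c, w) for c in children)
        level = nxt
    return values, first
\end{verbatim}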
\subsection{Tree structure} We separate the structure of the tree from the values stored in the nodes. The tree structure of the $K^2$-treap is stored in a $K^2$-tree. Fig.~\ref{k2treapReal} shows the $K^2$-tree representation of the example tree, where only cells with a value are labeled with a 1. We will consider a $K^2$-tree stored in a single bitmap $T$ with $rank$ support, that contains the sequence of bits from all the levels of the tree. Our representation differs from a classic $K^2$-tree (which uses two bitmaps $T$ and $L$ and only adds rank support to $T$) because we will need to perform rank operations also in the last level of the tree. The other difference is that the points whose coordinates are stored explicitly are removed from the grid. Thus, we save the $K^2$-tree space needed to store those removed points. Our analysis above shows that, in a worst-case scenario, the saved and the extra space cancel each other out, thus storing those explicit coordinates is free in the worst case.
\subsection{Query processing} \subsubsection{Basic navigation} To access a cell $C=(x,y)$ in the $K^2$-treap we start accessing the $K^2$-tree root. The coordinates and weight of the element stored at the root node are $(x_0,y_0)=coord[0][0]$ and $w_0=values[0]$. If $(x_0,y_0)=C$, we return $w_0$ immediately.
Otherwise, we find the quadrant where the cell would be located and navigate to that node in the $K^2$-tree. Let $p$ be the position of the node in $T$. If $T[p]=0$ we know that the complete submatrix is empty and return immediately. Otherwise, we need to find the coordinates and weight of the new node. Since only nodes set to 1 in $T$ have coordinates and weights, we compute $r=rank_1(T,p)$. The value of the current node will be at $values[r]$, and its coordinates at $coord[\ell][r-\mathit{first}[\ell]]$, where $\ell$ is the current level. We rebuild the absolute value and coordinates, $w_1$ as $w_0 - values[r]$ and $(x_1,y_1)$ by adding the current submatrix offset to $coord[\ell][r-\mathit{first}[\ell]]$. If $(x_1,y_1)=C$ we return $w_1$, otherwise we find again the appropriate quadrant in the current submatrix where $C$ would be located, and so on. The formula to find the children is identical to that of the $K^2$-tree. The process is repeated recursively until we find a 0 bit in the target submatrix, we find a 1 in the last level of the $K^2$-tree, or we find the coordinates of the cell in an explicit point.
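The traversal just described corresponds to the following Python sketch (our illustration: $T$ is taken as a plain 0/1 list, $rank_1(T,p)$ counts the 1s in $T[0..p]$ with a linear scan instead of a constant-time structure, and coordinates are interpreted as (row, column)):

\begin{verbatim}
def rank1(T, p):
    # number of 1s in T[0..p]; a real implementation uses an
    # o(n)-bit rank structure instead of this linear scan
    return sum(T[:p + 1])

def access(T, values, coord, first, K, n, x, y):
    # weight of cell (x, y), or None if the cell is empty
    w, (cx, cy) = values[0], coord[0][0]
    if (cx, cy) == (x, y):
        return w
    children, size, level = 0, n, 0   # position of the root's K^2 bits
    while True:
        size //= K                    # side of the child submatrices
        i = ((x // size) % K) * K + ((y // size) % K)
        p = children + i              # bit of the target child
        if T[p] == 0:
            return None               # empty submatrix
        r = rank1(T, p)               # index among non-empty nodes
        level += 1
        w -= values[r]                # rebuild the absolute weight
        if size == 1:                 # last level: the bit is the cell
            return w
        ox, oy = coord[level][r - first[level]]
        cx = (x - x % size) + ox      # rebuild absolute coordinates
        cy = (y - y % size) + oy
        if (cx, cy) == (x, y):
            return w
        children = r * K * K          # children of this node start here
\end{verbatim}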
\subsubsection{Top-$k$ queries}
The process to answer top-$k$ queries starts at the root of the tree. Given a range $Q= [x_1,x_2] \times [y_1, y_2]$, the process initializes an empty max-priority queue and inserts the root of the $K^2$-tree. The priority queue stores, in general, $K^2$-tree nodes sorted by their associated maximum weight (for the root node, this is $w_0$). Now, we iteratively extract the first element from the priority queue (the first time this is the root). If the coordinates of its maximum element fall inside $Q$, we output it as the next answer. In either case, we insert all the children of the extracted node whose submatrix intersects with $Q$, and iterate. The process finishes when $k$ results have been found or when the priority queue becomes empty (in which case there are fewer than $k$ elements in $Q$).
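A sketch of this algorithm, reusing rank1() and the field layout of the previous example (weights are negated because Python's heapq is a min-heap):

\begin{verbatim}
import heapq

def topk(T, values, coord, first, K, n, k, x1, x2, y1, y2):
    # k heaviest points in Q = [x1,x2] x [y1,y2]
    res, heap = [], []
    # entries: (negated weight, coordinates of the local maximum,
    # position of the node's K^2 child bits, submatrix origin,
    # submatrix side, level)
    heapq.heappush(heap, (-values[0], coord[0][0], 0, (0, 0), n, 0))
    while heap and len(res) < k:
        nw, c, children, (ox, oy), size, level = heapq.heappop(heap)
        if x1 <= c[0] <= x2 and y1 <= c[1] <= y2:
            res.append((c, -nw))      # next heaviest point inside Q
        if size == 1:
            continue
        sub = size // K
        for i in range(K * K):
            sx, sy = ox + (i // K) * sub, oy + (i % K) * sub
            if sx > x2 or sx + sub - 1 < x1 or sy > y2 or sy + sub - 1 < y1:
                continue              # submatrix disjoint with Q
            p = children + i
            if T[p] == 0:
                continue              # empty submatrix
            r = rank1(T, p)
            w = -nw - values[r]       # absolute maximum of the child
            if sub == 1:
                cc = (sx, sy)         # last level: implicit coordinates
            else:
                dx, dy = coord[level + 1][r - first[level + 1]]
                cc = (sx + dx, sy + dy)
            heapq.heappush(heap, (-w, cc, r * K * K, (sx, sy), sub,
                                  level + 1))
    return res
\end{verbatim}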
\subsubsection{Other supported queries} The $K^2$-treap can also answer basic range queries (i.e., report all the points that fall in $Q$). This is similar to the procedure on a $K^2$-tree, where the submatrices that intersect $Q$ are explored in a depth-first manner. The only difference is that we must also check whether the explicit points associated to the nodes fall within $Q$, and in that case report those as well. Finally, we can also answer {\em interval queries}, which ask for all the points in $Q$ whose weight is in a range $[w_1,w_2]$. To do this, we traverse the tree as in a top-$k$ range query, but we only output weights whose value is in $[w_1,w_2]$. Moreover, we discard submatrices whose maximum weight is below $w_1$, since none of their points can reach the interval.
\section{Answering Range Counting Queries}\label{sec:rangecounting}
Consider a binary matrix $BM[n \times n]$ where each cell can either be empty or contain data\footnote{It is easy to allow having more than one point per cell, by using the aggregated sums described in Section \ref{subsub_other}.}. In this case, a $K^2$-tree can be used to represent $BM$ succinctly while supporting range queries, as explained in Section \ref{sec_k2tree}. Obviously, the algorithm presented for range reporting can be optimized to count the number of elements in a range, instead of reporting such elements. In this section, we show how to augment the $K^2$-tree with additional data to further optimize those range counting queries. In Section \ref{sec:experiments} we show that this adds a small overhead in space, while drastically reducing the running time of those queries.
\subsection{Augmenting the $K^2$-tree} The augmented data structure stores additional information to speed up range counting queries. In Fig. \ref{fig:rck2tree} we show a conceptual $K^2$-tree in which each node has been annotated with the number of elements in its corresponding submatrix. Note that this is the same example as Fig.\ \ref{fig:ck2treap}, considering as non-empty cells those with weight larger than 0.
\begin{figure}
\caption{Storage of the conceptual tree in our data structures. On the top, the conceptual $K^2$-treap for the binary matrix in Fig. \ref{fig:matrix_bin}. On the bottom left, the conceptual $K^2$-tree that stores the topology of the matrix. On the bottom right, the bitmap that implements the $k^2$-tree, $T$, and the sequence of differentially encoded counting values, $counts$.}
\label{fig:rck2tree}
\end{figure}
This conceptual tree is traversed level-wise, reading the sequence of counts from left to right at each level. All these counts are stored in a sequence $counts$ using a variant of the differential encoding technique presented in Section \ref{sec:generaltech}. Let $v$ be a node of the $K^2$-tree, $children(v)$ the number of children of $v$, and $count(v)$ the number of elements in the submatrix represented by $v$. Then, $\overline{count(v')}=\frac{count(v)}{children(v)}$ represents the expected number of elements in the submatrix associated with each child $v'$ of $v$, assuming a uniform distribution. Thus, the count of each node $v'$ is stored as the difference of the actual count and its expected value. In the running example, the root has three children and there are 22 elements in the matrix. Each of the corresponding submatrices is expected to contain $\lfloor 22/3 \rfloor = 7$ elements whereas they actually contain 10, 7 and 5 elements, respectively. Hence, the differential encoding stores $10-7=3$, $7-7=0$, and $5-7=-2$.
The result of this differential encoding is a new sequence of values smaller than the original, but which may contain negative values. In order to map this sequence, in a unique and reversible way, to a sequence of non-negative values we use the folklore \emph{overlap and interleave} scheme, which maps a negative number $-i$ to the $i^{th}$-odd number ($2i-1$) and a positive number $j$ to the $j^{th}$ even number ($2j$). Finally, to exploit the small values while allowing efficient direct access to the sequence, we represent $counts$ with DACs~\cite{BLN12}.
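For illustration, the overlap-and-interleave mapping and its inverse are one-liners; applied to the differences $3, 0, -2$ of the running example they yield $6, 0, 3$:

\begin{verbatim}
def zigzag(v):
    # -i -> 2i - 1 (odd numbers), j >= 0 -> 2j (even numbers)
    return -2 * v - 1 if v < 0 else 2 * v

def unzigzag(u):
    return -(u + 1) // 2 if u % 2 else u // 2

print([zigzag(v) for v in (3, 0, -2)])   # [6, 0, 3]
\end{verbatim}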
As $counts$ corresponds to a level-wise traversal of the $K^2$-tree, it is not necessary to store this additional information for all the levels of the tree. In this way, we provide a parametrized implementation that sets the space overhead by defining the number of levels for which counting information is stored. This provides a space-time trade-off that we later study experimentally.
\subsubsection{Range counting queries}
The base of the range counting algorithm is the range reporting algorithm of the $K^2$-tree, Algorithm \ref{alg:range}. We modify this divide-and-conquer algorithm in order to navigate both the $K^2$-tree and $counts$ at the same time. Given a query range $Q=[x_1,x_2] \times [y_1,y_2]$, we start accessing the $K^2$-tree root, and set $c_0 = counts[0]$ and $result=0$. Then, the algorithm explores all the children of the root that intersect $Q$ and decodes the counting value of each node from the $counts$ sequence and the absolute counting value of the root. The process continues recursively for each child until it finds a completely empty submatrix, in which case we do not increment $result$, or a submatrix completely contained inside $Q$, in which case we increment $result$ with the counting value of that submatrix.
To clarify the procedure, let us introduce some notation and basic functions. We name the nodes with the position of their first bit in the bitmap $T$ that represents the $K^2$-tree. Then, $v=0$ is the root, $v=K^2$ is its first non-empty child, and so on. Recall that $rank_1(T,v) \cdot K^2$ gives the position in $T$ where the children of $v$ start and each of them is represented with $K^2$ bits. We can obtain the number of children of $v$ as $NumChildren(v)=rank_1(T,v+K^2-1)-rank_1(T,v-1)$. Non-empty nodes store their differential encoding in $counts$ in level order, so we must be able to compute the level order of a node in constant time. Node $v$ stores $K^2$ bits that indicate which children of $v$ are non-empty. If the $i^{th}$ bit is set to 1, then the level order of that child node is $rank_1(T,v+i)$, with $i\in[0,K^2-1]$.
Given a node $v$ with absolute counting value $c_v$ and $NumChildren(v)$ children, each child $v'$ is expected to contain $\overline{count(v')}=\frac{c_v}{NumChildren(v)}$ elements. Let $p$ be the level order number of child $v'$. Then, the absolute counting value of $v'$ can be computed as $c_{v'}=counts[p]+\overline{count(v')}$. Note that $counts$ is stored using DACs, which support direct access. We use the computed value $c_{v'}$ to recursively visit the children of $v'$.
Let us consider a query example $q=[0,1]\times[0,2]$. We start at the root with $c_0 = 22$ and $result=0$. The root has three children, but only the first one, stored at position 4 in $T$, intersects $q$. Each child is expected to represent $\lfloor 22/3 \rfloor = 7$ elements, so we set $c_4=counts[rank_1(T,0+0)]+7=3+7=10$. Similarly, we recurse on the first and third child of this node. On the first branch of the recursion, we process node 16 and set $c_{16}=counts[rank_1(T,4+0)]+\lfloor 10/4 \rfloor=counts[4]+2=2$. As the submatrix corresponding with this node is contained in $q$, we add 2 to the result and stop the recursion on this branch. On the other child, we have to recurse until the leaves in order to sum the other element to the result, and obtain the final count of 3.
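The whole procedure corresponds to the following sketch (our illustration: it reuses rank1() from the ranked-query examples, assumes counting information is kept for every level, and operates directly on the signed differential sequence, leaving the zigzag and DAC encodings aside):

\begin{verbatim}
def count(T, counts, K, n, x1, x2, y1, y2):
    # number of points in Q = [x1,x2] x [y1,y2]; counts[0] is the
    # total and the remaining entries are differential, in level order
    def rec(children, cv, ox, oy, size):
        if (x1 <= ox and ox + size - 1 <= x2 and
                y1 <= oy and oy + size - 1 <= y2):
            return cv                 # submatrix fully contained in Q
        sub = size // K
        nchild = sum(T[children:children + K * K])
        expected = cv // nchild       # expected count per child
        total = 0
        for i in range(K * K):
            sx, sy = ox + (i // K) * sub, oy + (i % K) * sub
            if sx > x2 or sx + sub - 1 < x1 or sy > y2 or sy + sub - 1 < y1:
                continue              # no overlap with Q
            p = children + i
            if T[p] == 0:
                continue              # empty submatrix
            r = rank1(T, p)           # level order of the child
            total += rec(r * K * K, counts[r] + expected, sx, sy, sub)
        return total
    return rec(0, counts[0], 0, 0, n)
\end{verbatim}

On the query $q=[0,1]\times[0,2]$ of the example above, this sketch reproduces the same decoding steps and returns 3.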
\subsubsection{Other supported queries}\label{subsub_other}
This data structure obviously supports all the queries that can be implemented on a $K^2$-tree, such as range reporting or emptiness queries. More interestingly, it can also support other types of queries with minor modifications. A natural generalization of range counting queries are aggregated sum queries. In this case, we consider a matrix $M[n\times n]$ where each cell can either be empty or contain a weight in the range $[0,d-1]$. We perform the same data partitioning on the conceptual binary matrix that represents the non-empty cells. In other words, we use a $K^2$-tree to represent the topology of the matrix. Then, instead of augmenting the nodes with the count of the non-empty cells in the corresponding submatrix, we store the sum of the weights contained in such submatrix. The same differential encoding used for range counting can be used to store these sums. In this case, however, the space-efficiency achieved by the data structure depends not only on the clustering of the data, but also on the distribution of the weights. The encoding achieves its best performance when the sums of the weights of all the children of a node are similar.
\section{Experiments and Results}\label{sec:experiments}
In this section we empirically evaluate the two types of queries studied in previous sections. As the datasets and evaluated solutions for both scenarios are quite different, we devote one subsection to each type of query: we first present the experiment setup (baselines and datasets), then an evaluation in terms of space usage, and finally a running time comparison.
All the data structures were implemented by ourselves and the source code is available at \url{http://lbd.udc.es/research/aggregatedRQ}. We ran all our experiments on a dedicated server with 4 Intel(R) Xeon(R) E5520 CPU cores at 2.27GHz 8MB cache and 72GB of RAM memory. The machine runs Ubuntu GNU/Linux version 9.10 with kernel 2.6.31-19-server (64 bits) and gcc 4.4.1. All the data structures were implemented in C/C++ and compiled with full optimizations.
All bitmaps employed use a representation that supports $rank$ and $select$ using $5\%$ of extra space. The wavelet tree used to implement the solution of Navarro et al.\ \cite{NNR13} is a pointerless version obtained from {\sc LIBCDS} (\url{http://www.github.com/fclaude/libcds}). This wavelet tree is augmented with an RMQ data structure at each level, which requires $2.37n$ bits and solves range maximum queries in constant time.
\subsection{Ranked range queries} \subsubsection{Experiment setup}
We use several synthetic datasets, as well as some real datasets where top-$k$ queries are of interest. Our synthetic datasets are square matrices where only some of the cells have a value set. We build different matrices varying the following parameters: the \emph{size} $s \times s$ of the matrix ($s=1024$, $2048$, $4096$, $8192$), the number of different weights $d$ in the matrix (16, 128, 1024) and the \emph{percentage} $p$ of cells that have a point (10, 30, 50, 70, 100\%). The distribution of the weights in all the datasets is uniform, and the spatial distribution of the cells with points is random. For example, the synthetic
dataset with $(s=2048,d=128,p=30)$ has size $2048 \times 2048$, 30\% of its cells have a value and their values follow a uniform distribution in $[0,127]$.
We also test our representation using real datasets. We extracted two different views from a real OLAP database (\url{https://www.fupbi.com}\footnote{The dataset belongs to SkillupChile\textsuperscript{\textregistered}, which allowed us to use it for our research.}) storing information about sales achieved per store/seller each hour over several months: $salesDay$ stores the number of sales per seller per day, and $salesHour$ the number of sales per hour. Huge historical logs are accumulated over time, and are subject to data mining processing for decision making. In this case, finding the places (at various granularities) with the most sales in a time period is clearly relevant. Table~\ref{table:real} shows a summary with basic information about the real datasets. For simplicity, in these datasets we ignore the cost of mapping real timestamps and seller ids to rows/columns in the table, and assume that the queries are given in terms of rows and columns.
\begin{table}[t] \begin{center} \caption{Description of the real datasets used, and space (in bits per cell) required to represent them with the compared data structures. \label{table:real} }{ \scalebox{0.8}{
\begin{tabular}{|c|r|r|r|r|r|r|} \hline Dataset & \#Sellers & Time instants & Number of & $K^2$-treap & $mk2tree$ & $wtrmq$ \\
& (rows)~ & (columns)~~~ & diff.\ values & (bits/cell)& (bits/cell)& (bits/cell) \\ \hline $SalesDay$ & 1314 & 471 & 297 & 2.48 & 3.75 & 9.08 \\ $SalesHour$ & 1314 & 6028 & 158 & 1.06 & 0.99 & 3.90 \\ \hline \end{tabular}} } \end{center} \end{table}
We compare the space requirements of the $K^2$-treap with a solution based on wavelet trees enhanced with RMQ structures \cite{NNR13} ($wtrmq$). Since our matrices can contain none or multiple values per column, we transform our datasets to store them using wavelet trees. The wavelet tree will store a grid with as many columns as values we have in our matrix, in column-major order. A bitmap is used to map the real columns with virtual ones: we append a 0 per new point and a 1 when the column changes. Hence, range queries in the $wtrmq$ require a mapping from real columns to virtual ones (2 $select_1$ operations per query), and the virtual column of each result must be mapped back to the actual value (a $rank_1$ operation per result).
We also compare our proposal with a representation based on constructing multiple $K^2$-trees, one per different value in the dataset. In this representation ($mk2tree$), top-$k$ queries are answered by querying consecutively the $K^2$-tree representations for the higher values. Each $K^2$-tree representation in this proposal is enhanced with multiple optimizations over the simple bitmap approach we use, like the compression of the lower levels of the tree (see~\cite{ktree} for a detailed explanation of this and other enhancements of the $K^2$-tree).
\subsubsection{Space comparison} We start by comparing the compression achieved by the representations. As shown in Table~\ref{table:real}, the $K^2$-treap outperforms the $wtrmq$ in the real datasets considered, using over 3.5 times less space. Structure $mk2tree$ is competitive with the $K^2$-treap and even uses slightly less space in the dataset $salesHour$, taking advantage of the relatively small number of different values in the matrix.
The $K^2$-treap also obtains the best space results in most of the synthetic datasets studied. Only in the datasets with very few different values ($d=16$) the $mk2tree$ uses less space than the $K^2$-treap. Notice that, since the distribution of values and cells is uniform, the synthetic datasets are close to a worst-case scenario for the $K^2$-treap and $mk2tree$. Fig.~\ref{fig:spaceSint} provides a summary of the space results for some of the synthetic datasets used. The left plot shows the evolution of compression with the size of the matrix. The $K^2$-treap is almost unaffected by the matrix size, as its space is around $t\log\frac{s^2}{t} = s^2\frac{p}{100}\log\frac{100}{p}$ bits, that is, constant per cell as $s$ grows. On the other hand, the $wtrmq$ uses $t\log s = s^2\frac{p}{100}\log s$ bits, that is, its space per cell grows logarithmically with $s$. Finally, the $mk2tree$ obtains poor results in the smaller datasets but it is more competitive on larger ones (some enhancements in the $K^2$-tree representations behave worse in smaller matrices). Nevertheless, notice that the improvements in the $mk2tree$ compression stall once the matrix reaches a certain size.
The right plot of Fig.~\ref{fig:spaceSint} shows the space results when varying the number of different weights $d$. The $K^2$-treap and the $wtrmq$ are affected only logarithmically by $d$. The $mk2tree$, instead, is sharply affected, since it must build a different $K^2$-tree for each different value: if $d$ is very small the $mk2tree$ representation obtains the best space results also in the synthetic datasets, but for large $d$ its compression degrades significantly.
As the percentage of cells set $p$ increases, the compression in terms of bits/cell (i.e., total bits divided by $s^2$) will be worse. However, if we measure the compression in bits/point (i.e., total bits divided by $t$), then the space of the $wtrmq$ is independent of $p$ ($\log s$ bits), whereas the $K^2$-treap and $mk2tree$ use less space as $p$ increases ($\log \frac{100}{p}$). That is, the space usage of the $wtrmq$ increases linearly with $p$, while that of the $K^2$-treap and $mk2tree$ increases sublinearly. Over all the synthetic datasets, the $K^2$-treap uses from 1.3 to 13 bits/cell, the $mk2tree$ from 1.2 to 19, and the $wtrmq$ from 4 to 50 bits/cell.
\begin{figure}
\caption{Evolution of the space usage with $s$ and $d$ in the synthetic datasets, in bits/cell (in the right plot, the two lines for the $K^2$-treap coincide).}
\label{fig:spaceSint}
\end{figure}
\subsubsection{Query processing} In this section we analyze the efficiency of top-$k$ queries, comparing our structure with the $mk2tree$ and the $wtrmq$. For each dataset, we build multiple sets of top-$k$ queries for different values of $k$ and different spatial ranges (we ensure that the spatial ranges contain at least $k$ points). All query sets are generated for fixed $k$ and $w$ (side of the spatial window). Each query set contains 1,000 queries where the spatial window is placed at a random position within the matrix.
Fig.~\ref{fig:topkSint} shows the time required to perform top-$k$ queries in some of our synthetic datasets, for different values of $k$ and $w$. The $K^2$-treap obtains better query times than the $wtrmq$ in all the queries, and both evolve similarly with the size of the query window. On the other hand, the $mk2tree$ representation obtains poor results when the spatial window is small or large, but it is competitive with the $K^2$-treap for medium-sized ranges. This is due to the procedure to query the multiple $K^2$-tree representations: for small windows, we may need to query many $K^2$-trees until we find $k$ results; for very large windows, the $K^2$-treap starts returning results in the upper levels of the conceptual tree, while the $mk2tree$ approach must reach the leaves; for some intermediate values of the spatial window, the $K^2$-treap still needs to perform several steps to start returning results, and the $mk2tree$ representation may find the required results in a single $K^2$-tree. Notice that the $K^2$-treap is more efficient when no range limitations are given (that is, when $w = s$), since it can return after exactly $k$ iterations. Fig.~\ref{fig:topkSint} only shows the results for two of the datasets, but similar results were obtained in all the synthetic datasets studied.
\begin{figure}
\caption{Times (in microseconds per query) of top-$k$ queries in synthetic datasets for $k=10$ and $k=100$ and range sizes varying from 4 to 4,096. The number of different weights $d$ in the matrix is 128 on the left graph and 1,024 on the right graph, while $s$ and $p$ remain fixed. We omit the lines connecting the points for $wtrmq$ variants, as they produce several crosses that hamper legibility.}
\label{fig:topkSint}
\end{figure}
Next we query our real datasets. We start with the same $w \times w$ queries as before, which filter a range of rows (sellers) and columns (days/hours). Fig.~\ref{fig:topkRealesVentana} shows the results of these range queries. As we can see, the $K^2$-treap outperforms both the $mk2tree$ and $wtrmq$ in all cases. As in the synthetic datasets, the $mk2tree$ obtains poor query times for small ranges but it is better in larger ranges.
\begin{figure}
\caption{Query times (in microseconds per query) of top-$k$ queries in the real datasets $SalesDay$ (left) and $salesHour$ (right) for $k=1$, $k=5$ and $k=50$, and range sizes varying from 4 to 100.}
\label{fig:topkRealesVentana}
\end{figure}
We also run two more specific sets of queries that may be of interest in many datasets, as they restrict only the range of sellers or the time periods, that is, only one of the dimensions of the matrix. Row-oriented queries ask for a single row (or a small range of rows) but do not restrict the columns, and column-oriented queries ask for a single column. We build sets of 10{,}000 top-$k$ queries for random rows/columns with different values of $k$. Fig.~\ref{fig:topkRealesVentana2} (left) shows that in column-oriented queries the $wtrmq$ is faster than the $K^2$-treap for small values of $k$, but our structure is still faster as $k$ grows. The reason for this difference is that in ``square'' range queries, the $K^2$-treap only visits a small set of submatrices that overlap the region; in row-oriented or column-oriented queries, the $K^2$-treap is forced to check many submatrices to find only a few results. The $mk2tree$ suffers from the same problem, being unable to efficiently filter the matrix, and obtains the worst query times in all cases.
In row-oriented queries (Fig.~\ref{fig:topkRealesVentana2}, right) the $wtrmq$ is even more competitive, obtaining the best results in many queries. The reason for the differences with column-oriented queries in the $wtrmq$ is the mapping between real and virtual columns: column ranges are expanded to much longer intervals in the wavelet tree, while row ranges are left unchanged. Notice anyway that our structure is still competitive unless $k$ is very small.
\begin{figure}
\caption{Query times (in microseconds per query) of column-oriented (left) and row-oriented (right) top-$k$ queries on the real datasets for $k$ varying from 1 to 100.}
\label{fig:topkRealesVentana2}
\end{figure}
\subsection{Range counting queries} \subsubsection{Experiment setup}
In this evaluation, we use grid datasets coming from three different real domains: Geographic Information Systems (GIS), Social Networks (SN) and Web Graphs (WEB). For GIS data we use the Geonames dataset\footnote{http://www.geonames.org}, which contains the geographic coordinates (latitude and longitude) of more than 6 million populated places, and converted it into three grids with different resolutions: Geo-sparse, Geo-med, and Geo-dense. The higher the resolution, the sparser the matrix. These datasets allow for evaluating the influence of data sparsity in the different proposals. For SN we use three social networks (dblp-2011, enwiki-2013 and ljournal-2008) obtained from the Laboratory for Web Algorithmics\footnote{{http://law.di.unimi.it}}~\cite{BoVWFI,BRSLLP}. Finally, in the WEB domain we consider the grid associated with the adjacency matrix of three Web graphs (indochina-2004, uk-2002 and uk-2007-5) obtained from the same Web site. The clustering in these datasets is very dissimilar. In general, GIS datasets do not present many clusters, whereas data points in the WEB datasets are highly clustered. SN represents an intermediate collection in terms of clustering.
In this experiment, we compare our proposal to speed up range counting queries on a $K^2$-tree, named $rck2tree$, with the original $K^2$-tree. Recall from Section~\ref{sec:rangecounting} that we augment the $K^2$-tree with additional data in order to speed up this type of queries. Thus, in this evaluation we show the space-time trade-off offered by the $rck2tree$ structure, which is parametrized by the number of levels of the original $K^2$-tree augmented with counting information. As a baseline from the succinct data structures area, we include a representation of grids based on wavelet trees, named \emph{wtgrid} in the following discussion. This representation was described in Section \ref{subsec:wt} and an algorithm to support range counting was sketched in Section \ref{sec:relwork}. As explained in Section \ref{subsec:wt}, this representation requires a bitmap to map from a general grid to a grid with one point per column. In our experiments we store this bitmap with either plain bitmaps, RRR or sd-arrays, whichever requires less space. As for the wavelet tree itself, we use a balanced tree with just one pointer per level and the bitmaps of each level are compressed with RRR. In other words, we use a configuration of this representation that aims to reduce the space. Note, however, that we use the implementation available in {\sc Libcds} \cite{CN08}, which uses $O(alphabet\_size)$ counters to speed up queries. As we will show in the experiments, this data structure, even with the compression of the bitmaps that represent the nodes of the wavelet tree, does not take full advantage of the existence of clusters in the data points.
Table \ref{tab:space} shows the main characteristics of the datasets used: name of the dataset, size of the grid ($u$)\footnote{Note that, unlike the grids used in the previous scenario, these are square grids, and thus $u$ represents both the number of rows and columns.}, number of points it contains ($n$) and the space achieved by the baseline $wtgrid$, by the original $K^2$-tree and by different configurations of our $rck2tree$ proposal. Unlike the previous scenario, the space is measured in bits per point because these matrices are very sparse, which results in very low values of bits per cell.
\begin{table}[t] \begin{center} \caption{Description of the real datasets used, and space (in bits per point) required to represent them with the compared data structures.\label{tab:space} }{ \scalebox{0.6}{
\begin{tabular}{| c | l | r | r | r | r | r | r | r |} \hline
Dataset & Type & Grid (u) & Points (n) & wtgrid & $K^2$-tree & $rck2tree^4$ & $rck2tree^8$ & $rck2tree^{16}$\\
& & & & (bits/point) & (bits/point) & (bits/point) & (bits/point) & (bits/point) \\
\hline Geo-dense & GIS & 524,288 & 6,049,875 & 17.736 & 14.084 & 14.085 & 14.356 & 18.138 \\ Geo-med & GIS & 4,194,304 & 6,080,640 & 26.588 & 26.545 & 26.564 & 29.276 & 36.875 \\ Geo-sparse & GIS & 67,108,864 & 6,081,520 & 44.019 & 41.619 & 41.997 & 48.802 & 56.979 \\\hline dblp-2011 & SN & 986,324 & 6,707,236 & 19.797 & 9.839 & 9.844 & 10.935 & 13.124 \\ enwiki-2013 & SN & 4,206,785 & 101,355,853 & 19.031 & 14.664 & 14.673 & 16.016 & 19.818 \\ ljournal-2008 & SN & 5,363,260 & 79,023,142 & 20.126 & 13.658 & 13.673 & 15.011 & 18.076 \\\hline indochina-2004 & WEB & 7,414,866 & 194,109,311 & 14.747 & 1.725 & 1.729 & 1.770 & 2.13 \\ uk-2002 & WEB & 18,520,486 & 298,113,762 & 16.447 & 2.779 & 2.797 & 2.888 & 3.451 \\ uk-2007-5 & WEB & 105,896,555 & 3,738,733,648 & 16.005 & 1.483 & 1.488 & 1.547 & 1.919 \\%\hline
\hline \end{tabular}} } \end{center} \end{table}
\subsubsection{Space comparison}
As expected, the representation based on wavelet trees, $wtgrid$, is not competitive in terms of space, especially when the points in the grid are clustered. Even though the nodes of the wavelet tree are compressed with RRR, this representation is not able to capture the regularities induced by the clusters. In the WEB domain, where the points are very clustered, the $wtgrid$ representation requires up to 10 times the space of the $K^2$-tree. Therefore, the latter allows for the processing in main memory of much larger datasets. In domains where the data points are not that clustered, space-savings are still significant but not as outstanding as in the previous case.
Second, we analyze the space overhead incurred by the $rck2tree$ in comparison with the original $K^2$-tree. As mentioned above, this overhead depends on the number of levels of the $K^2$-tree that are augmented with additional range counting information. In these experiments, we show the results of three configurations in which 4, 8 and 16 levels were augmented, respectively. As Table \ref{tab:space} shows, the space overhead is almost negligible for the $rck2tree^4$ and it ranges from 25\% to 40\% for $rck2tree^{16}$. In the next section we will show the influence of this extra space in the performance of the data structure to solve range counting queries.
It is interesting to notice that the space overhead is lower in the domains where the $K^2$-tree performs best. The $K^2$-tree performs better in sparse domains where data points are clustered \cite{ktree}, for example, in Web graphs. In the largest WEB dataset, uk-2007-5, the $K^2$-tree requires about 1.5 bits per point, and we can augment the whole tree with range counting information using less than 0.5 extra bits per point (an overhead of less than 30\%). Sparse and clustered matrices result in fewer values being stored in the augmented data structure (as in the original $K^2$-tree, we do not need to store information for submatrices full of zeros). In Fig. \ref{fig:rc_space} we show the space overhead in two of our dataset collections, GIS and WEB.
\begin{figure}
\caption{Space overhead (in bits per point) in the real datasets GIS (left) and WEB (right). Each bar, from bottom to top, represents the $K^2$-tree and the additional space used by the $rck2tree$ with 4, 8, and 16 levels, respectively. The additional space required by $rck2tree^4$ is almost negligible.}
\label{fig:rc_space}
\end{figure}
Note also that, unlike the $K^2$-tree, the space overhead does not increase drastically in sparse non-clustered datasets (e.g., Geo-sparse). This is because isolated points waste $K^2$ bits per level in the original $K^2$-tree, which is much more than the overhead incurred by the range counting fields, which use approximately two bits per level. The reason is that these fields represent the difference between the expected number of elements in the submatrix, 1, and the actual number, which is also 1.\footnote{Recall that these data are represented using DACs, which require at least two bits.} For example, in Geo-sparse the $K^2$-tree uses more than 40 bits per point, which is much more than the (roughly) 2 bits per point in the sparse and clustered WEB datasets. However, the space overhead of the $rck2tree^{16}$ (with respect to the $K^2$-tree) is about 40\% in Geo-sparse and 30\% in uk-2007-5 (i.e., a difference of 10 percentage points). In the configurations that store range counting values only for some levels of the $K^2$-tree, the difference is even smaller. This is expected because there are fewer isolated points in the higher levels.
\subsubsection{Query processing}
In this section we analyze the efficiency of range counting queries, comparing our augmented $K^2$-tree with the original data structure and with the $wtgrid$. For each dataset, we build multiple sets of queries with different query selectivities (i.e., size of spatial ranges). All the query sets were generated for fixed query selectivity. A query selectivity of $X\%$ means that each query in the set covers $X\%$ of the cells in the dataset. Each query set contains 1,000 queries where the spatial window is placed at a random position in the matrix.
As in the space comparison, we show the results of three different configurations of our augmented data structure, in which 4, 8, and 16 levels are augmented with range counting data. Fig. \ref{fig:rc_time} shows the time required to perform range counting queries in some of our real datasets, for different values of query selectivity. For each domain (GIS, SN and WEB), we only show the results of the two most different datasets, as the others do not alter our conclusions.
\begin{figure}
\caption{Query times (in microseconds per query) of range counting queries in two examples of each type of real dataset: GIS (top), SN (middle) and WEB (bottom) for range queries sizes varying from $0.001\%$ to $1\%$.}
\label{fig:rc_time}
\end{figure}
Our augmented data structure consistently outperforms the original $K^2$-tree for all domains and query selectivities. The only exception is the $rck2tree^4$ in the two SN datasets and Geo-dense for very selective queries (i.e., with the smallest areas). The influence of the query selectivity on the results is evident. The larger the query, the higher the impact of the additional range counting data on the performance of the data structure. Larger queries are expected to stop the recursion of the range counting algorithm at higher levels of the $K^2$-tree because those queries are more likely to contain the whole area covered by nodes of the $K^2$-tree. Recall that a node of the $K^2$-tree at level $i$ represents $u^2/(K^2)^i$ cells. In our experiments we use a configuration of the $K^2$-tree in which the first six levels use a value of $K_1=4$ (thus partitioning the space into $K_1^2=16$ regions) and the remaining levels use a value of $K_2=2$. Therefore, for the $rck2tree^4$ to improve the performance of the range counting queries, those queries must contain at least $u^2/(4^2)^4= u^2/2^{16}$ cells. In the $rck2tree^8$ and $rck2tree^{16}$, which contain range counting data for more levels of the $K^2$-tree, this value is much lower, and thus the recursion can be stopped early even for small queries. For larger queries, the performance improvement reaches several orders of magnitude in some datasets (note the logarithmic scale).
In most datasets, the performance of $rck2tree^8$ and $rck2tree^{16}$ is very similar, so the former is preferable as it requires less extra space. Hence, in general, we can recommend the $rck2tree^8$ configuration to speed up range counting queries. If the queries are not very selective and there are memory constraints, the $rck2tree^4$ can be an alternative as it requires almost the same space as the original $K^2$-tree.
The $wtgrid$ is not only consistently faster than the data structures based on the $K^2$-tree, but also less sensitive to the size of the query. However, as we showed above, it also requires significantly more space. To reinforce these conclusions, Fig. \ref{fig:rc_tradeoff} shows the space-time trade-offs of the different configurations. We selected two representative datasets that were not used in the previous experiment, enwiki-2013 (SN) and uk-2002 (WEB), and two query selectivities for each of them, $0.001\%$ and $0.1\%$. Each point in the lines named $rck2tree$ represents a different configuration with 4, 8, and 16 levels augmented with range counting information, respectively from left to right.
\begin{figure}
\caption{Space-time trade-offs offered by the compared range counting variants on examples of SN (left) and WEB (right) datasets.}
\label{fig:rc_tradeoff}
\end{figure}
These graphs show that the $rck2tree^8$ configuration offers the most interesting trade-off, as it requires just a bit more space than the original $K^2$-tree and it speeds up range counting queries significantly. This effect is more evident for larger queries, but even for very selective queries the improvement is significant. The $wtgrid$ competes in a completely different area of the trade-off, being the fastest data structure, but also the one that requires the most space.
\section{Conclusions}\label{sec:conclusions}
We have introduced a technique to solve aggregated 2D range queries on grids, which requires little space when the data points in the grid are clustered. We use a $K^2$-tree to represent the topology of the data (i.e., the positions of the data points) and augment each node of the tree with additional aggregated information that depends on the operation to be supported. The aggregated information in each node is differentially encoded with respect to its parent in order to reduce the space overhead. To illustrate the applicability of this technique, we adapted it to support two important types of aggregated range queries: ranked and counting range queries.
In the case of ranked queries, we named the resulting data structure $K^2$-treap. This data structure performs top-$k$ range queries up to 10 times faster than current state-of-the-art solutions and requires as little as 30\% of their space, both in synthetic and real OLAP datasets. This holds even on uniform distributions, which is the worst scenario for $K^2$-treaps.
For range counting queries, our experimental evaluation shows that with a small space overhead (below 30\%) on top of the $K^2$-tree, our data structure answers queries several orders of magnitude faster than the original $K^2$-tree, especially when the query ranges are large. These results are consistent in the different databases tested, which included domains with different levels of clustering in the data points. For example, in Web graphs the data points are very clustered, which is not the case in GIS applications. The comparison with a wavelet tree-based solution shows that, although the wavelet tree is faster, our proposal requires less space (up to 10 times less when the points are clustered). Thus, we provide a new alternative in the space-time trade-off which allows for the processing of much larger datasets.
Although we have presented the two types of queries separately, this does not mean that an independent data structure would be required for each type of aggregated query. The topology of the data can be represented by a unique $K^2$-tree and each type of aggregated query just adds additional aggregated and differentially encoded information. However, some specific optimizations on the $K^2$-tree, such as the one presented for ranked range queries, may not be possible for all types of queries.
The technique can be generalized to represent grids in higher dimensions, which is essential in some domains such as OLAP databases \cite{Sarawagi97}, by replacing our underlying $K^2$-tree with its generalization to $d$ dimensions, the $K^d$-tree \cite{ABNP13} (not to be confused with $kd$-trees \cite{Ben75}). The algorithms stay identical, but an empirical evaluation is left for future work. In the worst case, a grid of $t$ points on $[n]^d$ will require $O(t \log \frac{n^d}{t})$ bits, which is of the same order as the data, and much less space would be used on clustered data. Instead, an extension of the wavelet tree would require $O(n\log^d n)$ bits, which quickly becomes impractical. Indeed, any structure able to report the points in a range in polylogarithmic time requires $\Omega(n(\log n/\log\log n)^{d-1})$ words of space \cite{Cha90}, and with polylogarithmic space one needs time at least $\Omega(\log n(\log n/\log\log n)^{\lfloor d/2\rfloor-2})$ \cite{AAL12}. Since top-$k$ queries can be used to report all the points in a range, there is no hope of obtaining good worst-case time and space bounds in high dimensions, and thus heuristics like $K^d$-treaps are the only practical approaches ($kd$-trees do offer linear space, but their time guarantee is rather loose, $O(n^{1-1/d})$ for $n$ points on $[n]^d$).
\end{document} | arXiv |
Polygenic risk prediction based on singular value decomposition with applications to alcohol use disorder
James J. Yang, Xi Luo, Elisa M. Trucco & Anne Buu
Background/aim
The polygenic risk score (PRS) shows promise as a potentially effective approach to summarize genetic risk for complex diseases such as alcohol use disorder that is influenced by a combination of multiple variants, each of which has a very small effect. Yet, conventional PRS methods tend to over-adjust confounding factors in the discovery sample and thus have low power to predict the phenotype in the target sample. This study aims to address this important methodological issue.
Method

This study proposed a new method to construct PRS by (1) approximating the polygenic model using a few principal components selected based on eigen-correlation in the discovery data; and (2) conducting principal component projection on the target data. Secondary data analysis was conducted on two large-scale databases: the Study of Addiction: Genetics and Environment (SAGE; discovery data) and the National Longitudinal Study of Adolescent to Adult Health (Add Health; target data) to compare the performance of the conventional and proposed methods.
Result and conclusion
The results show that the proposed method has higher prediction power and can handle participants from different ancestry backgrounds. We also provide practical recommendations for setting the linkage disequilibrium (LD) and p value thresholds.
Background

Genome-wide association studies (GWAS) have been used to identify variants that are significantly associated with the phenotype of interest. Yet, for complex diseases such as substance use disorders (SUD), the phenotype tends to be influenced by a combination of multiple genes or variants, each of which has a very small effect. As a result, many GWAS with small to moderate sample sizes fail to identify important variants even though the phenotype has been shown to be highly heritable. This phenomenon is called the "missing heritability problem" [1]. Although increasing the sample size of GWAS or conducting a meta-analysis on several studies are possible remedies to reach sufficient statistical power, they may not be feasible in some practical settings. An alternative approach that shows promise is the polygenic risk score (PRS), also known as genetic risk score or risk profile score [2]. PRS derived its name from the notion that complex diseases are highly polygenic [3] with the effect of each variant being very small. To deal with this issue, the PRS approach proposes an additive model to summarize the marginal effects of many variants to quantify genetic influences on a particular phenotype [4]. Thus, PRS represents the distribution of aggregated genetic liability that can be used to profile the genetic contribution to the phenotype. To our knowledge, Wray et al. [5] was the first study to apply the PRS approach in GWAS.
PRS has been applied to therapeutic intervention, disease screening, and life planning [6]. For example, PRS was used to predict onset and early patterns of heavy episodic drinking in males [7]. The well-known Adolescent Brain Cognitive Development Study (https://abcdstudy.org/) also demonstrated its ability to predict cognitive performance in a large sample of 9–10-year-old children in the US population [8]. Further, PRS was integrated with family history and traditional risk factors to improve the screening for coronary heart disease [9].
In spite of the above potential applications of PRS, how to accurately estimate PRS remains an open research question. Because the allele frequencies of variants and the linkage disequilibrium (LD) patterns vary across different populations [10], constructing PRS without considering these two key factors is likely to result in either bias or lower power. According to a recent comprehensive review of existing PRS studies [11], 67% of studies included exclusively European ancestry participants; 19% included only East Asian ancestry participants; and only 3.8% were among cohorts of African, Hispanic, or Indigenous individuals. Importantly, the same study showed that the predictive performance of European ancestry-derived PRS is lower in non-European ancestry samples with the worst performance found among African ancestry samples. This is the so-called transferability issue with PRS [12].
In this paper, we review conventional PRS methods and identify important methodological issues. To deal with these issues, we propose a PRS method based on lower rank approximation of the observed genotypes and eigen-correlation selection. Empirical data collected from the substance use field are chosen to demonstrate the applications of these PRS methods because substance use disorders (SUD) are highly heritable [13,14,15] and many variants have been identified to be associated with SUD [16]. Secondary data analysis is conducted to compare different PRS methods in terms of their performance. The results also shed some light on future applications of these methods.
Review of conventional PRS methods
In general, the PRS method requires two independent data sets: the discovery data and the target data. The discovery data is used to identify the set of variants associated with the phenotype and estimate their effects. These estimated effects are later applied to the genotypes of the participants in the target data to calculate their PRS.
Let \({\hat{\beta }}_j\) be the marginal effect size for Variant j (\(j=1,\ldots,m\)) estimated from the discovery data; and \(g_{ij}\) be the genotype coded as the number of the effect allele at Variant j for Individual i from the target data. The PRS for Individual i is calculated as
$$S_i=\sum_{j=1}^m \hat{\beta}_j g_{ij}. \tag{1}$$
Equation (1) indicates that the PRS is an additive function of the genotype \(g_{ij}\). The PRS for Individual i in the target data is the weighted sum of his/her genotype \(g_{ij}\) with the weights \({\hat{\beta }}_j\) estimated from the discovery data.
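As a toy illustration (the numbers below are hypothetical, not from any study), Eq. (1) is simply a matrix-vector product between the target genotype matrix and the vector of estimated effects; a minimal Python sketch:

import numpy as np

beta_hat = np.array([0.12, -0.05, 0.30, 0.08])  # effects from the discovery data
G = np.array([[0, 1, 2, 1],                     # genotypes g_ij of three
              [2, 0, 1, 0],                     # individuals in the target data
              [1, 1, 0, 2]])
S = G @ beta_hat                                # PRS of Eq. (1)
print(S)                                        # [0.63 0.54 0.23]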
Based on Eq. (1), the performance of PRS depends on the set of variants \(g_{ij}\) and the effect size of each variant \({\hat{\beta }}_j\). In a typical setting of GWAS, the number of variants well exceeds 1 million whereas the number of participants is usually between 1000 and 10,000. If all the variants are included for calculating PRS, it is not feasible to jointly model all variants and estimate their effects accurately. In fact, the majority of variants are not likely to be associated with the phenotype.
Choi et al. [17] summarized various approaches for PRS construction. Among them, the clumping and thresholding approach is widely used because of its simplicity and relatively good performance. In addition, this approach only requires summary statistics rather than the original genetic data, which are usually not publicly accessible due to confidentiality issues. The clumping step identifies the variant with the strongest association with the phenotype and removes neighboring variants that are in linkage disequilibrium with it. Thus, it produces a subset of variants which are in linkage equilibrium with one another. The thresholding step further reduces the number of variants identified in the clumping step by keeping only those variants whose p values are smaller than a given threshold. The optimal values of clumping and thresholding are usually determined by a model selection procedure. The details of these steps can be found in Choi et al. [17], and this approach was adopted by the popular PRS software: PRSice [18]. Another popular approach, based on a Bayesian model, takes the linkage disequilibrium among variants into account and models the fraction of causal variants in the prior distribution. The Markov chain Monte Carlo method is used to estimate the shrinkage effects of variants. This approach was implemented in LDpred [19] and its improved version: LDpred2 [20].
Important methodological issues of PRS
The performance of PRS depends on not only the chosen variants but also the quality of the estimates of their effects. The traditional PRS construction usually follows the method described in Purcell et al. [21]. The first step is to separate discovery samples into ancestrally homogeneous subgroups. The next step is to derive a set of principal components (e.g., the first 10) in each group and add them as covariates in the regression model to estimate the effect of each variant. Some researchers further adjusted for additional covariates such as age and gender. The reason is that these leading principal components are confounded with population stratification or cryptic relatedness. Adding these covariates could correct for these effects so that the adjusted effect of each variant is not biased.
Yet, the \(R^2\) value of the resultant PRS following this practice is usually only around 0.64–1.1% [7] in alcohol use behaviors, although in neuropsychiatric diseases up to 5–6% has been reported [22]. In fact, recent studies have shown the range of \(R^2\) to be 0.5–3% [23,24,25,26]. These relatively small \(R^2\) values raise a legitimate concern that the estimation of marginal effects of variants may not adequately reflect the polygenic contribution to the phenotypes. Specifically, the estimation may have been over-adjusted. For example, if the principal components derived from the genotypes of the discovery sample are highly correlated with race and ethnicity, which happen to be strong predictors of the phenotype, adjusting for the principal components would eliminate not only the effect of ethnicity but also the power to predict the phenotype using the adjusted variant effects. Furthermore, because the number of variants is much larger than the sample size, including more variants does not necessarily increase the prediction accuracy of the PRS estimate [27]. Thus, how to choose an informative subset of variants is critical. The present study proposes a new PRS method to deal with these issues.
The proposed PRS method based on principal component projection
Polygenic model
Suppose the discovery genotype data are organized as an \(n\times m\) matrix A with each row corresponding to an individual and each column to a variant. Thus, the cell \(g_{ij} (\in A)\) represents the genotype of Individual i on Variant j. For SNP data, \(g_{ij}\) is coded as 0, 1, or 2 to reflect the number of the effect allele. Before calculating the principal components, \(g_{ij}\) is normalized as \((g_{ij}- 2p_j)/\sqrt{2p_j(1-p_j)}\), where \(p_j\) is the allele frequency of Variant j [28]. Missing values are imputed with 0. The resulting normalized matrix of A is denoted by Z.
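A minimal numpy sketch of this normalization (our illustration; estimating \(p_j\) as half of the mean genotype is an assumption, and monomorphic variants with \(p_j\in\{0,1\}\) are assumed to have been filtered out beforehand):

import numpy as np

def normalize(A):
    # A: n x m genotype matrix with entries 0/1/2 and np.nan for missing values
    p = np.nanmean(A, axis=0) / 2.0   # estimated allele frequency per variant
    Z = (A - 2.0 * p) / np.sqrt(2.0 * p * (1.0 - p))
    return np.nan_to_num(Z)           # impute missing entries with 0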
The effects of genotypes on the phenotypes of n subjects \(\varvec{y} = (y_1,\ldots ,y_n)^T\) can be characterized using the following linear random effect model:
$$\varvec{y} = \mu + Z\varvec{b} + {\varvec{e}} \tag{2}$$
where \(\mu\) is the intercept, Z is the normalized matrix of genotypes, \(\varvec{b} \sim N(0,\sigma _a^2 I)\) is the random effect, and \({\varvec{e}} \sim N(0,\sigma _e^2 I)\) is the error. The genetic similarity matrix (or genetic relatedness matrix) among the n individuals is defined as \(K= ZZ^T/m\). Equation (2) can then be written as
$$\varvec{y} = \mu + {\varvec{g}} + {\varvec{e}} \tag{3}$$
where \({\varvec{g}} \sim N(0,\sigma _g^2K)\); and \(\sigma _g^2 = m\sigma ^2_a\) is the variance of all the additive genetic effects. When the matrix K is known or can be derived from the pedigree of the n individuals, we can estimate the parameters in Eq. (2) or (3) based on the genotypes and phenotypes. However, when the matrix K needs to be estimated from the genotypes themselves, the number of unknown parameters exceeds the number of data points, so the proposed linear random effect model ends up overfitting the data [29]. In that case, the estimates of the parameters in Eq. (3) are biased.
Using the singular value decomposition (SVD), the matrix Z can be expressed as \(Z=U\Lambda V^T\), where U and V are both orthogonal matrices and \(\Lambda\) is a rectangular diagonal matrix with non-negative singular values \((\lambda _1,\lambda _2,\ldots )\) on the diagonal. In practice, we rearrange the columns of U and V and the \(\lambda\)'s so that \(\lambda _1 \ge \lambda _2\ge \ldots\). Following the convention of GWAS analysis, we define the principal components of Z as the column vectors of the left singular matrix U and the eigenvalues of Z as the squares of the singular values of Z. Equation (2) can thus be written as
$$\begin{aligned} \varvec{y}&= \mu + U\Lambda V^T\varvec{b}+{\varvec{e}} \end{aligned}$$
If we define \(\beta = \Lambda V^T\varvec{b}\), then we have
$$\begin{aligned} \varvec{y}= \mu + U \beta + {\varvec{e}}. \end{aligned}$$
Hence, the linear random effects model (Eq. 3) can be written as a linear regression model on the principal components U (Eq. 4). In addition, whether the principal components in Eq. (4) are modeled as random or as fixed effects, the underlying regression model is the same [30].
To address the issue of more parameters than data points in Eq. (4), we propose to use a subset of principal components in the regression model to approximate the full model as follows:
$$\begin{aligned} \varvec{y}= \mu + \sum _{k \in S}u_k\gamma _k +{\varvec{e}} \end{aligned}$$
The details about how to select the indexes in the set S and the variants used to estimate \(u_k\) are described in the following sections. We also demonstrate how to estimate the parameters in Eq. (5) based on the discovery data and apply them to the target data.
Variant selection
The SVD of the genotype matrix Z does not require information about the phenotypes. Since we propose to use the principal components (i.e., the left singular vectors) as predictors of the phenotypes, choosing variants significantly associated with the phenotypes and using these variants to derive the principal components should strengthen the association in Eq. (5). For this reason, we propose a two-step approach to select variants based on the marginal p value of each variant's linear association with the phenotype via a simple linear regression. The first step, LD-based clumping, selects the most significant variant in a region and removes other variants in the same region that are in LD with the chosen variant. This process is repeated for all regions. After clumping, the final set of variants is in approximate linkage equilibrium. The above procedure is carried out by the plink program with the command options --clump-kb 500 --clump-p1 1 --clump-r2 \(\rho\) (where \(\rho\) is the LD threshold), so that variants within 500 kb are in approximate linkage equilibrium after clumping. In the second step, a subset of variants is chosen if their p values are smaller than the threshold \(\theta\). In this study, we evaluate different values of \(\rho\) and \(\theta\) in terms of the prediction power of the PRS (see details in the results section).
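A vectorized sketch of the marginal p values used in the second step is given below; it assumes the clumping step has already been carried out externally (e.g., with plink as above), and Z_clumped, y, and theta are placeholder names:

```python
import numpy as np
from scipy import stats

def marginal_pvalues(z, y):
    """Two-sided p value for the slope of the simple linear regression
    of y on each column of z (equivalent to testing cor(z_j, y) = 0)."""
    n = len(y)
    yc = (y - y.mean()) / y.std()
    zc = (z - z.mean(axis=0)) / z.std(axis=0)
    r = zc.T @ yc / n                        # marginal correlations
    t = r * np.sqrt((n - 2) / (1.0 - r ** 2))
    return 2.0 * stats.t.sf(np.abs(t), df=n - 2)

# Second step: keep variants whose p values pass the threshold theta.
# keep = marginal_pvalues(Z_clumped, y) < theta
# Z_sel = Z_clumped[:, keep]
```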
Principal component selection
Once the variants are selected, the next critical step is to determine the number of principal components (left singular vectors) used in Eq. (5). The minimum requirement for a model to be estimable is that the number of principal components is smaller than the sample size. However, too many principal components may increase the variance in the estimates. The common practice is to choose the number based on a fixed number (e.g., 10) or based on the eigenvalues greater than a fixed threshold. These methods, however, do not consider the correlation between each principal component and phenotype. We propose an alternative approach that takes this into account.
Define eigen-correlation (EigenCorr) as the correlation between a principal component (\(u_k\)) and the phenotype (\(\varvec{y}\)) multiplied by the corresponding singular value (\(\lambda _k\)): \(\text {EigenCorr}_k=\text{cor}(u_k, \varvec{y})\lambda _k.\) Lee et al. [31] showed that the sum of all squared correlations between each variant and the phenotype is equal to the sum of all squared EigenCorr's. Since the majority of principal components are uncorrelated with the phenotype, \(\text {cor}(u_k,\varvec{y})\) approximately follows the distribution of \(t/\sqrt{n-2+t^2}\), where t has a t-distribution with \(n-2\) degrees of freedom. In PRS studies, the sample size n is usually 1000 or larger in the training data, so the 95% confidence interval for \(\text {cor}(u_k,\varvec{y})\) lies within \((-0.062, 0.062)\) when the principal component and the phenotype are uncorrelated. Among the singular values calculated from the normalized genotype matrix, the largest depends on n and m. We simulated GWAS data with a common setting of \(n=1000\) and \(m =100,000\) and found that the largest singular value is less than 5 and the majority of values are around 1 or smaller. Thus, the squared eigen-correlations are less than 0.1 (\(0.062^2\times 5^2 \approx 0.1\)) for most principal components. Based on these results, we propose to include an index in the set S in Eq. (5) when the corresponding squared EigenCorr is above 0.1.
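The following short sketch (with hypothetical names U, lam, and y for the left singular vectors, singular values, and phenotype) shows one way to compute the squared EigenCorr values and the index set S:

```python
import numpy as np

def squared_eigencorr(u, lam, y):
    """Squared EigenCorr_k = (cor(u_k, y) * lambda_k)^2, where u_k is
    the k-th column of u and lam[k] the corresponding singular value."""
    yc = y - y.mean()
    uc = u - u.mean(axis=0)
    cor = (uc.T @ yc) / (np.linalg.norm(uc, axis=0) * np.linalg.norm(yc))
    return (cor * lam) ** 2

# Components retained for Eq. (5), using the proposed 0.1 cut-off:
# S = np.flatnonzero(squared_eigencorr(U, lam, y) > 0.1)
```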
Principal component projection
The SVD of Z is \(Z=U\Lambda V^T\) where Z is an \(n\times m\) matrix. When m is larger than n, a direct calculation of SVD is time consuming. However, if the purpose is to find the first few columns of U matrix, we can first calculate \(\Phi = ZZ^T\) and then use the spectral decomposition on \(\Phi\) to calculate its eigenvectors \((u_1,u_2,\ldots )\) and eigenvalues \((\sigma _1,\sigma _2,\ldots )\), where \(\sigma _1 \ge \sigma _2\ge \ldots\). Given these eigenvectors and eigenvalues, the left singular matrix is \(U=(u_1,u_2,\ldots )\) and the singular values of Z are \(\lambda _k = \sqrt{\sigma _k}\) \((k=1,2,\ldots )\). Thus, the right singular matrix \(V=(v_1,v_2,\ldots )\) can be derived as:
$$\begin{aligned} v_k=Z^Tu_k/\lambda _k \end{aligned}$$
for \(k=1,2,\ldots\).
Equation (6) can be written, equivalently, as \(u_k=Z v_k/\lambda _k\), which indicates that the eigenvector \(u_k\) can be derived from the discovery data Z by projecting Z through \(v_k\) and weighting it by \(1/\lambda _k\). Following this idea, we propose to derive the corresponding eigenvector in the target genotype data, say B, by projecting B through \(v_k\) and \(\lambda _k\) (both are calculated from Z) as follows:
$$\begin{aligned} u_k^{(B)} = B v_k/\lambda _k. \end{aligned}$$
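A numpy sketch of this small-matrix route to the SVD, together with the projection of Eqs. (6) and (7), might look as follows (function and variable names are ours):

```python
import numpy as np

def svd_and_project(z, b, k):
    """Top-k left singular vectors of z via the n x n matrix z z^T,
    plus the corresponding projected components for target genotypes b.

    z : n x m normalized discovery genotypes (m >> n)
    b : normalized target genotypes over the same variants, same order
    """
    phi = z @ z.T                        # n x n; cheap when m >> n
    sig, u = np.linalg.eigh(phi)         # eigenvalues in ascending order
    order = np.argsort(sig)[::-1][:k]    # keep the k largest
    u = u[:, order]
    lam = np.sqrt(sig[order])            # singular values of z
    v = z.T @ u / lam                    # right singular vectors, Eq. (6)
    u_b = b @ v / lam                    # projected components, Eq. (7)
    return u, lam, v, u_b
```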
PRS construction
The eigenvector \(u_k\) derived from the discovery genotype data Z is used to estimate the effect size of the principal component, whereas the corresponding eigenvector in the target genotype data B, \(u_k^{(B)}\), is employed to construct the PRS. Specifically, given \(\varvec{y}\) and \(u_k\) from the discovery data A, we can use Eq. (5) to derive the least squares estimates \({\hat{\gamma }}_k\), which are then used as the effect sizes to calculate the PRS for each subject in the target sample as:
$$\begin{aligned} \text {PRS} = \sum _{k\in S} {\hat{\gamma }}_{k} u_{k}^{(B)} \end{aligned}$$
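Continuing the sketches above (with placeholder names y, U, U_B, and S for the discovery phenotype, discovery components, projected target components, and selected indexes), the effect sizes and the PRS of Eq. (8) can be obtained as:

```python
import numpy as np

# Least squares fit of Eq. (5) on the discovery sample,
# using only the selected components S.
X = np.column_stack([np.ones(len(y)), U[:, S]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
gamma_hat = coef[1:]                     # drop the intercept

# PRS for each subject in the target sample, Eq. (8):
prs = U_B[:, S] @ gamma_hat
```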
In this study, we used three sources of genomic data to demonstrate the applications of the proposed PRS method and evaluate its performance relative to the conventional method. The 1000 Genome Project Phase 3 reference panel was used as our reference genomic data. This publicly accessible database (http://bioinfo.hpc.cam.ac.uk/downloads/datasets/vcf/index_.html) contains genetic data from 2504 individuals classified into five super populations: African (AFR), Ad Mixed American (AMR), East Asian (EAS), European (EUR), and South Asian (SAS). We used these five super populations to represent five distinct genetic ancestries.
The discovery (training) database was distributed by the Study of Addiction: Genetics and Environment (SAGE) (dbGaP study accession: phs000092.v1.p1; https://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/analysis.cgi?study_id=phs000092.v1.p1). SAGE aggregated data containing common measures from three large-scale studies in the substance abuse field: the Collaborative Study on the Genetics of Alcoholism (COGA), the Family Study of Cocaine Dependence (FSCD), and the Collaborative Genetic Study of Nicotine Dependence (COGEND). There were 4094 participants in this database.
The target (testing) database came from the National Longitudinal Study of Adolescent to Adult Health (Add Health) (dbGaP study accession: phs001367.v1.p1; https://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs001367.v1.p1). Add Health (Harris et al. 2013) collected GWAS data and health behavior data from a large sample of U.S. adolescents who were followed from grades 7–12 into adulthood. Genetic data were available for 9974 participants, with the primary race groups being Black and White.
The genomic data from the 1000 Genome Project, SAGE, and Add Health were genotyped using different SNP genotyping arrays. The genomic data from Add Health had been imputed, whereas SAGE did not provide imputed data. To ensure that all three databases cover the same variants, we conducted imputation on the SAGE data using the imputation service provided by the Michigan Imputation Center (https://imputationserver.readthedocs.io/en/latest/).
While the genomic data of the 1000 Genome Project and Add Health were both based on GRCh37/hg19, the genomic data of SAGE were based on NCBI Build 36.1. Because the Michigan Imputation Center can only impute genomic data built on GRCh37/hg19 or GRCh38/hg38, we first converted the genome coordinates of the SAGE data to the GRCh37/hg19 genomic build using the liftOver program [32]. A quality control procedure (removing variants with MAF \(< 0.01\), Hardy–Weinberg Equilibrium test p value \(<10^{-6}\), or missing rate \(> 0.05\)) was also conducted before the imputation. We chose Eagle v2.4 for phasing and Minimac4 for imputation, with the reference panel for both procedures being the 1000 Genomes Phase 3 (Version 5). After the imputation, we conducted further quality control by keeping only variants with an imputation R-square value greater than 0.3 for data analysis.
Participant selections
After the imputation, we extracted the common variants across the three data sources for data analysis. In the SAGE and Add Health databases, we focused the analysis on participants whose genomic composition was predominantly of either AFR or EUR ancestry, because the sample sizes of participants with other ancestries were very small in both studies.
We used the ethnic information of the 1000 Genome participants to infer the ancestries of the SAGE and Add Health subjects. We first merged the three databases and then conducted principal components analysis on the merged data using the PLINK software [33, 34]. A Fisher linear discriminant function was built by using the top twenty principal components in the 1000 Genome data as predictors and their ethnicity as outcomes. Applying this Fisher linear discriminant function to the SAGE and Add Health data, we were able to calculate the posterior probabilities corresponding to the five ancestry groups (i.e., the five super-populations defined by the 1000 Genome Project). The participants in SAGE and Add Health were then chosen if their posterior probabilities in either AFR or EUR were above 0.9. This process identified 3394 SAGE participants and 8588 Add Health participants.
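As an illustration, this classification step can be sketched with scikit-learn's implementation of linear discriminant analysis; the variable names are placeholders, and using sklearn rather than a hand-built Fisher discriminant is our substitution:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# pcs_1kg : top-20 PCs of the 1000 Genome subjects (from the merged PCA)
# labels  : their super-population labels (AFR, AMR, EAS, EUR, SAS)
# pcs_new : the same 20 PCs for the SAGE or Add Health subjects
lda = LinearDiscriminantAnalysis().fit(pcs_1kg, labels)

post = lda.predict_proba(pcs_new)        # posterior probability per group
group = lda.classes_[post.argmax(axis=1)]
keep = (post.max(axis=1) > 0.9) & np.isin(group, ["AFR", "EUR"])
```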
Quality control for calculating PRS
Although both the SAGE and Add Health genomic data were imputed using the 1000 Genome Project reference panel, it is still necessary to eliminate the ambiguous SNPs that have complementary alleles (either A/T or C/G). This is a recommended quality control procedure for PRS because strand ambiguity at these SNPs can cause effects to cancel, which would distort the comparison between the proposed method and the conventional method. After the removal, the three databases shared 2,993,682 variants in common.
Phenotype selection
Both the SAGE and Add Health studies measured many substance use related phenotypes. In this study, we focused on the number of lifetime alcohol use disorder symptoms because both studies adopted the DSM-IV criteria. SAGE provided the number of alcohol dependence symptoms (0–7), whereas Add Health measured the number of alcohol use disorder symptoms (0–11).
Method comparison
We applied the following three PRS methods to analyze the imputed data of SAGE (discovery) and Add Health (target):
The conventional method:
PRS \(=\sum _{j\in S_1} {\hat{\beta }}_j g_{j}\)
The Bayesian method:
LDpred2
The proposed method:
PRS \(=\sum _{k\in S_2} {\hat{\gamma }}_{k} u_{k}^{(B)}\)
The conventional method summed the effects of variants selected in the discovery data (\(j \in S_1\)), with the effect size for each variant (\({\hat{\beta }}_j\)) derived from a marginal regression with 10 principal components as covariates. The LDpred2 method was implemented in the bigsnpr package (https://github.com/privefl/bigsnpr). The proposed method was described in detail in "The proposed PRS method based on principal component projection" section. The former two methods were chosen for comparison because (1) the conventional method, based on clumping and thresholding of p values, is relatively straightforward and yet performs comparably to other existing PRS methods [17]; and (2) the LDpred2 method represents a newer alternative based on the Bayesian paradigm. Both are popular PRS methods.
We used the method described in "Participant selections" section to identify ancestrally homogeneous subgroups of AFR and EUR participants for both the SAGE and Add Health datasets, using the 1000 Genome Project as the reference genomic data. In the SAGE dataset, 1308 AFR and 2675 EUR participants were identified, whereas in the Add Health dataset, 1362 AFR and 3959 EUR participants were found. The analysis was restricted to these participants in order to evaluate the PRS for both ancestrally homogeneous and ancestrally diverse groups.
Table 1 Summary statistics of alcohol phenotypes in SAGE and Add Health
The summary statistics (means and standard deviations) of the phenotype variables described in "Phenotype selection" section are shown in Table 1. The high average number of alcohol dependence symptoms in the SAGE dataset (about 3 out of 7) reflects the high-risk nature of the sample. Conversely, the average number of alcohol use disorder (AUD) symptoms (0.80–2.10 out of 11) in the Add Health dataset was low because the sample represents the general population. Two-sample t-tests examining racial differences indicated no significant difference between AFR and EUR participants in the SAGE study. In the Add Health study, however, EUR participants tended to have a higher level of AUD symptomatology than AFR participants. This is again consistent with prevalence data in the general population.
Fig. 1 Barplots of \(R^2\) values for predicting AUD symptoms of Add Health participants using the PRS calculated from SAGE discovery data based on the conventional method. AFR: PRS conducted on the AFR ancestral group only. EUR: PRS conducted on the EUR ancestral group only. AFR+EUR: PRS conducted on the AFR and EUR ancestral groups combined.
In this study, we constructed PRS for Add Health participants using SAGE participants as the discovery sample. Three PRS methods were applied to analyze the data: the conventional method, the Bayesian method, and the proposed method. For each method, the analysis was conducted on the AFR only, the EUR only, and the AFR and EUR together. Based on the conventional method, the effect size of each variant was estimated using linear regression with 10 principal components as covariates. The PRS was then constructed following the clumping and thresholding procedure with various clumping cut-off values (\(r^2= 0.01, 0.1, 0.2\)) and thresholding cut-off values (at 0.0001, 0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 1). The coefficient of determination (\(R^2\)) was calculated for each combination of cut-off values to indicate the proportion of variance in AUD symptoms explained by the PRS. This statistic was also used to evaluate the performance of PRS. Figure 1 shows the largest \(R^2\) value (among all combinations of cut-off values) for AFR only, EUR only, and AFR+EUR, indicating that the conventional method performed poorly across the three samples (all \(R^2\) values were less than 0.005).
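As a side note on evaluation, for a single score the \(R^2\) used here reduces to the squared Pearson correlation between the PRS and the phenotype (the \(R^2\) of the simple regression of the phenotype on the PRS):

```python
import numpy as np

def r_squared(prs, y):
    """Proportion of phenotypic variance explained by the PRS,
    computed as the squared Pearson correlation."""
    return np.corrcoef(prs, y)[0, 1] ** 2
```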
Fig. 2 Barplots of \(R^2\) values for predicting AUD symptoms of Add Health participants using the PRS calculated from SAGE discovery data based on the Bayesian method. AFR: PRS conducted on the AFR ancestral group only. EUR: PRS conducted on the EUR ancestral group only. AFR+EUR: PRS conducted on the AFR and EUR ancestral groups combined.
The results using the Bayesian method, LDpred2, are shown in Fig. 2. All the \(R^2\) values in AFR only, EUR only, and AFR+EUR were smaller than 0.001. In comparison to the conventional method (Fig. 1), this Bayesian method actually performed worse.
The proposed method was also used to conduct the PRS analysis so that its performance could be compared with that of the conventional method. An important step of the procedure is to calculate EigenCorr to identify which principal components are more correlated with the phenotype. Based on \(\rho =0.2\) to select variants in linkage equilibrium (i.e., the sample linkage correlation between each pair of variants is less than 0.2), we calculated the SVD of the SAGE dataset and ranked the squared EigenCorr values. The distribution of squared EigenCorr is presented in terms of its rank in Fig. 3, showing that the largest squared EigenCorr values were derived from the 13th and 8th principal components. Although the first principal component had the largest eigenvalue, it was ranked 5th. In addition, based on the cut-off value of 0.1 (the dashed line), we identified six large EigenCorr values corresponding to the 13th, 8th, 3rd, 12th, 1st, and 2nd principal components, which were used for PRS construction. If we had used eigenvalues to choose principal components, we would have ended up choosing only the 1st principal component (with an eigenvalue of 159), because the 2nd (eigenvalue = 3.8), the 3rd (eigenvalue = 2.1), and the remaining principal components (all with eigenvalues close to or less than 1) had very small eigenvalues in comparison.
Fig. 3 The scatter plot of the top 100 squared EigenCorr values versus their ranks. The dotted horizontal line is the cut-off value: the principal components whose corresponding EigenCorr lies above the line are selected for PRS model construction. Each panel is based on a given \(\rho\) value in the first step and \(\theta\) value in the second step of variant selection used for the SVD.
We evaluated the performance of the PRS based on the \(R^2\) under different values of the LD threshold (\(\rho =0.2, 0.1, 0.01\)) and the p-value threshold (\(\theta =0.1, 0.5, 1\)). The results are shown as barplots in Fig. 4. The \(R^2\) was calculated for AFR only, EUR only, and the two combined. While all the \(R^2\) values corresponding to AFR only and EUR only were smaller than 0.01, the \(R^2\) values were above 0.03 across different values of \(\rho\) and \(\theta\) when we used participants from the mixture of AFR and EUR. This set of analyses also informs the choice of the values of \(\rho\) and \(\theta\). For large \(\rho\) values, the selected variants are likely to be in linkage disequilibrium. On the other hand, for small \(\rho\) values, we may eliminate variants that are informative. The value of \(\rho\) at 0.1 is thus a good compromise. In terms of \(\theta\), we recommend setting it at 0.1 to increase the information content in deriving principal components. Another advantage of choosing \(\rho\) at 0.1 and \(\theta\) at 0.1 is reduced computational time during the SVD and principal component projection, because a smaller number of variants is involved. Under \(\rho =0.1\) and \(\theta =0.1\), the \(R^2\) value for AFR and EUR combined is 0.037, indicating that the proposed method can explain 3.7% of the variation in AUD symptoms with the PRS built upon the 2nd, 3rd, and 1st principal components.
Our proposed method did not attempt to fit the random effect model in Eq. (3) directly for the following reasons: (1) the estimates are likely to be inconsistent because the number of unknown parameters is larger than the sample size; (2) the procedure would be very time-consuming; and (3) how to apply the fitted model to the genotypes of participants in the target sample is an open research question. Instead, we proposed to fit Eq. (5), which can then be applied to the target data by projecting the observed genotypes onto the axes of the principal components. In this way, we have dealt with all the above issues.
Fig. 4 Barplots of \(R^2\) values for predicting AUD symptoms of Add Health participants using the PRS calculated from SAGE discovery data based on the proposed method. AFR: PRS conducted on the AFR ancestral group only. EUR: PRS conducted on the EUR ancestral group only. AFR+EUR: PRS conducted on the AFR and EUR ancestral groups combined.
Although the conventional PRS method can be implemented easily and only requires the summary statistics of the discovery data instead of the original genotype data, it has a critical issue. Adjusting for the first few principal components while deriving the effect of each variant is actually equivalent to estimating the variant effect from the remaining principal components. Although this procedure may adjust for large-scale structure, the derived effects of variants may still depend on other confounding factors such as demographic or socio-economic status [35]. In fact, adding large PCs to the regression model may over-adjust the estimates of marginal variant effects. In particular, given that the purpose of PRS is to build a prediction model for the phenotype based on genotypes, adding even one principal component as a covariate is expected to reduce the prediction accuracy [27].
Unlike the conventional method, which only requires summary statistics from the discovery data, our approach requires the availability of the singular values and the right singular matrix of the discovery data. Nevertheless, users would not be able to recover the original genetic data from this information, so the confidentiality of participants in the discovery data is still protected. Moreover, although this study only dealt with two ancestry groups because they were the majority in the discovery and target samples, the proposed method can easily be applied to more ancestry groups.
This study makes a unique contribution to the literature by proposing a new PRS method with several strengths. First, the proposed method has higher prediction power than the conventional method, which tends to over-adjust when estimating marginal effects of variants. Second, our approach, based on principal components that are linear transformations of the genotype matrix, conforms to the commonly accepted theory of additive genetic variance for complex traits [36]. Third, the principal components selected by the proposed method can facilitate our understanding of the structure of genetic effects on the phenotype. Fourth, our approach can handle participants from different ancestry backgrounds as long as the ancestries of participants in the target sample are a subset of those in the discovery sample.
The Julia program will be deposited on GitHub at https://github.com/jjyang2019/SVD_Projection.jl
Add Health:
National longitudinal study of adolescent to adult health
GWAS:
Genome-wide association studies
LD:
Linkage disequilibrium
PRS:
Polygenic risk score
SAGE:
Study of addiction: genetics and environment
SUD:
Substance use disorder
SVD:
Singular value decomposition
Manolio TA, Collins FS, Cox NJ, Goldstein DB, Hindorff LA, Hunter DJ, McCarthy MI, Ramos EM, Cardon LR, Chakravarti A, et al. Finding the missing heritability of complex diseases. Nature. 2009;461(7265):747–53.
Arango C. Candidate gene associations studies in psychiatry: time to move forward. Berlin: Springer; 2017.
Dudbridge F. Power and predictive accuracy of polygenic risk scores. PLoS Genet. 2013;9(3):1003348.
Peterson RE, Kuchenbaecker K, Walters RK, Chen C-Y, Popejoy AB, Periyasamy S, Lam M, Iyegbe C, Strawbridge RJ, Brick L, et al. Genome-wide association studies in ancestrally diverse populations: opportunities, methods, pitfalls, and recommendations. Cell. 2019;179(3):589–603.
Wray NR, Goddard ME, Visscher PM. Prediction of individual genetic risk to disease from genome-wide association studies. Genome Res. 2007;17(10):1520–8.
Torkamani A, Wineinger NE, Topol EJ. The personal and clinical utility of polygenic risk scores. Nat Rev Genet. 2018;19(9):581–90.
Li JJ, Cho SB, Salvatore JE, Edenberg HJ, Agrawal A, Chorlian DB, Porjesz B, Hesselbrock V, Investigators C, Dick DM, et al. The impact of peer substance use and polygenic risk on trajectories of heavy episodic drinking across adolescence and emerging adulthood. Alcohol Clin Exp Res. 2017;41(1):65–75.
Loughnan RJ, Palmer CE, Thompson WK, Dale AM, Jernigan TL, Fan CC. Polygenic score of intelligence is more predictive of crystallized than fluid performance among children (2020). arXiv:637512
Tikkanen E, Havulinna AS, Palotie A, Salomaa V, Ripatti S. Genetic risk prediction and a 2-stage risk screening strategy for coronary heart disease. Arterioscler Thromb Vasc Biol. 2013;33(9):2261–6.
Yang JJ, Li J, Buu A, Williams LK. Efficient inference of local ancestry. Bioinformatics. 2013;29(21):2750–6.
Duncan L, Shen H, Gelaye B, Meijsen J, Ressler K, Feldman M, Peterson R, Domingue B. Analysis of polygenic risk score usage and performance in diverse human populations. Nat Commun. 2019;10(1):1–9.
Martin AR, Gignoux CR, Walters RK, Wojcik GL, Neale BM, Gravel S, Daly MJ, Bustamante CD, Kenny EE. Human demographic history impacts genetic risk prediction across diverse populations. Am J Hum Genet. 2017;100(4):635–49.
Kendler KS, Heath AC, Neale MC, Kessler RC, Eaves LJ. A population-based twin study of alcoholism in women. JAMA. 1992;268(14):1877–82.
Kendler KS, Prescott CA, Neale MC, Pedersen NL. Temperance board registration for alcohol abuse in a national sample of Swedish male twins, born 1902 to 1949. Arch Gen Psychiatry. 1997;54(2):178–84.
Heath AC, Bucholz K, Madden P, Dinwiddie S, Slutske W, Bierut L, Statham D, Dunne M, Whitfield J, Martin N. Genetic and environmental contributions to alcohol dependence risk in a national twin sample: consistency of findings in women and men. Psychol Med. 1997;27(6):1381–96.
Mayfield RD, Harris RA, Schuckit MA. Genetic factors influencing alcohol dependence. Br J Pharmacol. 2008;154(2):275–87.
Choi SW, Mak TS-H, O'Reilly PF. Tutorial: a guide to performing polygenic risk score analyses. Nat Protoc. 2020;15(9):2759–72.
Euesden J, Lewis CM, O'Reilly PF. PRSice: polygenic risk score software. Bioinformatics. 2015;31(9):1466–8.
Vilhjálmsson BJ, Yang J, Finucane HK, Gusev A, Lindström S, Ripke S, Genovese G, Loh P-R, Bhatia G, Do R, et al. Modeling linkage disequilibrium increases accuracy of polygenic risk scores. Am J Hum Genet. 2015;97(4):576–92.
Privé F, Arbel J, Vilhjálmsson BJ. Ldpred2: better, faster, stronger. Bioinformatics. 2020;36(22–23):5424–31.
Purcell SM, Wray NR, Stone JL, Visscher PM, O'Donovan MC, Sullivan PF, Sklar P, International Schizophrenia Consortium. Common polygenic variation contributes to risk of schizophrenia and bipolar disorder. Nature. 2009;460(7256):748–52. https://doi.org/10.1038/nature08185.
Cullen H, Krishnan ML, Selzam S, Ball G, Visconti A, Saxena A, Counsell SJ, Hajnal J, Breen G, Plomin R, et al. Polygenic risk for neuropsychiatric disease and vulnerability to abnormal deep grey matter development. Sci Rep. 2019;9(1):1–8.
Hartz SM, Horton AC, Oehlert M, Carey CE, Agrawal A, Bogdan R, Chen L-S, Hancock DB, Johnson EO, Pato CN, et al. Association between substance use disorder and polygenic liability to schizophrenia. Biol Psychiat. 2017;82(10):709–15.
Barr PB, Ksinan A, Su J, Johnson EC, Meyers JL, Wetherill L, Latvala A, Aliev F, Chan G, Kuperman S, et al. Using polygenic scores for identifying individuals at increased risk of substance use disorders in clinical and population samples. Transl Psychiatry. 2020;10(1):1–9.
Andersen AM, Pietrzak RH, Kranzler HR, Ma L, Zhou H, Liu X, Kramer J, Kuperman S, Edenberg HJ, Nurnberger JI, et al. Polygenic scores for major depressive disorder and risk of alcohol dependence. JAMA Psychiat. 2017;74(11):1153–60.
Consortium I.S. Common polygenic variation contributes to risk of schizophrenia that overlaps with bipolar disorder. Nature. 2009;460(7256):748.
Rao P. Some notes on misspecification in multiple regression. Am Stat. 1971;25:37–9.
Yang J, Benyamin B, McEvoy BP, Gordon S, Henders AK, Nyholt DR, Madden PA, Heath AC, Martin NG, Montgomery GW, et al. Common SNPs explain a large proportion of the heritability for human height. Nat Genet. 2010;42(7):565–9.
Kumar SK, Feldman MW, Rehkopf DH, Tuljapurkar S. Limitations of GCTA as a solution to the missing heritability problem. Proc Natl Acad Sci. 2016;113(1):61–70.
Hoffman GE. Correcting for population structure and kinship using the linear mixed model: Theory and extensions. PLoS ONE. 2013;8(10):75707. https://doi.org/10.1371/journal.pone.0075707.
Lee S, Wright FA, Zou F. Control of population stratification by correlation-selected principal components. Biometrics. 2011;67(3):967–74.
Kuhn RM, Haussler D, Kent WJ. The UCSC genome browser and associated tools. Brief Bioinform. 2013;14(2):144–61.
Purcell S, Neale B, Todd-Brown K, Thomas L, Ferreira MAR, Bender D, Maller J, Sklar P, de Bakker PIW, Daly MJ, Sham PC. PLINK: A tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet. 2007;81(3):559–75. https://doi.org/10.1086/519795.
Chang CC, Chow CC, Tellier LC, Vattikuti S, Purcell SM, Lee JJ. Second-generation PLINK: rising to the challenge of larger and richer datasets. Gigascience. 2015;4(1):13742–015.
Mostafavi H, Harpak A, Agarwal I, Conley D, Pritchard JK, Przeworski M. Variable prediction accuracy of polygenic scores within an ancestry group. Elife. 2020;9:48376.
Hill WG, Goddard ME, Visscher PM. Data and theory point to mainly additive genetic variance for complex traits. PLoS Genet. 2008;4(2):1000008.
This research was supported by National Institutes of Health (NIH) Grants: R01DA049154, R01EB022911, U54MD012393, and K08AA023290. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
Department of Biostatistics and Data Science, University of Texas Health Science Center, Houston, USA
James J. Yang & Xi Luo
Department of Psychology, Florida International University, Miami, USA
Elisa M. Trucco
Department of Psychiatry, University of Michigan, Ann Arbor, USA
Department of Health Promotion and Behavioral Sciences, University of Texas Health Science Center, Houston, USA
Anne Buu
James J. Yang
Xi Luo
JJY and AB conceived the study and drafted the manuscript. AB, XL and EMT secured the grant funding. JJY and XL conducted computational work and statistical analysis. EMT edited the manuscript and provided critical feedback. All authors contributed to and approved the final manuscript.
Correspondence to James J. Yang.
Yang, J.J., Luo, X., Trucco, E.M. et al. Polygenic risk prediction based on singular value decomposition with applications to alcohol use disorder. BMC Bioinformatics 23, 28 (2022). https://doi.org/10.1186/s12859-022-04566-5
Alcohol use disorder | CommonCrawl |
Evaluate $\lim\limits_{n\to \infty} \left( \cos(1/n)-\sin(1/n) \right) ^n $?
Show that $f(x,y)=\sqrt[3]{\lvert x^2-(y+1)^2 \rvert}\sin(\lvert x+y+1 \rvert)$ is differentiable at $(0,-1)$, $(1,0)$ and $(-1,0)$
Why is $a_{1} > 0 \land a_{n+1}=a_{n}+\frac{1}{a_{n}}$ unbounded?
Find $\lim_{n\to\infty} \frac{1^4+2^4+\dots+n^4}{1^4+2^4+\dots+n^4+(n+1)^4}$
Prove $||u+v||=||u||+||v||\iff u=\alpha v, \alpha>0$
Derivative by definition
Solutions to Linear Equation
$\lim_{y \rightarrow b} \lim_{x \rightarrow a} f \neq \lim_{(x,y)\rightarrow (a,b)} f \neq \lim_{x \rightarrow a} \lim_{y \rightarrow b} f$
How to see that $e^{x(1+x/3)} \le (1+x)^{(1+x)}$ for very small x>0?
Multivariable limit of $\left(x^2+y^2 \right) \frac{1}{\sin xy}$
Finding a multivariable limit
$\displaystyle\iiint_E (x^2+y^2) \;\mathrm{d}V$ where $E$ is the region between the spheres $x^2+y^2+z^2 = 4$ and $x^2 + y^2 + z^2 = 9$
Find $\lim_{x\to \frac\pi2}\frac{\tan2x}{x-\frac\pi2}$ without l'hopital's rule.
Integral involving a trig. term
$(x+y+z)^3-(y+z-x)^3-(z+x-y)^3-(x+y-z)^3=24xyz$?
derivative integral $\int_0^{x^2} \sin(t^2)dt$
Condition for existence of Fourier transform?
Trouble proving divergence theorem
Find a base in linear algebra
If limit of a function exists does it imply that function is bounded?
Differential of a vector field
Find a unit tangent vector to a curve that is an intersection of two surfaces.
Differentiability of $f(x,y)=2xy+\frac{x}{y}$ at $(1,1)$
Calculate the radius of convergence of $\sum \frac{\ln(1+n)}{1+n} (x-2)^n$
Show that f is not differentiable at the origin of the following function.
Finding a limit with two independent variables: $\lim_{(x,y)\to (0,0)}\frac{x^2y^2}{x^2+y^2}$
evaluating the limit $ \lim_{p \rightarrow (0,0) } \frac{x^2y^2}{x^2 + y^2} $
Does the limit exist? (Calculus)
Is this a valid proof? Find $\lim_{(x,y) \to (0,0)}\frac{x^2+y^2}{\sqrt{x^2+y^2+1}-1}$
multivariable limit of $\frac{x^2y^2}{x^2y^2+(x-y)^2}$ | CommonCrawl |
Marty Golubitsky
Martin Aaron Golubitsky (born April 5, 1945) is an American mathematician, a Distinguished Professor of Mathematics at Ohio State University, and the former director of the Mathematical Biosciences Institute.
Marty Golubitsky
Marty Golubitsky at the Mathematical Biosciences Institute, 2016
Born: April 5, 1945, Philadelphia, Pennsylvania, US
Nationality: American
Alma mater: University of Pennsylvania, MIT
Occupation: Mathematician
Scientific career
Institutions: University of Nice Sophia Antipolis
Duke University
Ohio State University
Mathematical Biosciences Institute
Rice University
University of Houston
University of Minnesota
University of Toronto
Fields Institute
Newton Institute
Trinity College
Biography
Education
Marty Golubitsky was born on April 5, 1945, in Philadelphia, Pennsylvania. He received his bachelor's degree from the University of Pennsylvania in 1966 and his master's degree there the same year. He obtained his Ph.D. from the Massachusetts Institute of Technology in 1970, where his advisor was Victor Guillemin.[1]
Full-time
From September 1974 to December 1976 he was an assistant professor at Queens College, and from January 1977 to August 1979 he served there as an associate professor. In September 1979 he moved to Arizona State University, where he was a professor until August 1983. In September 1983 he took the same position at the University of Houston, where he remained until November 2008. From then until 2016 he served as the director of the Mathematical Biosciences Institute at Ohio State University, where he retains a distinguished professorship in mathematics. He is a member of organizations such as the American Association for the Advancement of Science, the American Mathematical Society, the Association for Women in Mathematics, and the Society for Industrial and Applied Mathematics.[1] He served as President of the Society for Industrial and Applied Mathematics (SIAM) in 2005–2006.[2] In 2012 he became an inaugural fellow of the American Mathematical Society[3] and in 2009 a SIAM Fellow.[4]
Visiting
From January to June 1980 he was a visiting professor at the University of Nice Sophia Antipolis, and from September to December 1981 he held the same position at Duke University. He then spent two months in the summer of 1982 at the University of California, Berkeley, and from January to June 1989 he worked at the Institute for Mathematics and its Applications at the University of Minnesota. He held the same visiting position from January to June 1993 at the Fields Institute, then housed at the University of Waterloo. From August to November 2005 he worked at both the Newton Institute and Trinity College in Cambridge, and from January to June 2006 he was a distinguished visiting professor at the University of Toronto. Since July 2005 he has been an adjunct professor in the Computational and Applied Mathematics department of Rice University.[1]
Publications
In 1992, he and Ian Stewart wrote a book called Fearful Symmetry: Is God a Geometer?, published by Blackwell Publishers in Oxford. In 1994 it was translated into Dutch by Hans van Cuijlenborg, appearing under the title Turings tijger from Epsilon Uitgaven in Utrecht, and in 1995 the same work was translated into Italian by Libero Sosio as Terribili simmetrie: Dio è un geometra? and published in Turin, Italy. His second book, written with M. Field and called Symmetry in Chaos: A Search for Pattern in Mathematics, Art, and Nature, was released by Oxford University Press in 1992; it was followed by a German translation by Micha Lotrovsky in 1993 and a French one the same year by Christian Jeanmougin, published by InterÉditions in Paris. Besides books, he has written numerous peer-reviewed articles and was a co-editor of Multiparameter Bifurcation Theory, a Contemporary Mathematics volume published by the American Mathematical Society in 1986.[1]
References
1. "Marty Golubitsky" (PDF). pp. 1–27. Retrieved December 24, 2013.
2. SIAM Presidents Retrieved May 22, 2014.
3. List of Fellows of the American Mathematical Society, Retrieved January 20, 2014.
4. SIAM Fellows Retrieved May 22, 2014.
| Wikipedia |
\begin{definition}[Definition:Convex Real Function/Definition 3/Strictly]
Let $f$ be a real function which is defined on a real interval $I$.
$f$ is '''strictly convex on $I$''' {{iff}}:
:$\forall x_1, x_2, x_3 \in I: x_1 < x_2 < x_3: \dfrac {f \left({x_2}\right) - f \left({x_1}\right)} {x_2 - x_1} < \dfrac {f \left({x_3}\right) - f \left({x_1}\right)} {x_3 - x_1}$
Hence a geometrical interpretation, where $P_i$ denotes the point $\left({x_i, f \left({x_i}\right)}\right)$: the slope of $P_1 P_2$ is less than that of $P_1 P_3$.
\end{definition} | ProofWiki |
Stellated truncated hexahedron
In geometry, the stellated truncated hexahedron (or quasitruncated hexahedron, and stellatruncated cube[1]) is a uniform star polyhedron, indexed as U19. It has 14 faces (8 triangles and 6 octagrams), 36 edges, and 24 vertices.[2] It is represented by the Schläfli symbol t'{4,3} or t{4/3,3} and by a Coxeter–Dynkin diagram. It is sometimes called a quasitruncated hexahedron because it is related to the truncated cube, except that the square faces become inverted into {8/3} octagrams.
Stellated truncated hexahedron
Type: Uniform star polyhedron
Elements: F = 14, E = 36, V = 24 (χ = 2)
Faces by sides: 8{3} + 6{8/3}
Wythoff symbol: 2 3 | 4/3 and 2 3/2 | 4/3
Symmetry group: Oh, [4,3], *432
Index references: U19, C66, W92
Dual polyhedron: Great triakis octahedron
Vertex figure: 3.8/3.8/3
Bowers acronym: Quith
Even though the stellated truncated hexahedron is a stellation of the truncated hexahedron, its core is a regular octahedron.
Related polyhedra
It shares the vertex arrangement with three other uniform polyhedra: the convex rhombicuboctahedron, the small rhombihexahedron, and the small cubicuboctahedron.
See also
• List of uniform polyhedra
References
1. Weisstein, Eric W. "Uniform Polyhedron". MathWorld.
2. Maeder, Roman. "19: stellated truncated hexahedron". MathConsult.
External links
• Weisstein, Eric W. "Stellated truncated hexahedron". MathWorld.
| Wikipedia |
Neil White: 1945 – 2014
Posted on October 30, 2014 by Guest Contributor
Guest post by Gary Gordon
Those with memories of Neil White are invited to share them in the comments below.
Neil White passed away on Aug. 11, 2014. Neil was an inspiring teacher and one of the key contributors to the revitalization of matroid theory in the 1970's and 80's. He published on a variety of topics, but most of his work was characterized by the way it combined different areas of mathematics, especially combinatorial geometry and algebra. His co-authored book Oriented Matroids (with A. Bjorner, M. Las Vergnas, B. Sturmfels and G. Ziegler) from 1993, with a 2nd edition published in 1999, is the standard reference for this topic. His book Coxeter Matroids (co-authored with A. Borovik and I. Gelfand) and several papers he wrote on this topic are typical of his breadth: these objects draw on classical results in algebra, geometry and combinatorics.
Neil, a Michigan native, received his undergraduate degree from Michigan State University and his PhD from Harvard. Neil wrote his dissertation under the direction of G.-C. Rota in 1972. Neil's doctoral thesis examined the bracket ring defined by a matroid, and this algebraic approach influenced much of his future work. Neil was one of several young PhDs in Cambridge in the early 1970's, and this group (including Ken Baclawski, Tom Brylawski, Curtis Greene, Richard Stanley, Walter Whiteley, and Tom Zaslavsky, among others) contributed significantly to a resurgence of interest in matroids.
Neil is best remembered in the matroid community for editing the seminal series of books in the Cambridge Encyclopedia of Mathematics series: Theory of Matroids (1986), Combinatorial Geometries (1987) and Matroid Applications (1992). The wide range of topics and clear organization is a testament to Neil's vision. Neil also wrote three chapters for these volumes that remain essential references today.
In addition to his work as an editor and expositor, Neil made significant contributions to invariant theory, the combinatorics of bar-and-body frameworks and oriented matroids. MathSciNet lists nearly 600 citations for his 53 publications, and this list does not include citations to the Encyclopedia of Mathematics series he edited. Reading through his list of published work gives an indication of the depth and breadth of his contributions.
Neil spent his career at the University of Florida, retiring in 2008. He was a dedicated and very inspiring teacher, teaching combinatorics, algebra and a variety of other subjects to both undergraduates and graduates. His courses were challenging, but he gave students the tools to solve difficult problems. He was also in charge of the Putnam team preparation for a time. He was always an excellent problem solver, finishing in the top 20 nationally on the Putnam while he was an undergraduate.
From a personal standpoint, Neil was a good friend and a calm, positive presence. He was my teacher for some 10 courses from 1975 – 1977 at the University of Florida, and his approach to mathematics and problem solving had a strong influence on me. His teaching notes were exceptionally clear; I have used his notes on ordinary and exponential generating functions and the Mobius function in my own classes. I do not believe I would be a mathematician if it were not for Neil.
Neil had wide interests outside of mathematics. He was an early advocate of the analytic approach to baseball, and he played a version of simulation baseball for 30 years. He played bridge, read widely, and volunteered his time for local organizations. He also had a good sense of humor: after starting my first job in the 1980's, he wrote to me, evidently at my request. The "letter" consisted on one word: "Regularly."
Neil White was a first-rate mathematician, a clear expositor, an inspiring teacher and a genuinely decent and humane person. I will miss him.
Neil White's obituary appears in The Gainesville Sun.
Sudoku Matroids and Graph Colouring Verification
Posted on October 20, 2014 by Tony Huynh
As this is my first post, I feel obliged to say the obligatory Hello World!. With that aside, the problem I am going to discuss comes from the popular game Sudoku. Suppose that we are given a filled Sudoku $S$ (which we cannot see), and we wish to verify its correctness. To do so, we are granted access to an oracle which can tell us if $S$ is consistent on any row, column or block. Consistent simply means that each number from 1 to 9 appears exactly once.
Question 1. How many checks are necessary to verify that $S$ is correct?
Certainly, 27 checks is sufficient, but can we do better? Here's a short proof that we can in fact do better.
Proposition 2. Every Sudoku can be verified with at most 21 checks.
Proof. It will be convenient to fix some notation. Let
$$K:= \{r_1, \dots, r_9\} \cup \{c_1, \dots, c_9\} \cup \{b_1, \dots, b_9\}$$
be the set of rows, columns and blocks of a Sudoku. By convention, the blocks are labelled as you (an English reader) would read the words along a page. Now suppose that a Sudoku $S$ is consistent on $b_1, b_2, b_3, r_1$ and $r_2$. We claim that this implies $S$ is also consistent on $r_3$.
Suppose not. Then some number, say 1, appears at least twice in $r_3$. However, since $S$ is consistent on $r_1$ and $r_2$, 1 appears exactly once in each of $r_1$ and $r_2$. Thus, 1 appears at least 4 times in $b_1 \cup b_2 \cup b_3$. By the Pigeonhole Principle, 1 appears at least twice in $b_1, b_2$ or $b_3$, which is a contradiction. Thus, to verify $S$ it suffices to check $K \setminus (\{r_3, r_6, r_9\} \cup \{c_3, c_6, c_9\})$. $\square$
Can we do better than 21 checks? At first this seems rather tricky. One can use information theory considerations to prove lower bounds. For example, clearly at least 9 checks are necessary, since otherwise there will be a cell $x$ for which the row, column and box containing $x$ are all unchecked. However, it seems difficult to get to a lower bound of 21 in this way. See this MathOverflow question for someone trying to carry this out.
At this point though, our hero Matroid Theory comes to the rescue. That is, somewhat remarkably, there is a matroid structure underlying this problem, which I will now describe. Define a subset $V$ of $K$ to be a verifier if every Sudoku which is consistent on $V$ is consistent everywhere.
Theorem 3. The set of minimal (under inclusion) verifiers $\mathcal{V}$ is the set of bases of a matroid on $K$.
There is a nice proof of this fact by Emil Jeřábek as an answer to the same MathOverflow question above. It turns out that this matroid is actually representable (over $\mathbb{Q}$) and that the proof is also valid for the more general $n \times n$ versions of Sudoku. We will not say anything more about the proof due to space limitations. However, using the fact that $M=(K, \mathcal{V})$ is a matroid, it is now easy to show that 21 is in fact the answer to Question 1.
Proposition 4. The minimum number of checks needed to verify the correctness of a Sudoku is 21.
Proof. Translated into matroid theory language, the original question is simply asking what the rank of $M$ is. We have already shown that $B:=K \setminus (\{r_3, r_6, r_9\} \cup \{c_3, c_6, c_9\})$ is a verifier, so it suffices to show that it is a minimal verifier. Certainly, we cannot remove a block from $B$ and remain verifying, since there would then be a completely unchecked cell. On the other hand, for any two rows (or columns) $a$ and $b$ in the same band, it is easy to construct a Sudoku which is consistent everywhere except $a$ and $b$. Thus, we also cannot remove a row or column from $B$ and remain verifying. $\square$
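To make the oracle model concrete, here is a small Python sketch (not part of the original post) of the consistency oracle and the 21-check verifier from Proposition 2, with the grid stored as a 9-by-9 list of lists of integers and rows and columns indexed from 0:

```python
from itertools import product

ROWS   = [[(r, c) for c in range(9)] for r in range(9)]
COLS   = [[(r, c) for r in range(9)] for c in range(9)]
BLOCKS = [[(3 * i + r, 3 * j + c) for r in range(3) for c in range(3)]
          for i, j in product(range(3), range(3))]

def consistent(grid, cells):
    """Oracle: do the 9 given cells contain each of 1..9 exactly once?"""
    return sorted(grid[r][c] for r, c in cells) == list(range(1, 10))

def verify(grid):
    """The 21 checks of Proposition 2: all 9 blocks, plus the first two
    rows and first two columns of every band and stack (i.e., K minus
    rows 3, 6, 9 and columns 3, 6, 9 in the 1-indexed notation above)."""
    checks = (BLOCKS
              + [ROWS[r] for r in (0, 1, 3, 4, 6, 7)]
              + [COLS[c] for c in (0, 1, 3, 4, 6, 7)])
    return all(consistent(grid, cells) for cells in checks)
```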
Of course, there is nothing special about the Sudoku graph, and we can attempt to play this game on an arbitrary graph. Let $G$ be a graph and suppose that we wish to verify the correctness of a (not necessarily proper) $\chi(G)$-colouring $C$ of $G$. Again we are given access to an oracle which can tell us whether $C$ is consistent on any maximal clique of $G$. As before, we can define the set family $(K(G), \mathcal{V}(G))$, where $K(G)$ is the set of maximal cliques of $G$ and $\mathcal{V}(G)$ is the family of minimal verifiers.
We now arrive at our main problem.
Problem 5. Characterize the graphs $G$ for which $(K(G), \mathcal{V}(G))$ is a matroid.
We now show that there are many such graphs.
Proposition 6. For all bipartite graphs $G$, $(K(G), \mathcal{V}(G))$ is a matroid.
Proof. We may assume that $G$ is connected. Since $G$ is bipartite, its set of maximal cliques is simply its set of edges. Let $T \subseteq E(G)$ be a minimal verifier. Evidently, $T$ must span all the vertices of $G$. On the other hand, if a 2-colouring of $G$ is correct on a spanning tree, then it is correct everywhere. Thus, $T$ is a spanning tree, and so $(K(G), \mathcal{V}(G))$ is actually the graphic matroid of $G$. $\square$
It would be nice if there were a unified proof that worked for both the bipartite case and the Sudoku graphs. Such a proof would have to utilize some common structure of these two classes, since it is not true that for all graphs $G$, $(K(G), \mathcal{V}(G))$ is a matroid.
Proposition 7. If $G=K_{2,2,2}$, then $(K(G), \mathcal{V}(G))$ is not a matroid.
Proof. It will be convenient to regard $K_{2,2,2}$ as the octahedron $O$. Let $1234$ be the middle 4-cycle of $O$ and $t$ and $b$ be the top and bottom vertices. Now, it is easy to see that $B_1:=\{t12, t34, b23\}$ and $B_2:=\{t23, t14, b34\}$ are minimal verifiers. On the other hand, one easily checks that basis exchange fails for $B_1$ and $B_2$. $\square$
There are also many interesting questions one can ask about the framework we have set up for graph colouring verification. For example, we can define the verification number of a graph $G$ to be the minimum size of a verifier (this is the same thing as the size of a minimal verifier if $(K(G), \mathcal{V}(G))$ is a matroid).
Question 8. Among all $n$-vertex graphs, which ones have the largest verification number?
I was not able to find much in the literature, so any comments or references would be appreciated. Thanks for reading and see you in $3 + \epsilon$ months!
Acknowledgements. Parts of this post are based on contributions from François Brunault, Emil Jeřábek and Zack Wolske on MathOverflow and from Ross Kang, Rohan Kapadia, Peter Nelson and Irene Pivotto at Winberie's in Princeton.
Valuations on matroid base polytopes
Posted on October 5, 2014 by Guest Contributor
Guest post by Joseph Kung.
We will give a short account of the $\mathcal{G}$-invariant from the point of view of a classical matroid theorist. The $\mathcal{G}$-invariant is a universal valuation on matroid base polytopes. Much of the theory holds for polymatroids and even more general submodular functions, but we will focus on matroids. This account is based on the work of H. Derksen and A. Fink (singly and jointly). My version differs in details and emphases from theirs. Also, I try to explain how bases of quasi-symmetric functions are used, as far as I can. I may be seriously misrepresenting the work of Derksen and Fink and I apologize in advance if I have done so.
For a finite set $E$, let $\mathbb{R}^E$ be the $|E|$-dimensional real vector space with coordinates labeled by the set $E$, so that the standard basis vectors $e_i,\, i \in E$ form a basis. The (matroid) base polytope $Q(M)$ of the matroid $M(E)$ is the convex polytope in $\mathbb{R}^E$ obtained by taking the convex closure of indicator vectors of bases of $M$, that is,
$$Q(M) = \mathrm{conv}\left\{ \sum_{b \in B} e_b: B \,\,\text{is a basis of}\,\,M \right\}.$$
A (base polytope) decomposition is a decomposition $Q(M) = Q(M_1) \cup Q(M_2)$ where (a) both $Q(M_1)$ and $Q(M_2)$ are base polytopes for matroids $M_1$ and $M_2$, and (b) the intersection $Q(M_1) \cap Q(M_2)$ is a base polytope and a face of the polytopes $Q(M_i)$ and $Q(M_j)$.
A function $v$ defined on base polytopes is a (matroid base polytope) valuation if
$$v(Q(M)) = v(Q(M_1)) + v(Q(M_2)) - v(Q(M_1) \cap Q(M_2))$$
when $Q(M) = Q(M_1) \cup Q(M_2)$ is a base polytope decomposition.
As a base polytope is the convex closure of indicator vectors of bases, any reasonable function defined on matroids which is a sum over bases should be a valuation. In particular the Tutte polynomial (as a sum of monomials defined by internal and external activities over all bases) is a valuation. (See the paper of Ardila, Fink, and Rincon.) In this sense, valuations are generalizations of Tutte polynomials. Derksen defined a valuation, the $\mathcal{G}$-invariant, in the following way: Let $M(E)$ be a rank-$r$ matroid on the set $E$, labeled as $\{1,2,\ldots,d\}$, where $d =|E|$. A permutation $\pi = x_1x_2 \ldots x_d$ of $E$ defines the sequence of non-negative integers
$$(r_1, r_2 - r_1, r_3 - r_2, \ldots, r_d - r_{d-1}),$$
where
$$r_j = r(\{x_1, x_2, \ldots, x_j\}).$$
This sequence is called the rank sequence $r(\pi)$ of the permutation $\pi$. The $\mathcal{G}$-invariant, is defined by
$$\mathcal{G}(M) = \sum_{\pi} b_{r(\pi)},$$
where the sum ranges over all $d!$ permutations of $E$ and $b_{r(\pi)}$ is a basis for quasi-symmetric functions.
We must digress here and explain what quasi-symmetric functions are. A formal power series $f(\underline{x})$ in the variables $x_1,x_2, \ldots$ is quasi-symmetric if $f$ has bounded degree and for a given (finite) sequence of non-negative integers $(\alpha_1, \alpha_2, \ldots, \alpha_k)$, the coefficients of the monomials $x^{\alpha_1}_{i_1} x^{\alpha_2}_{i_2} \cdots x^{\alpha_k}_{i_k}$, where $i_1 < i_2 < \cdots < i_k,$ are the same. Put another way, $f$ is quasi-symmetric if and only if it is a finite linear combination of monomial symmetric functions
$$m_{(\alpha_1, \alpha_2, \ldots, \alpha_k)} := \sum_{i_1 < i_2 < \cdots < i_k} x_{i_1}^{\alpha_1} x^{\alpha_2}_{i_2} \cdots x^{\alpha_k}_{i_k}.$$
In particular, a quasi-symmetric function is determined by an array of coefficients indexed by sequences of non-negative integers and one may choose any convenient basis for quasi-symmetric functions to define the $\mathcal{G}$-invariant. Each basis gives a different function $f(\underline{x})$ but all of them contain the same information about the matroid $M$. Thus, the $\mathcal{G}$-invariant is really an equivalence class of power series, defined up to a choice of basis of quasi-symmetric functions. Alternatively, we can simply think of the $\mathcal{G}$-invariant as an array of coefficients indexed by sequences of non-negative integers which may be transformed by certain change-of-basis matrices.
For matroids, rank sequences are sequences of $0$'s and $1$s with exactly $r$ $1$'s, where $r$ is the rank of the matroid. Using the formula for the rank function of the dual $M^*$, it is immediate that for a given permutation $\pi$ of $E$, the rank sequence of $\pi$ in $M^*$ is the complementary sequence (the sequence obtained by switching $0$'s with $1$'s and conversely) to the rank sequence in $M$. Thus, $\mathcal{G}(M^*)$ can be obtained from $\mathcal{G}(M)$ by a change of basis of quasi-symmetric functions.
The elements of the permutation where the $1$'s occur form a basis of $M$. Thus we can assign a (unique) basis to each rank sequence and the $\mathcal{G}$-invariant is a sum over bases. It is not surprising that it is a base polytope valuation.
Theorem (Derksen and Fink).The $\mathcal{G}$-invariant is a "universal" valuation on matroid base polytopes, in the sense that every base polytope valuation is an "evaluation" of the $\mathcal{G}$-invariant. In particular, the Tutte polynomial is an "evaluation" of the $\mathcal{G}$-invariant.
The proof is complicated. See the paper of Derksen and Fink cited at the end of this note.
As an example, consider the uniform matroid $U_{3,6}$ on the set $\{1,2,3,4,5,6\}$. All $3$-subsets are bases. The permutation $123456$ gives the rank sequence $(1,1,1,0,0,0)$ and so does any other permutation. Hence, using the basis of monomial quasi-symmetric functions,
\mathcal{G}(U_{3,6})=720m_{(1,1,1,0,0,0)}=720\sum_{i_1 < i_2 < i_3} x_{i_1}x_{i_2}x_{i_3}.
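This count is easy to verify by brute force. The sketch below (again my own illustration, with names of my choosing) assumes the matroid is given by its list of bases, computes the rank of a set as the size of its largest intersection with a basis, and tallies the rank sequences of all $d!$ permutations:

```python
from itertools import combinations, permutations
from collections import Counter

def rank(subset, bases):
    # Rank of a set = size of its largest intersection with a basis.
    return max(len(subset & B) for B in bases)

def g_coefficients(E, bases):
    """Coefficient of each rank sequence (0-1 vector) in the G-invariant."""
    coeffs = Counter()
    for pi in permutations(E):
        r = [rank(set(pi[:j + 1]), bases) for j in range(len(E))]
        seq = tuple([r[0]] + [r[j] - r[j - 1] for j in range(1, len(r))])
        coeffs[seq] += 1
    return coeffs

E = list(range(1, 7))
bases = [set(B) for B in combinations(E, 3)]  # U_{3,6}
print(g_coefficients(E, bases))
# Counter({(1, 1, 1, 0, 0, 0): 720})
```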
For comparison,
\begin{multline*}
T(U_{3,6};x,y) = (x-1)^3 + 6(x-1)^2 + 15(x-1) + 20 + 15(y-1) + 6(y-1)^2 + (y-1)^3 \\
= x^3 + 3x^2 + 6x + 6y + 3y^2 + y^3.
\end{multline*}
Here is where I might be totally wrong. The only way I can see of getting $T$ from $\mathcal{G}$ is to do a change of basis and assign different values to basis elements. So "evaluation" as used in the theorem means something more general.
What information does the $\mathcal{G}$-invariant contain? Since almost all matroids are conjecturally paving matroids, a first-order answer might be obtained by calculating the $\mathcal{G}$-invariant for paving matroids. This is a simple calculation. I think R. T. Tugger was the first to do this.
Most of you would know what a paving matroid is, but I need to establish notation. Recall that a (Hartmanis) $k$-partition $\underline{H}$ of the set $E$ is a collection of subsets called blocks satisfying three conditions:
(a) the union of all the blocks equals $E$;
(b) every block has size at least $k$;
(c) every subset of size $k$ in $E$ is contained in exactly one block.
The blocks with more than $k$ elements are non-trivial and those with exactly $k$ elements are trivial. Let $\underline{H}$ be an $(r-1)$-partition. The paving matroid $\mathrm{Pav}(\underline{H})$ is the matroid of rank $r$ with the following flats: the entire set $E$, the blocks of $\underline{H}$, and all subsets of size $r-2$ or less. Roughly speaking, all the dependencies occur at the top.
Proposition. The invariant $\mathcal{G}(\mathrm{Pav}(\underline{H}))$ can be computed from the multiset $\{ |H|: H \,\,\text{is a non-trivial block}\}$.
Proof. Let $|E|=d$ and let $\pi = x_1x_2 \ldots x_d$ be a permutation of $E$. Since every subset of size $r-1$ is independent, the rank sequence of $\pi$ starts with $r-1$ $1$'s. The remaining $1$ occurs in position $i, \, i \ge r.$ If $\{x_1, x_2, \ldots,x_{r-1}\}$ is a trivial block, then $i = r$. If not, then $\{x_1, x_2, \ldots,x_{r-1}\}$ spans a non-trivial block $H$ and $i$ can vary from $r$ to $|H|+1$. Indeed, the index is $i$ if and only if $\{x_1,x_2, \ldots,x_{i-1}\} \subseteq H$ and $x_i \notin H$. Hence, in the sum for the $\mathcal{G}$-invariant, a trivial block contributes
(r-1)!(d-r+1)! m_{(1^r0^{d-r})}.
On the other paw, a non-trivial block contributes
\sum_{i=r}^{|H|+1} \frac {|H|!} {(|H| - i + 1)!}\, (d - |H|)\, (d - i)! \,\, m_{(1^{r-1}0^{i-r}10^{d-i})}.
Here $(1^{r-1}0^{i-r}10^{d-i}) = (1,1,\ldots,1,0,0,\ldots,0,1,0,0,\ldots,0)$ is the $0$-$1$ sequence with $1$'s in the first $r-1$ positions and in position $i$. Note that the number of trivial blocks can be calculated from the sizes of the non-trivial blocks.
As an example, let $P$ be the paving matroid on $\{1,2,3,4,5,6\}$ defined by the $2$-partition $\{1234, 456, 15, 16, 25, 26, 35, 36\}$. Geometrically, $P$ is the rank-$3$ matroid consisting of a $4$-point line $1234$ and a $3$-point line $456$ intersecting at the common point $4$. Then
\mathcal{G}(P) = 48\, m_{(1,1,0,0,1,0)} + 132\, m_{(1,1,0,1,0,0)} + 540\, m_{(1,1,1,0,0,0)}.
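For a quick check, here is a sketch implementing the proposition (illustrative function names; it recovers the number of trivial blocks from the sizes of the non-trivial blocks, as noted above, and reproduces the three coefficients):

```python
from math import comb, factorial

def g_paving(d, r, nontrivial_sizes):
    """Coefficient of m_{(1^{r-1} 0^{i-r} 1 0^{d-i})} in G(Pav(H)), keyed by i."""
    # Trivial blocks are the (r-1)-subsets not inside a non-trivial block.
    trivial = comb(d, r - 1) - sum(comb(h, r - 1) for h in nontrivial_sizes)
    coeffs = {r: trivial * factorial(r - 1) * factorial(d - r + 1)}
    for h in nontrivial_sizes:
        for i in range(r, h + 2):  # i runs from r to |H| + 1
            coeffs[i] = coeffs.get(i, 0) + \
                factorial(h) // factorial(h - i + 1) * (d - h) * factorial(d - i)
    return coeffs

print(g_paving(6, 3, [4, 3]))  # {3: 540, 4: 132, 5: 48}
```

The three coefficients sum to $720 = 6!$, as they must, since each permutation contributes exactly one term.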
The analog of the proposition holds for the Tutte polynomial. For a paving matroid, the $\mathcal{G}$-invariant contains exactly the same information about the matroid as the Tutte polynomial. In particular, the $\mathcal{G}$-invariant and the Tutte polynomial have the same asymptotic power to distinguish matroids. However, the $\mathcal{G}$-invariant can distinguish pairs of matroids the Tutte polynomial cannot. For examples, see the paper of Billera, Jia, and Reiner.
R. T. Tugger conjectures that one can use the argument in the chapter of Brylawski and Oxley in "Matroid Applications" (Exercise 9, p. 198) to show that for any integer $N$, there are at least $N$ non-isomorphic matroids having the same $\mathcal{G}$-invariant.
Although it is a sum over a lot of permutations, calculating the $\mathcal{G}$-invariant feels somehow more elegant than calculating the Tutte polynomial. (Yes, I like the definition!) As an exercise, the reader might imagine calculating $\mathcal{G}(\mathrm{PG}(r-1,q))$. The bases of a projective geometry all behave in the same way, so one can easily write down the terms in the sum for the $\mathcal{G}$-invariant.
For each basis, the sum of the quasi-symmetric functions determined by its rank sequences summarizes in some way how the basis interacts with the other elements of the matroid. This includes its internal and external activity. It would be very interesting to derive the internal and external activities directly from the quasi-symmetric function of a basis. I have no idea how to do this and will be happy if anyone tells me how.
The $\mathcal{G}$-invariant has applications. I am not the person to write about them, but I hope this initial attempt will motivate someone else to tell us about these applications.
Here are four papers out of many I could have cited:
F. Ardila, A. Fink, F. Rincon, Valuations for matroid polytope subdivisions, Canad. J. Math. 62 (2010) 1228-1245.
L.J. Billera, N. Jia, V. Reiner, A quasisymmetric function for matroids, European J. Combin. 30 (2009) 1727-1757.
H. Derksen, Symmetric and quasi-symmetric functions associated to polymatroids, arXiv:0801.4393.
H. Derksen, A. Fink, Valuative invariants for polymatroids, Adv. Math. 225 (2010) 1840-1892.
An introduction to quasi-symmetric functions can be found in the book Enumerative Combinatorics II by Richard Stanley. | CommonCrawl |
Exciton diffusion in two-dimensional metal-halide perovskites
Michael Seitz1,2, Alvaro J. Magdaleno1,2, Nerea Alcázar-Cano1,3, Marc Meléndez1,3, Tim J. Lubbers1,2, Sanne W. Walraven1,2, Sahar Pakdel4, Elsa Prada1,2, Rafael Delgado-Buscalioni1,3 & Ferry Prins1,2
Nature Communications volume 11, Article number: 2035 (2020)
Subjects: Electronic properties and materials; Inorganic LEDs; Photonic devices; Two-dimensional materials
Two-dimensional layered perovskites are attracting increasing attention as more robust analogues to the conventional three-dimensional metal-halide perovskites for both light harvesting and light emitting applications. However, the impact of the reduced dimensionality on the optoelectronic properties remains unclear, particularly regarding the spatial dynamics of the excitonic excited state within the two-dimensional plane. Here, we present direct measurements of exciton transport in single-crystalline layered perovskites. Using transient photoluminescence microscopy, we show that excitons undergo an initial fast diffusion through the crystalline plane, followed by a slower subdiffusive regime as excitons get trapped. Interestingly, the early intrinsic diffusivity depends sensitively on the choice of organic spacer. A clear correlation between lattice stiffness and diffusivity is found, suggesting exciton–phonon interactions to be dominant in the spatial dynamics of the excitons in perovskites, consistent with the formation of exciton–polarons. Our findings provide a clear design strategy to optimize exciton transport in these systems.
Metal-halide perovskites are a versatile material platform for light harvesting1,2,3,4 and light emitting applications5,6, combining the advantages of solution processability with high ambipolar charge carrier mobilities7,8, high defect tolerance9,10,11, and tunable optical properties12,13. Currently, the main challenge in the applicability of perovskites is their poor environmental stability14,15,16,17. Reducing the dimensionality of perovskites has proven to be one of the most promising strategies to yield a more stable performance17,18,19. Perovskite solar cells with mixed two-dimensional (2D) and three-dimensional (3D) phases, for example, have been fabricated with efficiencies above 22%20 and stable performance for more than 10,000 h21, while phase pure 2D perovskite solar cells have been reported with efficiencies above 18%22,23. Likewise, significant stability improvements have been reported for phase pure 2D perovskites as the active layer in light emitting technologies24,25,26,27,28,29. The improved environmental stability in 2D perovskite phases is attributed to a better moisture resistance due to the hydrophobic organic spacers that passivate the inorganic perovskite sheets, as well as an increased formation energy of the material17,18,19,30.
However, the reduced dimensionality of 2D perovskites dramatically affects the charge carrier dynamics in the material, requiring careful consideration in their application in optoelectronic devices31,32,33. 2D perovskites are composed of inorganic metal-halide layers, which are separated by long organic spacer molecules. They are described by their general chemical formula L2[ABX3]n-1BX4, where A is a small cation (e.g. methylammonium, formamidinium), B is a divalent metal cation (e.g. lead, tin), X is a halide anion (chloride, bromide, iodide), L is a long organic spacer molecule, and n is the number of octahedra that make up the thickness of the inorganic layer. The separation into few-atom thick inorganic layers yields strong quantum and dielectric confinement effects34. As a result, the exciton binding energies in 2D perovskites can be as high as several hundreds of meVs, which is around an order of magnitude larger than those found in bulk perovskites35,36,37. The excitonic character of the excited state is accompanied by an effective widening of the bandgap, an increase in the oscillator strength, and a narrowing of the emission spectrum36,37,38. The strongest confinement effects are observed for n = 1, where the excited state is confined to a single B-X-octahedral layer.
Consequently, light harvesting using 2D perovskites relies on the efficient transport of excitons and their subsequent separation into free charges39. This stands in contrast to bulk perovskites in which free charges are generated instantaneously thanks to the small exciton binding energy35. Particularly, with excitons being neutral quasi-particles, the charge extraction becomes significantly more challenging as they cannot be guided to the electrodes through an external electric field40. Excitons need to diffuse to an interface before the electron and hole can be efficiently separated into free charges41. On the other hand, for light emitting applications the spatial displacement is preferably inhibited, as a larger diffusion path increases the risk of encountering quenching sites which would reduce brightness. While charge transport in bulk perovskites has been studied in great detail, the mechanisms that dictate exciton transport in 2D perovskites remain elusive41. Moreover, it is unclear to what extent exciton transport is influenced by variations in the perovskite composition.
Here, we report the direct visualization of exciton diffusion in 2D single-crystalline perovskites using transient photoluminescence microscopy42. This technique allows us to follow the temporal evolution of a near-diffraction-limited exciton population with sub-nanosecond resolution and reveals the spatial and temporal exciton dynamics. We observe two different diffusion regimes. For early times, excitons follow normal diffusion, while for later times a subdiffusive regime emerges, which is attributed to the presence of trap states. Using the versatility of perovskite materials, we study the influence of the organic spacer on the diffusion dynamics of excitons in 2D perovskites. We find that between commonly used organic spacers (phenethylammonium, PEA, and butylammonium, BA), diffusivities and diffusion lengths can differ by one order of magnitude. We show that these changes are closely correlated with variations in the softness of the lattice, suggesting a dominant role for exciton–phonon coupling and exciton–polaron formation in the spatial dynamics of excitons in these materials. These insights provide a clear design strategy to further improve the performance of 2D perovskite solar cells and light emitting devices.
Exciton diffusion imaging
We prepare single crystals of n = 1 phenethylammonium lead iodide (PEA)2PbI4 2D perovskite by drop-casting a saturated precursor solution onto a glass substrate43,44, as confirmed by XRD analysis and photoluminescence spectroscopy (see Methods section for details). Using mechanical exfoliation, we isolate single-crystalline flakes of the perovskite and transfer these to microscopy slides. The single-crystalline flakes have typical lateral sizes of tens to hundreds of micrometers and are optically thick. The use of thick flakes provides a form of self-passivation that prevents the typical fast degradation of the perovskite in ambient conditions.
To measure the temporal and spatial exciton dynamics, we create a near-diffraction-limited exciton population using a pulsed laser diode (λ_ex = 405 nm) and an oil immersion objective (N.A. = 1.3). The image of the fluorescence emission of the exciton population is projected outside the microscope with high magnification (×330), as illustrated in Fig. 1b. By placing a scanning avalanche photodiode (20 µm in size) in the image plane, we resolve the time-dependent broadening of the population with high temporal and spatial resolution. Fig. 1c shows the resulting map of the evolution in space and time of the fluorescence emission intensity of an exciton population in (PEA)2PbI4. The fluorescence emission intensity I(x,t) is normalized at each point in time to highlight the broadening of the emission spot over time. Each time-slice I(x,t_c) is well described by a Voigt function45, from which we can extract the variance σ(t)² of the exciton distribution at each point in time (Fig. 1d). On a timescale of several nanoseconds, the exciton distribution broadens from an initial σ(t = 0 ns) = 171 nm to σ(t = 10 ns) = 448 nm, indicating fast exciton diffusion.
Fig. 1: Diffusion imaging of excitons in two-dimensional perovskites.
a Illustration of the (PEA)2PbI4 crystal structure, showing the perovskite octahedra sandwiched between the organic spacer molecules. b Schematic of the experimental setup. A near-diffraction limited exciton population is generated with a pulsed laser diode. The spatial and temporal evolution of the exciton population is recorded by scanning an avalanche photodiode through the magnified image of the fluorescence I(x,t). c Fluorescence emission intensity I(x,t) normalized at each point in time to highlight the spreading of the excitons. d Cross section of I(x,t) for different times t_c. e Mean-square-displacement of the exciton population over time. Two distinct regimes are present: First, normal diffusion with α = 1 is observed, which is followed by a subdiffusive regime with α < 1. The inset shows a log–log plot of the same data, highlighting the two distinct regimes. Reported errors represent the uncertainty in the fitting procedure for σ(t)².
To analyze the time-dependent broadening of the emission spot in more detail, we study the temporal evolution of the mean-square-displacement (MSD) of the exciton population, given by MSD(t) = σ(t)² − σ(0)². Taking the one-dimensional diffusion equation as a simple approximation, it follows that MSD(t) = 2Dt^α, which allows us to extract the diffusivity D and the diffusion exponent α from our measurement (see Supplementary Note 1)42,45. In Fig. 1e we plot the MSD as a function of time. Two distinct regimes can be observed: For early times (t ≲ 1 ns) a fast linear broadening occurs with α = 1.01 ± 0.01, indicative of normal diffusion, while for later times (t ≳ 1 ns) the broadening becomes progressively slower with α = 0.65 ± 0.01, suggesting a regime of trap state limited exciton transport (see Supplementary Note 2). The two regimes are clearly visible in the log–log representation shown in the inset of Fig. 1e, where different slopes correspond to different α values. From these measurements, a diffusivity of 0.192 ± 0.013 cm² s⁻¹ is found for (PEA)2PbI4. Our diffusivity of single crystalline (PEA)2PbI4 is around an order of magnitude higher than the diffusivity implied by previously reported mobility values from conductivity measurements (μ = 1 cm² V⁻¹ s⁻¹; D = μk_BT/e = 0.025 cm² s⁻¹) of polycrystalline films46. This finding is reasonable as grain boundaries slow down the movement of excitons and conventional methods measure a time-averaged mobility that cannot separate intrinsic diffusion from trap state limited diffusion.
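A minimal sketch of this fit on synthetic data is shown below (variable names and values are illustrative, not the authors' code; note that 0.0192 µm² ns⁻¹ equals 0.192 cm² s⁻¹):

```python
import numpy as np

def fit_msd_power_law(t, msd):
    """Fit MSD(t) = 2*D*t**alpha via linear regression in log-log space."""
    alpha, intercept = np.polyfit(np.log(t), np.log(msd), 1)
    return np.exp(intercept) / 2, alpha  # (D, alpha)

t = np.linspace(0.1, 1.0, 10)   # time in ns (early, trap-free regime)
msd = 2 * 0.0192 * t ** 1.0     # synthetic MSD in um^2, normal diffusion
print(fit_msd_power_law(t, msd))  # ~ (0.0192, 1.0)
```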
Influence of trap states
The role of trap states in perovskite materials is well studied and is generally attributed to the presence of imperfections at the surface of the inorganic layer47. These lower-energy sites lead to a subdiffusive behavior as a subpopulation of excitons becomes trapped. To test the influence of trap states, we have performed diffusion measurements in the presence of a continuous wave (CW) background excitation of varying intensity (Fig. 2). The background excitation leads to a steady state population of excitons, which fill some of the traps and thereby reduce the effective trap density. To minimize the invasiveness of the measurement itself, the repetition rate and fluence were reduced to a minimum (see Supplementary Note 2). In the absence of any background illumination, we find a strongly subdiffusive diffusion exponent of α = 0.48 ± 0.02. As the background intensity is increased, an increasing α is observed, indicative of trap state filling. Ultimately, a complete elimination of subdiffusion (α = 0.99 ± 0.02) is obtained at a background illumination power of 60 mW cm⁻². For comparison, this value corresponds roughly to a 2.5 Sun illumination. Additionally, we observe that the onset of the subdiffusive regime is delayed as more and more trap states are filled, as represented by the increasing tsplit parameter (see Fig. 2b, bottom panel).
Fig. 2: Exciton diffusion with different background excitation intensities.
a Mean-square-displacement of the exciton population for different continuous wave (CW) background intensities. Experimental values are displayed with open markers, while the fit functions (Supplementary Eq. 9), defined through the parameters D, α, and tsplit, are displayed as solid lines. Reported errors represent the uncertainty in the fitting procedure for σ(t)². b Diffusivity D, diffusion exponent α, and the onset of subdiffusive regime tsplit extracted from fits in a. c Theoretical model (Eq. 2, solid lines), and numerical simulation (open markers) for exciton diffusion with different trap densities. Experimental values from a are displayed as shaded areas for comparison. Reported errors represent the standard deviation of 10⁴ Brownian motion simulations. The inset shows the trap densities found with the simulations. Mirror axis of the inset is the sun equivalent of the background illumination intensity (AM1.5 Global with E_photon > E_bandgap).
To gain theoretical insights and quantitative predictions concerning the observed subdiffusive behavior of excitons and its relation with trap state densities, we performed numerical simulations based on Brownian dynamics of individual excitons diffusing in a homogeneously distributed and random trap field (see Supplementary Note 3). In addition, we developed a coarse-grained theoretical model based on continuum diffusion of the exciton concentration (see Supplementary Note 4). The continuum theory predicts an exponential decay of the diffusion coefficient,
$$\frac{1}{2}\frac{\mathrm{d\,MSD}(t)}{\mathrm{d}t} = D(t) = D\,\exp\left(-\frac{D}{\lambda^2}\,t\right)$$
where λ is the average distance between traps. The integral of this expression leads to
$$\mathrm{MSD}(t) = 2\lambda^2\left[1 - \exp\left(-\frac{D}{\lambda^2}\,t\right)\right],$$
which, as shown in Fig. 2c, successfully reproduces both experimental and numerical results and allows us to determine the value of the intrinsic trap state density, yielding 1/λ² = 22 µm⁻² per layer (≈10¹⁶ cm⁻³), which is of the same order of magnitude as previously reported values for bulk perovskites48,49. The inset in Fig. 2c shows the evolution of the effective trap state density 1/λ² with increasing illumination intensity. We note that the exponential decay of Eq. 1 allows for a more intuitive characterization of D(t) by relating the subdiffusion directly to the trap density 1/λ² rather than relying on the subdiffusive exponent α of a power law commonly used in literature42.
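Equation (2) can likewise be fitted to a measured MSD curve to estimate the trap spacing λ. A sketch on synthetic data (illustrative only; the parameter values are chosen to mimic the reported ones):

```python
import numpy as np
from scipy.optimize import curve_fit

def msd_trap(t, D, lam):
    """MSD(t) = 2*lam**2 * (1 - exp(-D*t/lam**2)); lam = mean trap distance."""
    return 2 * lam**2 * (1 - np.exp(-D * t / lam**2))

t = np.linspace(0.05, 10, 80)  # time in ns
rng = np.random.default_rng(1)
msd = msd_trap(t, 0.0192, 0.21) + rng.normal(0, 5e-4, t.size)  # um^2

(D, lam), _ = curve_fit(msd_trap, t, msd, p0=(0.01, 0.1))
print(D, lam, 1 / lam**2)  # diffusivity, trap spacing (um), trap density (um^-2)
```

With λ ≈ 0.21 µm, the fitted trap density 1/λ² comes out near the reported 22 µm⁻².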
Structure-property relations of exciton transport
Importantly, the early diffusion dynamics is unaffected by the trap density and shows normal diffusion (α = 1) for all illumination intensities. This strongly suggests that the early diffusion dynamics is affected neither by energetic disorder, which would result in a sublinear behavior with α < 1, nor by trap states, giving us direct access to the intrinsic exciton diffusivity of the material and allowing us to compare the intrinsic exciton diffusivity between perovskites of different compositions. To explore compositional variations, we substitute phenethylammonium (PEA) with butylammonium (BA), another commonly used spacer molecule for 2D perovskites18,24,25,28,39,50,51.
Fig. 3b displays the MSD of the (BA)2PbI4 perovskite, again showing the distinct transition from normal diffusion to a subdiffusive regime. However, as compared to (PEA)2PbI4, excitons in (BA)2PbI4 are remarkably less mobile, displaying a diffusivity of only 0.013 ± 0.002 cm² s⁻¹, which is over an order of magnitude smaller than that of (PEA)2PbI4 with 0.192 ± 0.013 cm² s⁻¹ (green curve shown in Fig. 3b for comparison). Taking the exciton lifetime into account, the difference in diffusivity results in a reduction in the diffusion length from 236 ± 4 nm for (PEA)2PbI4 to a mere 39 ± 8 nm for (BA)2PbI4 (see Fig. 3c and Supplementary Note 5). These results indicate that the choice of ligand plays a crucial role in controlling the spatial dynamics of excitons in 2D perovskites. We would like to note that the reported diffusion lengths follow the literature convention of diffusion lengths in one dimension, as it is the relevant length scale for device design. The actual 2D diffusion length is greater by a factor of √2.
Fig. 3: Exciton diffusion in (PEA)2PbI4 and (BA)2PbI4.
a (PEA)2PbI4 and (BA)2PbI4 crystal structure along the a crystal axis53,54. b Mean-square-displacement of exciton population over time for (PEA)2PbI4 (dotted line) and (BA)2PbI4 (circles). Inset shows the normalized fluorescence emission intensity I(x,t) for (BA)2PbI4. Reported errors represent the uncertainty in the fitting procedure for σ(t)². c Fractions of surviving excitons (extracted from lifetime data in Supplementary Fig. 7) as a function of net spatial displacement √MSD(t) of excitons for (PEA)2PbI4 (triangles) and (BA)2PbI4 (circles). Reported errors represent the uncertainty in the fitting procedure for σ(t)². d Average atomic displacement Ueq of the chemical elements in (PEA)2PbI4 and (BA)2PbI4. Data was extracted from previously published single crystal X-ray diffraction data53,54. e Diffusivity D as a function of average atomic displacement Ueq for different organic spacers: 4-fluoro-phenethylammonium (4FPEA)65, phenethylammonium (PEA)53, hexylammonium (HA)54, octylammonium (OA)66, decylammonium (DA)66, butylammonium (BA)54. Reported errors represent the standard deviation of the average diffusivity D obtained from multiple single crystalline flakes.
To understand the large difference in diffusivity between (PEA)2PbI4 and (BA)2PbI4, we take a closer look at the structural differences between these two materials. Changing the organic spacer can have a significant influence on the structural and optoelectronic properties of 2D perovskites. Specifically, increasing the cross-sectional area of the organic spacer distorts the inorganic lattice and reduces the orbital overlap between neighboring octahedra, which in turn increases the effective mass of the exciton52. Comparing the octahedral tilt angles of (PEA)2PbI4 and (BA)2PbI4, a larger distortion for the bulkier (PEA)2PbI4 (152.8°) as compared to (BA)2PbI4 (155.1°) is found53,54. The larger exciton effective mass in (PEA)2PbI4 would, however, suggest slower diffusion, meaning a simple effective mass picture for free excitons cannot explain the observed trend in the diffusivity between (PEA)2PbI4 and (BA)2PbI4.
Recently, exciton–phonon interactions have been found to strongly influence exciton dynamics in perovskites31,33. To investigate the possible role of exciton–phonon coupling on exciton diffusion, we first quantify the softness of the lattices of both (PEA)2PbI4 and (BA)2PbI4 by extracting the atomic displacement parameters from their respective single crystal X-ray data55. The atomic displacement of the different atoms of both systems are summarized in Fig. 3d, showing distinctly larger displacements for (BA)2PbI4 as compared to (PEA)2PbI4 in both the organic and inorganic sublattice53,54. The increased lattice rigidity for (PEA)2PbI4 can be attributed to the formation of an extensive network of pi-hydrogen bonds and a more space-filling nature of the aromatic ring, both of which are absent in the aliphatic BA spacer molecule. Qualitatively, a stiffening of the lattice reduces the exciton–phonon coupling and would explain the observed higher diffusivity in (PEA)2PbI4 as compared to (BA)2PbI4. In addition to a softer lattice, we find that (BA)2PbI4 exhibits a larger exciton–phonon coupling strength than (PEA)2PbI4, as confirmed by analyzing the temperature-dependent broadening of the photoluminescence linewidth of the two materials (see Supplementary Note 6)56.
To further test the correlation between lattice softness and diffusivity, we have performed measurements on a wider range of 2D perovskites with different organic spacers. In Fig. 3e, we present the diffusivity as a function of average atomic displacement for each of the different perovskite unit cells. Across the entire range of organic spacers, a clear correlation between the diffusivity and the lattice softness is found, further confirming the dominant role of exciton–phonon coupling in the spatial dynamics of the excited state in 2D perovskites.
In the limit of strong exciton–phonon coupling, the presence of an exciton could potentially cause distortions of the soft inorganic lattice of the perovskite and lead to the formation of exciton–polarons57,58. As compared to a free exciton, an exciton–polaron would exhibit a larger effective mass and, consequently, a lower diffusivity. The softer the lattice, the larger the distortion, and the heavier the polaron effective mass would be59.
Polaron formation can significantly modify the mechanism of transport, in some cases causing a transition from band-like to a hopping type transport59. When short-range deformations of the lattice are dominant, the exciton–polaron is localized within a unit cell of the material and is known as a small polaron. The motion of small polarons occurs through site-to-site hopping and increases with temperature (∂D/∂T > 0). However, in the presence of dominant long-range lattice deformations, large exciton–polarons may form which extend across multiple lattice sites. The diffusion of large polarons decreases with increasing temperature (∂D/∂T < 0), resembling that of band-like free exciton motion, although with a strongly increased effective mass. In Fig. 4, we present temperature-dependent measurements of the diffusivity for both (PEA)2PbI4 and (BA)2PbI4. In both materials a clear negative scaling of the diffusivity with temperature is observed (∂D/∂T < 0), characteristic of band-like transport.
Fig. 4: Temperature-dependent diffusivity in (PEA)2PbI4 (triangles) and (BA)2PbI4 (circles).
Error bars represent the uncertainty of the fit and are smaller than the markers for (PEA)2PbI4.
The observed correlation between diffusivity and lattice softness in combination with band-like transport is in good qualitative agreement with the formation of large exciton–polarons. However, further studies will be needed to provide a more quantitative model that can explain the large differences in diffusivity between the various organic spacers. The correct theoretical description of exciton–phonon coupling and exciton–polarons in 2D perovskites is still the subject of ongoing debate, though the current consensus is that the polar anharmonic lattice of these materials requires a description beyond conventional Fröhlich theory57,58,60. Crucial in this respect will be further spectroscopic investigations of the temperature-dependent optical properties of these materials, which should allow one to better distinguish the influence of exciton–polaron formation from more traditional phonon-scattering mechanisms.
Meanwhile, structural rigidity can be used as a design parameter in these systems for optimized exciton transport characteristics. Taking into account the close correlation between diffusivity and the atomic displacement, this parameter space can be readily explored using available X-ray crystal structure data for many 2D perovskite analogues. While the influence of the organic spacer is expected to be particularly strong in the class of n = 1 2D perovskites, we have observed consistent trends in the n = 2 analogues. Indeed, just like in n = 1, in n = 2 the use of the PEA cation yields higher diffusivities than for BA (see Supplementary Fig. 13). Similarly, the interstitial formamidinium (FA) cation in n = 2 yields higher diffusivity than the methylammonium (MA) cation, consistent with the trend in the atomic displacement parameters. It is important to note, though, that already for n = 2 perovskites a significant free carrier fraction may be present in the perovskites61, suggesting that transport in n > 1 perovskites cannot be assumed to be purely excitonic and needs to be evaluated more rigorously.
From a technological perspective, structural rigidity may play a particularly important role in light emitting devices. Long exciton diffusion lengths can be detrimental to the performance of light emitting applications, as they increase the possibility of encountering a trapping site. From an exciton–polaron perspective, this suggests soft lattices are preferred. At the same time though, Gong et al. highlighted the role of structural rigidity in improving the luminescence quantum yield through a reduced coupling to non-radiative decay pathways55,62. A trade-off therefore exists in choosing the optimal rigidity for bright emission. Meanwhile, for light harvesting applications, long diffusion lengths are essential for the successful extraction of excitons. While strongly excitonic 2D perovskites are generally to be avoided due to the penalty imposed by the exciton binding energy, improving the understanding of the spatial dynamics of the excitonic state may help mitigate this negative impact of the thinnest members of the 2D perovskites in solar harvesting.
In summary, we have studied the spatial and temporal exciton dynamics in 2D metal-halide perovskites of the form L2PbI4. We show that excitons undergo an initial fast diffusion through the crystalline plane, followed by a slower subdiffusive regime as excitons get trapped. Traps can be efficiently filled through a continuous wave background illumination, extending the initial regime where excitons undergo normal diffusion. By varying the organic spacer L we find that the intrinsic diffusivity depends sensitively on the stiffness of the lattice, revealing a clear correlation between the lattice rigidity and the diffusivity. Our results indicate that exciton–phonon interactions dominate the spatial dynamics of excitons in 2D perovskites. Moreover, the observations are consistent with the formation of large exciton–polarons.
During the review process we became aware of a related manuscript by Deng et al.63 using transient-absorption microscopy to study excited-state transport in 2D perovskites, with a focus on the differences in the spatial dynamics as a function of layer thickness (n = 1 to 5).
Growth of single-crystalline flakes
Chemicals were purchased from commercial suppliers and used as received (see Supplementary Methods). Layered perovskites, with the exception of (HA)2PbI4 and (DA)2PbI4, were synthesized under ambient laboratory conditions following the over-saturation techniques43,44. In a nutshell, the precursor salts LI, PbI2, and AI were mixed in a stoichiometric ratio (2:1:0 for n = 1 and 2:2:1 for n = 2) and dissolved in γ-butyrolactone. The solution was heated to 70 °C and more γ-butyrolactone was added (while stirring) until all the precursors were completely dissolved. The resulting solutions were heated to 70 °C and the solvent was left to evaporate. After 2–3 days, millimeter sized crystals formed in the solution, which was subsequently cooled down to room temperature. For this study, we drop cast some of the remaining supersaturated solution on a glass slide, heated it up to 50 °C with a hotplate and after the solvent was evaporated, crystals with crystal sizes of up to several hundred microns were formed. The saturated solution can be stored and re-used to produce freshly grown 2D perovskites within several minutes. We would like to note that drop cast n = 2 solutions form several crystals with different n values. However, n = 2 crystals can be easily isolated during the exfoliation (see next section) and the formation of n = 2 can be favored by preheating the substrate to 50 °C before drop casting.
(HA)2PbI4 and (DA)2PbI4 were synthesized by dissolving PbI2 (100 mg) in HI (800 µl) through heavy stirring and heating the solution to 90 °C. After PbI2 was completely dissolved a stoichiometric amount of the amine was added dropwise to the solution.
Mechanical exfoliation
The perovskite crystals of the thin film were mechanically exfoliated using the Scotch tape method (Nitto SPV 224). The exfoliation guarantees a freshly cleaved and atomically flat surface area for inspection, which is crucial to avoid emission from edge states and guarantee direct contact with the glass substrate. After several exfoliation steps, the crystals were transferred on a glass slide and were subsequently studied through the glass slide with a ×100 oil immersion objective (Nikon CFI Plan Fluor, NA = 1.3). A big advantage of this technique is that the perovskites are encapsulated through the glass slide from one side and by the bulk of the crystal from the other side. It is important to use thick crystals to guarantee good self-encapsulation and to prevent premature degradation of the perovskite flakes from affecting the measurement43.
X-ray diffraction (XRD) was performed with a PANanaltical X'Pert PRO operating at 45 kV and 40 mA using a copper radiation source (λ = 1.5406 Å). The polycrystalline perovskite films were prepared by drop casting the saturated perovskite solutions on a silicon zero diffraction plate.
Temperature-dependent photoluminescence measurements
Perovskite flakes were excited with a 385 nm light emitting diode (Thorlabs) and the emission spectrum was measured using an EMCCD camera coupled to a spectrograph (Princeton Instruments, SpectraPro HRS-300, ProEM HS 1024BX3). Temperature of the flakes was varied with a Peltier element (Adaptive Thermal Management, ET-127-10-13-H1), using a PID temperature controller (Dwyer Instruments, Series 16C-3) connected to a type K thermocouple (Labfacility, Z2-K-1M) for feedback control and a fan for cooling.
Lifetime measurements
Perovskite flakes were excited with a 405 nm laser (PicoQuant LDH-D-C-405, PDL 800-D), which was focused down to a near-diffraction limited spot. The photoluminescence was collected with an APD (Micro Photon Devices PDM, 20 × 20 µm detector size). The laser and APD were synchronized using a timing board for time correlated single photon counting (Pico-Harp 300).
Diffusion measurements
Exciton diffusion was measured following the same procedure as Akselrod et al.42,45. In short, a near diffraction limited exciton population was created using a 405 nm laser (PicoQuant LDH-D-C-405, PDL 800-D) and a ×100 oil immersion objective (Nikon CFI Plan Fluor, NA = 1.3). Fluorescence of the exciton population was then imaged with a total ×330 magnification onto an avalanche photodiode (APD, Micro Photon Devices PDM) with a detector size of 20 µm. The laser and APD were synchronized using a timing board for time correlated single photon counting (Pico-Harp 300). The APD was capturing an effective area of around 60 × 60 nm (= 20 µm/330). The APD was scanned through the middle of the exciton population in 60 or 120 nm steps, recording a time trace in every point. To minimize the degradation of the perovskites through laser irradiation, the perovskite flakes were scanned using an x-y-piezo stage (MCL Nano-BIOS 100), covering an area of 5 × 5 µm. Diffusion measurements were performed with a 40 MHz laser repetition rate and a laser fluence of 50 nJ cm⁻² unless stated otherwise. The time binning of the measurement was set to 4 ps before software binning was applied. For the temperature-dependent measurements, the temperature was varied with a silicon heater mat (RS PRO, 245-499), using a PID temperature controller (Dwyer Instruments, Series 16C-3) connected to a type K thermocouple (Labfacility, Z2-K-1M) for feedback control. Here, a silicon heater mat was chosen over the Peltier element as a Peltier element expands during the heating process and causes mechanical vibrations that lead to drift.
Brownian motion simulations
We have performed Brownian dynamics simulations of a single exciton diffusing in a field of traps, representing ideal (non-interacting) excitons in the dilute limit realized in the experiments. In these simulations, an exciton diffuses freely until it encounters a trap, where it remains immobilized. Free diffusion is modelled using the standard stochastic differential equation for Brownian motion in the Itô interpretation. If r(t) is the position of the exciton in the plane at time t, its displacement Δr over a time Δt is given by,
$$\Delta \mathbf{r} = \sqrt{2D}\,\mathrm{d}\mathbf{W},$$
where D is the free-diffusion coefficient and dW is taken from a Wiener process, such that 〈dWdW〉 = Δt. Traps were scattered throughout the plane following a uniform random distribution. The exciton is considered to be trapped as soon as its location gets closer than R_trap = 1.2 nm to the trap center. The value was taken from estimations of the exciton Bohr radius and corresponds to a trap area of 1.44 nm² (ref. 37). In any case, in the dilute regime, the diffusion is not sensitive to the trap size R_trap, because the trap radius is much smaller than the average separation between traps, R_trap ≪ λ. To numerically integrate the equation of motion, we used a simple second-order-accurate modification of the well-known Euler–Maruyama algorithm: the BAOAB-limit method64. Trajectories were computed for many independent excitons and the data was averaged to determine the MSD as a function of time. While the simulation of the MSD was done in two dimensions, we used the MSD in one dimension to match the experimental conditions: \(\mathrm{MSD}(t) = \frac{1}{2}\left(\mathrm{MSD}_x(t) + \mathrm{MSD}_y(t)\right)\).
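A compact version of such a simulation might look as follows (a sketch only: it uses a plain Euler–Maruyama step rather than the BAOAB-limit integrator, and all names and default values are illustrative):

```python
import numpy as np

def simulate_trapped_diffusion(D=0.0192, trap_density=22.0, r_trap=1.2e-3,
                               dt=0.01, n_steps=500, n_excitons=200,
                               box=10.0, seed=0):
    """2D Brownian motion in a uniform random trap field (units: um, ns).

    Excitons start at the origin and freeze on first contact with a trap.
    Returns the 1D-equivalent MSD(t) = (MSD_x + MSD_y) / 2 at each step.
    For quantitative trapping rates, dt must be small enough that the step
    length sqrt(2*D*dt) is comparable to r_trap; here it is kept coarse.
    """
    rng = np.random.default_rng(seed)
    traps = rng.uniform(-box / 2, box / 2, size=(int(trap_density * box**2), 2))
    pos = np.zeros((n_excitons, 2))
    free = np.ones(n_excitons, dtype=bool)
    msd = np.empty(n_steps)
    for k in range(n_steps):
        pos[free] += np.sqrt(2 * D * dt) * rng.standard_normal((free.sum(), 2))
        # Freeze every free exciton that came within r_trap of a trap center.
        d2 = ((pos[free, None, :] - traps[None, :, :]) ** 2).sum(-1)
        free[np.flatnonzero(free)[(d2 < r_trap**2).any(axis=1)]] = False
        msd[k] = (pos**2).sum(axis=1).mean() / 2
    return msd
```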
Data availability
The data supporting the findings of this study are available within the article and its Supplementary information. Extra data are available upon reasonable request to the corresponding author.
Code availability
Correspondence and requests for codes used in the paper should be addressed to the corresponding author.
Kojima, A., Miyasaka, T., Teshima, K. & Shirai, Y. Organometal halide perovskites as visible-light sensitizers for photovoltaic cells. J. Am. Chem. Soc. 131, 6050–6051 (2009).
Burschka, J. et al. Sequential deposition as a route to high-performance perovskite-sensitized solar cells. Nature 499, 316–319 (2013).
Liu, M., Johnston, M. B. & Snaith, H. J. Efficient planar heterojunction perovskite solar cells by vapour deposition. Nature 501, 395–398 (2013).
Green, M. A., Ho-Baillie, A. & Snaith, H. J. The emergence of perovskite solar cells. Nat. Photonics 8, 506–514 (2014).
Veldhuis, S. A. et al. Perovskite materials for light-emitting diodes and lasers. Adv. Mater. 28, 6804–6834 (2016).
Tan, Z.-K. et al. Bright light-emitting diodes based on organometal halide perovskite. Nat. Nanotechnol. 9, 687–692 (2014).
Stranks, S. D. et al. Electron-hole diffusion lengths exceeding 1 micrometer in an organometal trihalide perovskite absorber. Science 342, 341–344 (2013).
Shi, D. et al. Low trap-state density and long carrier diffusion in organolead trihalide perovskite single crystals. Science 347, 519–522 (2015).
Yin, W. J., Shi, T. & Yan, Y. Unusual defect physics in CH3NH3PbI3 perovskite solar cell absorber. Appl. Phys. Lett. 104, 063903-063904 (2014).
Brandt, R. E., Stevanović, V., Ginley, D. S. & Buonassisi, T. Identifying defect-tolerant semiconductors with high minority-carrier lifetimes: beyond hybrid lead halide perovskites. MRS Commun. 5, 265–275 (2015).
Steirer, K. X. et al. Defect tolerance in methylammonium lead triiodide perovskite. ACS Energy Lett. 1, 360–366 (2016).
Weidman, M. C., Seitz, M., Stranks, S. D. & Tisdale, W. A. Highly tunable colloidal perovskite nanoplatelets through variable cation, metal, and halide composition. ACS Nano 10, 7830–7839 (2016).
Shamsi, J., Urban, A. S., Imran, M., De Trizio, L. & Manna, L. Metal halide perovskite nanocrystals: synthesis, post-synthesis modifications, and their optical properties. Chem. Rev. 119, 3296–3348 (2019).
Jena, A. K., Kulkarni, A. & Miyasaka, T. Halide perovskite photovoltaics: background, status, and future prospects. Chem. Rev. 119, 3036–3103 (2019).
Niu, G., Guo, X. & Wang, L. Review of recent progress in chemical stability of perovskite solar cells. J. Mater. Chem. A 3, 8970–8980 (2015).
Berhe, T. A. et al. Organometal halide perovskite solar cells: Degradation and stability. Energy Environ. Sci. 9, 323–356 (2016).
Yang, S., Fu, W., Zhang, Z., Chen, H. & Li, C. Z. Recent advances in perovskite solar cells: efficiency, stability and lead-free perovskite. J. Mater. Chem. A 5, 11462–11482 (2017).
Smith, I. C., Hoke, E. T., Solis-Ibarra, D., McGehee, M. D. & Karunadasa, H. I. A layered hybrid perovskite solar-cell absorber with enhanced moisture stability. Angew. Chem. Int. Ed. 53, 11232–11235 (2014).
Quan, L. N. et al. Ligand-stabilized reduced-dimensionality perovskites. J. Am. Chem. Soc. 138, 2649–2655 (2016).
Liu, Y. et al. Ultrahydrophobic 3D/2D fluoroarene bilayer-based water-resistant perovskite solar cells with efficiencies exceeding 22%. Sci. Adv. 5, eaaw2543 (2019).
Grancini, G. et al. One-Year stable perovskite solar cells by 2D/3D interface engineering. Nat. Commun. 8, 1–8 (2017).
Yang, R. et al. Oriented quasi-2D perovskites for high performance optoelectronic devices. Adv. Mater. 30, 1804771 (2018).
Ortiz-Cervantes, C., Carmona-Monroy, P. & Solis-Ibarra, D. Two-dimensional halide perovskites in solar cells: 2D or not 2D? ChemSusChem 12, 1560–1575 (2019).
Tsai, H. et al. Stable light-emitting diodes using phase-pure ruddlesden-popper layered perovskites. Adv. Mater. 30, 1704217 (2018).
Xing, J. et al. Color-stable highly luminescent sky-blue perovskite light-emitting diodes. Nat. Commun. 9, 3541 (2018).
Lin, Y. et al. Suppressed ion migration in low-dimensional perovskites. ACS Energy Lett. 2, 1571–1572 (2017).
Zhang, L., Liu, Y., Yang, Z. & Liu, F. S. Two dimensional metal halide perovskites: Promising candidates for light-emitting diodes. J. Energy Chem. 37, 97–110 (2019).
Yuan, M. et al. Perovskite energy funnels for efficient light-emitting diodes. Nat. Nanotechnol. 11, 872–877 (2016).
Congreve, D. N. et al. Tunable light-emitting diodes utilizing quantum-confined layered perovskite emitters. ACS Photonics 4, 476–481 (2017).
Fu, Q. et al. Recent progress on the long-term stability of perovskite solar cells. Adv. Sci. 5, (2018).
Straus, D. B. & Kagan, C. R. Electrons, excitons, and phonons in two-dimensional hybrid perovskites: connecting structural, optical, and electronic properties. J. Phys. Chem. Lett. 9, 1434–1447 (2018).
Mao, L., Stoumpos, C. C. & Kanatzidis, M. G. Two-dimensional hybrid halide perovskites: principles and promises. J. Am. Chem. Soc. 141, 1171–1190 (2019).
Mauck, C. M. & Tisdale, W. A. Excitons in 2D organic–inorganic halide perovskites. Trends Chem. 1, 380–393 (2019).
Katan, C., Mercier, N. & Even, J. Quantum and dielectric confinement effects in lower-dimensional hybrid perovskite semiconductors. Chem. Rev. 119, 3140–3192 (2019).
D'Innocenzo, V. et al. Excitons versus free charges in organo-lead tri-halide perovskites. Nat. Commun. 5, 3586 (2014).
Blancon, J. C. et al. Scaling law for excitons in 2D perovskite quantum wells. Nat. Commun. 9, 2254 (2018).
Papavassiliou, G. C. Three- and low-dimensional inorganic semiconductors. Prog. Solid State Ch. 25, 125–270 (1997).
Chong, W. K., Giovanni, D. & Sum, T.-C. Excitonics in 2D Perovskites. in Halide Perovskites 55–79 (Wiley, 2018).
Luque, A. & Hegedus, S. Handbook of Photovoltaic Science and Engineering. (John Wiley & Sons, Ltd, 2003).
Blancon, J. C. et al. Extremely efficient internal exciton dissociation through edge states in layered 2D perovskites. Science 355, 1288–1292 (2017).
Akselrod, G. M. et al. Visualization of exciton transport in ordered and disordered molecular solids. Nat. Commun. 5, 3646 (2014).
Seitz, M., Gant, P., Castellanos-Gomez, A. & Prins, F. Long-term stabilization of two-dimensional perovskites by encapsulation with hexagonal boron nitride. Nanomaterials 9, 1120 (2019).
Ha, S. T., Shen, C., Zhang, J. & Xiong, Q. Laser cooling of organic-inorganic lead halide perovskites. Nat. Photonics 10, 115–121 (2016).
Akselrod, G. M. et al. Subdiffusive exciton transport in quantum dot solids. Nano Lett. 14, 3556–3562 (2014).
Wright, A. D. et al. Electron-phonon coupling in hybrid lead halide perovskites. Nat. Commun. 7, 11755 (2016).
Wu, X. et al. Trap states in lead iodide perovskites. J. Am. Chem. Soc. 137, 2089–2096 (2015).
Wenger, B. et al. Consolidation of the optoelectronic properties of CH3NH3PbBr3 perovskite single crystals. Nat. Commun. 8, 1–10 (2017).
Xing, G. et al. Low-temperature solution-processed wavelength-tunable perovskites for lasing. Nat. Mater. 13, 476–480 (2014).
Fu, W. et al. Two-dimensional perovskite solar cells with 14.1% power conversion efficiency and 0.68% external radiative efficiency. ACS Energy Lett. 3, 2086–2093 (2018).
Cao, D. H., Stoumpos, C. C., Farha, O. K., Hupp, J. T. & Kanatzidis, M. G. 2D homologous perovskites as light-absorbing materials for solar cell applications. J. Am. Chem. Soc. 137, 7843–7850 (2015).
Lee, J. H. et al. Resolving the physical origin of octahedral tilting in halide perovskites. Chem. Mater. 28, 4259–4266 (2016).
Du, K. Z. et al. Two-dimensional lead(II) halide-based hybrid perovskites templated by acene alkylamines: crystal structures, optical properties, and piezoelectricity. Inorg. Chem. 56, 9291–9302 (2017).
Billing, D. G. & Lemmerer, A. Synthesis, characterization and phase transitions in the inorganic—organic layered perovskite-type research papers. Acta Crystallogr. Sect. B 63, 735–747 (2007).
Gong, X. et al. Electron-phonon interaction in efficient perovskite blue emitters. Nat. Mater. 17, 550–556 (2018).
Rudin, S., Reinecke, T. L. & Segall, B. Temperature-dependent exciton linewidths in semiconductors. Phys. Rev. B Condens. Matter. 42, 11218–11231 (1990).
Zhu, H. et al. Screening in crystalline liquids protects energetic carriers in hybrid perovskites. Science 353, 1409–1413 (2016).
Katan, C., Mohite, A. D. & Even, J. Entropy in halide perovskites. Nat. Mater. 17, 277–279 (2018).
Emin, D. Polarons (Cambridge University Press, 2010).
Guo, Y. et al. Dynamic emission Stokes shift and liquid-like dielectric solvation of band edge carriers in lead-halide perovskites. Nat. Commun. 10, 1175 (2019).
Gélvez-Rueda, M. C. et al. Interconversion between free charges and bound excitons in 2D hybrid lead halide perovskites. J. Phys. Chem. C. 121, 26566–26574 (2017).
He, J., Fang, W. H., Long, R. & Prezhdo, O. V. Increased lattice stiffness suppresses nonradiative charge recombination in MAPbI3 doped with larger cations: time-domain ab initio analysis. ACS Energy Lett. 3, 2070–2076 (2018).
Deng, S. et al. Long-range exciton transport and slow annihilation in two-dimensional hybrid perovskites. Nat. Commun. 11, 1–8 (2020).
Leimkuhler, B. & Matthews, C. Rational construction of stochastic numerical methods for molecular sampling. Appl. Math. Res. eXpress. https://doi.org/10.1093/amrx/abs010 (2012).
Hu, J. et al. Synthetic control over orientational degeneracy of spacer cations enhances solar cell efficiency in two-dimensional perovskites. Nat. Commun. 10, 1276 (2019).
Lemmerer, A. & Billing, D. G. Synthesis, characterization and phase transitions of the inorganic-organic layered perovskite-type hybrids [(CnH2n+1NH3)2PbI4], n = 7, 8, 9 and 10. Dalt. Trans. 41, 1146–1157 (2012).
This work has been supported by the Spanish Ministry of Economy and Competitiveness through The "María de Maeztu" Program for Units of Excellence in R&D (MDM-2014-0377). M.S. acknowledges the financial support of a fellowship from "la Caixa" Foundation (ID 100010434). The fellowship code is LCF/BQ/IN17/11620040. M.S. has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 713673. F.P. acknowledges support from the Spanish Ministry for Science, Innovation, and Universities through the state program (PGC2018-097236-A-I00) and through the Ramón y Cajal program (RYC-2017-23253), as well as the Comunidad de Madrid Talent Program for Experienced Researchers (2016-T1/IND-1209). N.A., M.M. and R.D.B. acknowledges support from the Spanish Ministry of Economy, Industry and Competitiveness through Grant FIS2017-86007-C3-1-P (AEI/FEDER, EU). E.P. acknowledges support from the Spanish Ministry of Economy, Industry and Competitiveness through Grant FIS2016-80434-P (AEI/FEDER, EU), the Ramón y Cajal program (RYC-2011- 09345) and the Comunidad de Madrid through Grant S2018/NMT-4511 (NMAT2D-CM). S.P. acknowledges financial support by the VILLUM FONDEN via the Centre of Excellence for Dirac Materials (Grant No. 11744).
Condensed Matter Physics Center (IFIMAC), Autonomous University of Madrid, 28049, Madrid, Spain
Michael Seitz, Alvaro J. Magdaleno, Nerea Alcázar-Cano, Marc Meléndez, Tim J. Lubbers, Sanne W. Walraven, Elsa Prada, Rafael Delgado-Buscalioni & Ferry Prins
Department of Condensed Matter Physics, Autonomous University of Madrid, 28049, Madrid, Spain
Michael Seitz, Alvaro J. Magdaleno, Tim J. Lubbers, Sanne W. Walraven, Elsa Prada & Ferry Prins
Department of Theoretical Condensed Matter Physics, Autonomous University of Madrid, 28049, Madrid, Spain
Nerea Alcázar-Cano, Marc Meléndez & Rafael Delgado-Buscalioni
Department of Physics and Astronomy, Aarhus University, 8000, Aarhus C, Denmark
Sahar Pakdel
M.S. and F.P. designed this study. M.S. led the experimental work and processing of experimental data. M.S. set up the diffusion measurement technique with the assistance of T.J.L., and S.W.W. A.J.M. and M.S. performed temperature-dependent measurements. M.S. and A.J.M. prepared perovskite materials. N.A., M.M., and R.D.-B. performed theoretical and numerical modelling of exciton transport. M.S, F.P., S.P., and E.P. provided the theoretical interpretation of the intrinsic exciton transport. F.P. supervised the project. M.S. and F.P. wrote the original draft of the paper. All authors contributed to reviewing the paper.
Correspondence to Ferry Prins.
The authors declare no competing interests.
Peer review information Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Seitz, M., Magdaleno, A.J., Alcázar-Cano, N. et al. Exciton diffusion in two-dimensional metal-halide perovskites. Nat Commun 11, 2035 (2020). https://doi.org/10.1038/s41467-020-15882-w
\begin{definition}[Definition:Algebraically Closed Field/Definition 2]
Let $K$ be a field.
$K$ is '''algebraically closed''' {{iff}}:
:Every irreducible polynomial $f$ over $K$ has degree $1$.
\end{definition}
Mice adaptively generate choice variability in a deterministic task
Marwen Belkaid1, Elise Bousseyrol2, Romain Durand-de Cuttoli2, Malou Dongelmans2, Etienne K. Duranté2, Tarek Ahmed Yahia2, Steve Didienne2, Bernadette Hanesse2, Maxime Come2, Alexandre Mourot2, Jérémie Naudé2, Olivier Sigaud1,na1 & Philippe Faure2,na1
Communications Biology volume 3, Article number: 34 (2020)
An Author Correction to this article was published on 31 January 2020
This article has been updated
Can decisions be made solely by chance? Can variability be intrinsic to the decision-maker or is it inherited from environmental conditions? To investigate these questions, we designed a deterministic setting in which mice are rewarded for non-repetitive choice sequences, and modeled the experiment using reinforcement learning. We found that mice progressively increased their choice variability. Although an optimal strategy based on sequence learning was theoretically possible and would be more rewarding, animals used a pseudo-random selection which ensures a high success rate. This was not the case when animals were exposed to a uniform probabilistic reward delivery. We also show that mice were blind to changes in the temporal structure of reward delivery once they learned to choose at random. Overall, our results demonstrate that a decision-making process can self-generate variability and randomness, even when the rules governing reward delivery are neither stochastic nor volatile.
Principles governing random behaviors are still poorly understood, despite well-known ecological examples ranging from vocal and motor babbling in trial-and-error learning1,2 to unpredictable behavior in competitive setups (e.g., prey-versus-predator interactions3 or human competitive games4). Dominant theories of behavior, notably reinforcement learning (RL), rely on exploitation, namely the act of repeating previously rewarded actions5,6. In this context, choice variability is associated with exploration of environmental contingencies. Directed exploration aims at gathering information about environmental contingencies7,8, whereas random exploration introduces variability regardless of the contingencies9,10. Studies have shown that animals are able to produce variable, unpredictable choices11,12, especially when the reward delivery rule changes13,14,15, is stochastic9,16,17 or is based on predictions about their decisions18,19. However, even approaches based on predicting the animal's behavior18,19 retain the possibility of distributing rewards stochastically, for example when no systematic bias in the animal's choice behavior has been found19,20. Thus, because of the systematic use of volatile or probabilistic contingencies, it has remained difficult to experimentally isolate variability generation from environmental conditions. To test the hypothesis that animals can adaptively adjust the randomness of their behavior, we implemented a task where the reward delivery rule is deterministic, predetermined and identical for all animals, but where a purely random choice strategy is successful.
Mice can generate variable decisions in a complex task
Mice were trained to perform a sequence of binary choices in an open-field where three target locations were explicitly associated with rewards delivered through intra-cranial self-stimulation (ICSS) in the medial forebrain bundle. Importantly, mice could not receive two consecutive ICSS at the same location. Thus, they had to perform a sequence of choices16 and, at each location, to choose the next target among the two remaining alternatives (Fig. 1a). In the training phase, all targets had a 100% probability of reward. We observed that after learning, mice alternated between rewarding locations following a stereotypical circular scheme interspersed with occasional changes in direction, referred to as U-turns (Fig. 1b). Once learning was stabilized, we switched to the complexity condition, in which reward delivery was non-stochastic and depended on sequence variability. More precisely, we calculated the Lempel-Ziv (LZ) complexity21 of choice subsequences of size 10 (9 past choices + next choice) at each trial. Animals were rewarded when they chose the target (out of the two options) associated with the highest complexity (given the previous nine choices). Despite its difficulty, this task is fully deterministic. Indeed, mice were asked to move along a tree of binary choices (see Fig. 1a) where some paths ensured 100% rewards. Whether each node was rewarded or not was predetermined. Thus, choice variability could not be imputed to any inherent stochasticity of the outcomes. For each trial, if choosing randomly, the animal had either a 100% or a 50% chance of being rewarded depending on whether the two subsequences of size 10 (= 9 past choices + 1 choice out of 2 options) had equal or unequal complexities. Another way to describe the task is thus to consider all possible situations, not as sequential decisions made by the animal during the task but as the set of all possible subsequences of size 10 of which the algorithm may evaluate the complexity. From this perspective, there is an overall 75% probability of being rewarded if subsequences are sampled uniformly (Fig. 1a). To summarize, theoretically, while a correct estimation of the complexity of the sequence leads to a success rate of 100%, a pure random selection at each step leads to 75% success, and a repetitive sequence (e.g. A,B,C,A,B,C,…) grants no reward.
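To make the reward rule concrete, the sketch below (our own illustration, not the authors' code) implements an LZ76 phrase-counting parser and the resulting reward rule, then estimates by simulation the success rate of a purely random policy. The window convention (the 9 most recent locations plus the candidate) follows the description above; exact complexity values depend on the LZ76 variant used, so the simulated rate should only approximate the theoretical 75%.

```python
import random

def lz76(s):
    """Phrase count of the Lempel-Ziv (1976) parsing (Kaspar-Schuster scheme)."""
    i, k, l, c, k_max, n = 0, 1, 1, 1, 1, len(s)
    while True:
        if s[i + k - 1] == s[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            k_max = max(k, k_max)
            i += 1
            if i == l:                     # a new phrase ends here
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def rewarded_options(window9, current):
    """Options whose 10-symbol subsequence (9 past choices + candidate) reaches
    the maximal LZ complexity; both options are rewarded when complexities tie."""
    options = [t for t in "ABC" if t != current]
    comp = {o: lz76(window9 + o) for o in options}
    best = max(comp.values())
    return [o for o in options if comp[o] == best]

def random_policy_success(n_trials=50_000, seed=1):
    rng = random.Random(seed)
    seq = ["A"]
    while len(seq) < 10:                   # build a valid starting history
        seq.append(rng.choice([t for t in "ABC" if t != seq[-1]]))
    hits = 0
    for _ in range(n_trials):
        good = rewarded_options("".join(seq[-9:]), seq[-1])
        choice = rng.choice([t for t in "ABC" if t != seq[-1]])
        hits += choice in good
        seq.append(choice)
    return hits / n_trials

print(random_policy_success())             # close to 0.75
```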
Fig. 1: Mice generate unpredictable decisions.
a Left: task setting and complexity algorithm for reward delivery (see text). Right: tree structure of the task and reward distribution. b Typical trajectories in training (T) and complexity (C) conditions. c Increase of the success rate over sessions in the complexity setting. Mice improved their performance in the first sessions (c01 versus c05, T = 223.5, p = 0.015, Wilcoxon test) then reached a plateau (c05 versus c10, t(25) = −0.43, p = 0.670, paired t-test) close to the theoretical 75% success rate of random selection (c10, t(25) = −1.87, p = 0.073, single sample t-test). The shaded area represents a 95% confidence interval. Inset, linear regressions of performance increase for individual mice (gray lines) and average progress (blue line). d Increase of the behavior complexity over sessions: the NLZcomp measure of complexity increased in the beginning (training versus c01, T = 52, p = 0.0009, Wilcoxon test, c01 versus c05, t(26) = −2.67, p = 0.012, paired t-test) before reaching a plateau (c05 versus c10, T = 171, p = 0.909, Wilcoxon test). The average complexity reached by animals is lower than 1 (c10, t(25) = −9.34, p = 10⁻⁹, single sample t-test), the value corresponding to the complexity of random sequences. The RQA ENT entropy-based measure of complexity decreased over sessions (training versus c01, t(26) = 2.81, p = 0.009, paired t-test, c01 versus c05, T = 92, p = 0.019, Wilcoxon test, c05 versus c10, T = 116, p = 0.13, Wilcoxon test). The rate of U-turns increased over sessions (training versus c01, t(26) = −2.21, p = 0.036, c01 versus c05, t(26) = −3.07, p = 0.004, paired t-test, c05 versus c10, T = 75, p = 0.010, Wilcoxon test). Error bars represent 95% confidence intervals. e Correlation between individual success rate and complexity of mouse sequences. Also noteworthy is the decrease in data dispersion in session c10 compared to c01. N = 27 in all sessions except c10 where N = 26.
Unlike the stereotypical circular scheme observed during training, at the end of the complexity condition choice sequences became more variable (Fig. 1b). We found that mice progressively increased the variability of their choice sequences and thus their success rate across sessions (Fig. 1c). This increased variability in the generated sequences was demonstrated by an increase in the normalized LZ-complexity measure (hereafter NLZcomp) of the session sequences, a decrease in an entropy measure based on recurrence plot quantification and an increase in the percentage of U-turns (Fig. 1d). Furthermore, in the last session, 65.5% of the sequences were not significantly different from surrogate sequences generated randomly (Supplementary Fig 1a). The success rate was correlated with the NLZcomp of the entire session of choice sequences (Fig. 1e), suggesting that mice increased their reward through increased variability in their choices. The increase in success rate was associated with an increase of the percentage of U-turns (Fig. 1d), yet mice settled at a suboptimal U-turn rate of 30%, below the 50% U-turn rate ensuring 100% reward (Supplementary Fig 1b).
Computational modeling indicates the use of random strategy
From a behavioral point of view, mice thus managed to increase their success rate in a highly demanding task. They did not achieve 100% success but reached performances that indicate a substantial level of variability. Given that the task is fully deterministic, the most efficient strategy would be to learn and repeat one (or a subset) of the 10-choice long sequences that are always rewarded. This strategy ensures the highest success rate but incurs a tremendous memory cost. On the other hand, a purely random selection is another appealing strategy since it is less costly and leads to about 75% of reward. To differentiate between the two strategies and better understand the computational principles underlying variability generation in mice, we examined the ability of a classical RL algorithm to account for the mouse decision-making process under these conditions.
As in classical reinforcement learning, state-action values were learned using the Rescorla-Wagner rule22 and action selection was based on a softmax policy5 (Fig. 2a; see "Methods"). Two adaptations were applied: (i) rewards were discounted by a U-turn cost κ in the utility function in order to reproduce mouse circular trajectories in the training phase; (ii) states were represented as vectors in order to simulate mouse memory of previous choices. By defining states as vectors including the history of previous locations instead of the current location alone, we were able to vary the memory size of simulated mice and to obtain different solutions from the model accordingly. We found that, with no memory (i.e. state = current location), the model learned equal values for both targets in almost all states (Fig. 2b). In contrast, and in agreement with classical RL, with the history of the nine last choices stored in memory, the model favored the rewarded target in half of the situations by learning higher values (approximately 90 vs 10%) associated with rewarded sequences of choices (Fig. 2b). This indicates that classical RL can find the optimal solution of the task if using a large memory. Furthermore, choosing randomly was dependent not only on the values associated with current choices, but also on the softmax temperature and the U-turn cost. The ratio between these two hyperparameters controls the level of randomness in action selection (see "Methods"). Intuitively, a high level of randomness leads to high choice variability and sequence complexity. But interestingly, the randomness hyperparameter had opposite effects on the model behavior with small and large memory sizes. While increasing the temperature always increased the complexity of choice sequences, it increased the success rate for small memory sizes but decreased it for larger memories (Fig. 2c). A boundary between the two regimes was found between memory sizes of 3 and 4.
Fig. 2: Computational modeling suggests a memory-free pseudo-random selection behind mice choice variability.
a Schematic illustration of the computational model fitted to mouse behavior. b Repartition of the values learned by the model with memory size equal to 0 or 9. c Influence of increased randomness on success rate and complexity for various memory sizes. Each line describes the trajectory followed by a model with a certain memory size (see color scale) when going from a low to high level of randomness (defined as τ/κ). Red and blue dots represent experimental data of mice in the last training and complexity sessions, respectively. d Model fitting results. With an increase of randomness and a small memory, the model fits the increase in mice performance. The shaded areas represent values of the 15 best parameter sets. Dark lines represent the average randomness value (continuous values) and the best fitting memory size (discrete values), respectively. e Schematic of ambiguous state representations and simulation results. The main simulations rely on an unambiguous representation of states in which each choice sequence is represented by one perfectly recognized code. With ambiguous states, the same sequence can be encoded by various representations. In the latter case, the model best fits mouse performance with a smaller memory (null, weak and medium ambiguity, H = 27.21, p = 10⁻⁶, Kruskal–Wallis test, null versus weak, U = 136, p = 0.006, weak versus med, U = 139, p = 0.002, Mann–Whitney test) and with a higher learning rate (null, weak, and medium ambiguity, H = 7.61, p = 0.022, Kruskal–Wallis test, null versus weak, U = 45.5, p = 0.016, null versus med, U = 54, p = 0.026, weak versus med, U = 101, p = 0.63, Mann–Whitney test) but a similar exploration rate (null, weak and medium ambiguity, H = 3.64, p = 0.267, Kruskal–Wallis test). Gray dots represent the 15 best fitting parameter sets. White dots represent the best fit in case of a discrete variable (memory) while black dots represent the average in case of continuous variables (temperature and learning rate). N = 15.
Upon optimization of the model to fit mouse behavior, we found that their performance improvement over sessions was best accounted for by an increase of choice randomness using a small memory (Fig. 2d). This model captured mouse learning better than when using fixed parameters throughout sessions (Bayes factor = 3.46; see "Methods", and Supplementary Fig. 2a, b). The model with a memory of size 3 best reproduced mouse behavior (Fig. 2d), but only slightly better than versions with smaller memories (Supplementary Fig 2c). From a computational perspective, one possible explanation for the fact that a memory of size 1, although theoretically sufficient, fits worse than a memory of size 3 is that state representation is overly simplified in the model. Accordingly, altering the model's state representation to make it more realistic should reduce the size of the memory needed to reproduce mouse performance. To test this hypothesis, we used a variant of the model in which we manipulated state representation ambiguity: each of the locations {A, B, C} could be represented by n ≥ 1 states, with n = 1 corresponding to unambiguous states (see "Methods", and Fig. 2e). As expected, the model fitted better with a smaller memory as representation ambiguity was increased (Fig. 2e). We also found that the best fitting learning rate was higher with ambiguous representations while the randomness factor remained unchanged regardless of ambiguity level (Fig. 2e). This corroborates that the use of additional memory capacity by the model is due to the model's own limitations rather than an actual need to memorize previous choices. Hence, this computational analysis overall suggests that mice adapted the randomness parameter of their decision-making system to achieve more variability over sessions rather than memorizing rewarded choice sequences. This conclusion was further reinforced by a series of behavioral arguments, detailed below, supporting the lack of memorization of choice history in their strategy.
Mice choose randomly without learning the task structure
We first looked for evidence of repeated choice patterns in mouse sequences using a Markov chain analysis (see "Methods"). We found that the behavior at the end of the complexity condition was Markovian (Fig. 3a). In other words, the information about the immediately preceding transition (i.e. to the left or to the right) was necessary to determine the following one (e.g. P(L) ≠ P(L|L)) but looking two steps back was not informative about future decisions (e.g. P(L|LL) ≈ P(L|L)) (see "Methods", Markov Chain Analysis). The analysis of the distribution of subsequences of length 10 (see "Methods") provides additional evidence of the lack of structure in the animals' choice sequences. Indeed, while at the end of the training mice exhibited a peaky distribution with a strong preference for the highly repetitive circular patterns and their variants, the distribution was dramatically flattened under the complexity condition (Fig. 3b), demonstrating that mouse behavior is much less structured in this setting. Furthermore, we tested whether mice used a win-stay-lose-switch strategy18. Indeed, mice could have used this heuristic strategy when first confronted with the complexity condition after a training phase in which all targets were systematically rewarded. Changing directions in the absence of reward could have introduced enough variability in the animals' sequence to improve their success rate. Yet, we found that being rewarded (or not) had no effect on the next transition, neither at the beginning nor the end of the complexity condition (Fig. 3c; see "Methods"), thus eliminating another potential form of structure in mouse behavior under the complexity rule.
Fig. 3: Behavioral evidence of the absence of memorization in mouse choices.
a Tree representation of the Markovian structure of mouse behavior in session c10 (N = 26). In the expression of probabilities, P(X) refers to P(L) (L, left) or P(R) (R, right), whose repartition is illustrated in the horizontal bars (respectively in orange and blue). Dashed areas inside the bars represent overlapping 95% confidence intervals. The probability of a transition (i.e. to the left or to the right) is different from the probability of the same transition given the previous one (p < 0.05, paired t-test, see "Methods" for detailed analysis). However, the probability given two previous transitions is not different from the latter (p > 0.05, paired t-test, see "Methods" for detailed analysis). b Distribution of subsequences of length 10. c Absence of influence of rewards on mouse decisions. P(F) and P(U) respectively refer to the probabilities of going forward (e.g. A → B → C) and making a U-turn (e.g. A → B → A). These probabilities were not different from the conditional probabilities given that the previous choice was rewarded or not (p > 0.05, Kruskal–Wallis test, see "Methods" for detailed analysis). This means that the change in mouse behavior under the complexity condition was not stereotypically driven by the outcome of their choices (e.g. "U-turn if not rewarded"). Error bars in b represent 95% confidence intervals. N = 34 in c01, N = 38 in c02, and N = 52 in c10.
To further support the notion that mice did not actually memorize rewarded sequences to solve the task, we finally performed a series of experiments to compare the animals' behavior under the complexity rule and under a probabilistic rule in which all targets were rewarded with a 75% probability (the same frequency reached at the end of the complexity condition). We first analyzed mouse behavior when the complexity condition was followed by the probabilistic condition (Group 1 in Fig. 4a). We hypothesized that, if animals chose randomly at each node in the complexity setting (and thus did not memorize and repeat specific sequences), they would not detect the change of the reward distribution rule when switching to the probabilistic setting. In agreement with our assumption, we observed that as we switched to the probabilistic condition, animals did not modify their behavior although the optimal strategy would have been to avoid U-turns, as observed in the 100% reward setup used for training (Fig. 4b and Supplementary Fig 3a). Hence, after the complexity setting, mice were likely stuck in a "random" mode given that the global statistics of the reward delivery were conserved. In contrast, when mice were exposed to the probabilistic distribution of reward right after the training session (Group 2 in Fig. 4a), they slightly changed their behavior but mostly stayed in a circular pattern with few U-turns and low sequence complexity (Fig. 4b and Supplementary Fig 3a). Thus, animals from Group 2 exhibited lower sequence complexity and U-turn rate in the probabilistic condition than animals from Group 1, whether in the complexity or the probabilistic condition (Fig. 4c). The distribution of patterns of length 10 in the sequences performed by animals from Group 2 during the last probabilistic session shows a preference for repetitive circular patterns very similar to that observed at the end of the training, contrasting with the sequences performed by animals from Group 1 (Fig. 4d, e). A larger portion of sequences performed by animals from Group 1 were not different from surrogate sequences generated randomly in comparison with animals from Group 2 (Supplementary Fig 3b). Last, if the sequences performed by mice from Group 2 had been executed under the complexity rule, these animals would have obtained a lower success rate than animals from Group 1 in the complexity condition (Supplementary Fig. 3c).
Fig. 4: Comparison of mice behavior under the complexity condition and a probabilistic condition.
a Experimental setup and typical trajectories under the two conditions. For a first group of mice (G1), the complexity condition was followed by a probabilistic condition. For a second group (G2), the probabilistic condition was experienced right after training. Under the probabilistic condition all targets were rewarded with a 75% probability. b G1 and G2 mice behavior in the probabilistic setting compared to the end of the preceding condition (resp. complexity and training). The U-turn rate, NLZcomp complexity, and RQA ENT measure remain unchanged for G1 (pooled "end vc", "beg. p75" and "end p75", U-turn rate, H = 4.22, p = 0.120, Complexity, H = 0.90, p = 0.637, RQA ENT, H = 4.57, p = 0.101, Kruskal–Wallis test) and for G2 (pooled "end tr", "beg. p75", and "end p75", U-turn rate, H = 5.68, p = 0.058, Complexity, H = 4.10, p = 0.128, RQA ENT, H = 2.66, p = 0.073, Kruskal–Wallis test). c Comparison of G2 behavior in the probabilistic setting with G1 behavior under the complexity and the probabilistic conditions. G1 mice exhibit higher sequence complexity and U-turn rate than G2 under both the complexity condition (G1-cplx versus G2-p75, Complexity, pooled "beg", t(136) = 2.99, p = 0.003, pooled "end", t(136) = 4.72, p = 7 × 10⁻⁶, Welch t-test, U-turn, pooled "beg", U = 2866.5, p = 0.015, pooled "end", U = 3493, p = 10⁻⁷, Mann–Whitney test) and the probabilistic condition (G1-p75 versus G2-p75, Complexity, pooled "beg", U = 1375, p = 0.005, Mann–Whitney test, pooled "end", t(91) = 2.92, p = 0.004, t-test, U-turn, pooled "beg", U = 1478, p = 0.0003, pooled "end", U = 1424, p = 0.001, Mann–Whitney test). N = 80 for G1-cplx, N = 36 for G1-p75, N = 54 for G2-p75. d Distribution of subsequences of length 10 performed by G1 and G2 animals in the last sessions of the training, the complexity (same as in Fig. 3b), and the probabilistic conditions. e Cumulative distribution of ranked patterns of length 10.
In summary, mice behavior under the probabilistic condition changed markedly depending on the preceding condition and the strategy that the animal was adopting. This further supports our initial claim that stochastic experimental setups make it difficult to unravel the mechanisms underlying random behavior generation.
The deterministic nature of the complexity rule used in our experiments makes it possible to categorize animals' behavior into one of three possible strategies (i.e. repetitive, random, or optimal based on sequence learning). This is crucial in understanding the underlying cognitive process leveraged by the animals. Importantly, this should not be interpreted as implying that animals were aware of the existence of these possible strategies. Mice had no way of discovering that a 100% success rate could be obtained with optimal sequence learning before ever reaching such a level of performance. In fact, we postulated that the optimal behavior would be too difficult for the animals to implement and that they would turn to random selection instead. Overall, our results indicate that this is the case, as we found no evidence of sequence memorization nor any behavioral pattern that might have been used by mice as a heuristic to solve the complex task.
Whether and how the brain can generate random patterns has always been puzzling23. In this study, we addressed two fundamental aspects of this matter: the implication of memory processes and the dependence upon external (environmental) factors. Regarding memory, one hypothesis holds that in humans, the process of generating random patterns leverages memory24, for example to ensure equal usage of all responses25. Such a procedure could indeed render choices uniformly distributed but is also very likely to produce structured sequences (i.e. dependence upon previous choices). A second hypothesis suggests that the lack of memory may help eliminate counterproductive biases26,27. Our experiments revealed neither sequence learning nor structure, thus supporting the latter hypothesis and the notion that the brain is able to effectively achieve high variability by suppressing biases and structure, at least in some contexts. The second aspect is the degree of dependence upon external, environmental factors. Exploration and choice variability are generally studied by introducing stochasticity and/or volatility in environmental outcomes16,17,18,19. However, such conditions make it difficult to interpret the animal's strategy and to know whether the observed variability in mouse choices is inherited from the statistics of the behavioral task or not. In this work, we took a step further toward understanding the processes underlying the generation of variability per se, independently of environmental conditions. Confronted with a deterministic task which nonetheless favors complex choice sequences, mice avoided repetitions by engaging in a behavioral mode where decisions were random and independent from their reward history. Animals adaptively tuned their decision-making parameters to increase choice randomness, which suggests an internal process of randomness generation.
Male C57BL/6J (WT) mice obtained from Charles River Laboratories France (L'Arbresle Cedex, France) were used. Mice arrived at the animal facility at 8 weeks of age, and were housed individually for at least 2 weeks before the electrode implantation. Behavioral tasks started one week after implantation to ensure full recovery. Since intracranial self-stimulation (ICSS) does not require food deprivation, all mice had ad libitum access to food and water except during behavioral sessions. The temperature (20–22 °C) and humidity were automatically controlled and a 12/12 h light–dark cycle (lights on at 8:30 a.m.) was maintained in the animal facility. All experiments were performed during the light cycle, between 09:00 a.m. and 5:00 p.m. Experiments were conducted at Sorbonne University, Paris, France, in accordance with the local regulations for animal experiments as well as the recommendations for animal experiments issued by the European Council (directives 219/1990 and 220/1990).
Mice were introduced into a stereotaxic frame and implanted unilaterally with bipolar stimulating electrodes for ICSS in the medial forebrain bundle (MFB, anteroposterior = 1.4 mm, mediolateral = ±1.2 mm, from the bregma, and dorsoventral = 4.8 mm from the dura). After recovery from surgery (1 week), the efficacy of electrical stimulation was verified in an open field with an explicit square target (side = 1 cm) at its center. Each time a mouse was detected in the area (D = 3 cm) of the target, a 200-ms train of twenty 0.5-ms biphasic square waves pulsed at 100 Hz was generated by a stimulator. Mice self-stimulating at least 50 times in a 5 min session were kept for the behavioral sessions. In the training condition, ICSS intensity was adjusted so that mice self-stimulated between 50 and 150 times per session at the end of the training (ninth and tenth session), then the current intensity was kept the same throughout the different settings.
Experiments were performed in a 1-m diameter circular open-field with three explicit locations on the floor. Experiments were performed using a video camera, connected to a video-track system, out of sight of the experimenter. Home-made software (LabVIEW, National Instruments) tracked the animal, recorded its trajectory (20 frames per second) for 5 min and sent TTL pulses to the ICSS stimulator when appropriate (see below). Mice were trained to perform a sequence of binary choices between two out of three target locations (A, B, and C) associated with ICSS rewards. In the training phase, all targets had a 100% probability of reward.
Complexity task
In the complexity condition, reward delivery was determined by an algorithm that estimated the grammatical complexity of animals' choice sequences. More specifically, at a trial in which the animal was at target location A and had to choose between B and C, we compared the LZ-complexity21 of the subsequences composed of the nine past choices and B or C (last nine choices concatenated with each of the two options). Both choices were rewarded if those subsequences were of equal complexity. Otherwise, only the option making the subsequence of highest complexity was rewarded. Given that the reward delivery is deterministic, the task can be seen as a decision tree in which some paths ensure 100% rewards. From a local perspective, for each trial, the animal has either a 100% or a 50% chance of reward, respectively when the evaluated subsequences of size 10 have equal or unequal complexities. Considering all these possible sequences, 75% of the trials would be rewarded if animals were to choose randomly.
Measures of choice variability
Two measures of complexity were used to analyze mouse behavior. First, the normalized LZ-complexity (referred to as NLZcomp or simply complexity throughout the paper), which corresponds to the LZ-complexity divided by the average LZ-complexity of 1000 sequences of the same length generated randomly (surrogates) with the constraint that two consecutive characters could not be equal, as in the experimental setup. NLZcomp is small for a highly repetitive sequence and close to 1 for uncorrelated, random signals. Second, the entropy of the frequency distribution of diagonal line lengths (noted RQA ENT), taken from recurrence quantification analysis (RQA). RQA is a family of methods in which the dynamics of complex systems are studied using recurrence plots (RP)28,29, where diagonal lines illustrate recurrent patterns. Thus, the entropy of diagonal lines reflects the deterministic structure of the system and is smaller for uncorrelated, random signals. RQA was measured using the RecurrencePlot Python module of the "pyunicorn.timeseries" package.
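As an illustration of the NLZcomp normalization (again our own sketch, reusing the lz76 parser from the earlier code block), the surrogate sequences are drawn with the same no-immediate-repeat constraint as the task:

```python
import random

def nlz_comp(seq, n_surrogates=1000, seed=0):
    """NLZcomp: LZ complexity of seq divided by the mean LZ complexity of
    random surrogates of equal length with no immediate repeats.
    Assumes the lz76() parser defined in the earlier sketch."""
    rng = random.Random(seed)
    alphabet = sorted(set(seq))
    mean_c = 0.0
    for _ in range(n_surrogates):
        s = [rng.choice(alphabet)]
        while len(s) < len(seq):
            s.append(rng.choice([a for a in alphabet if a != s[-1]]))
        mean_c += lz76("".join(s)) / n_surrogates
    return lz76(seq) / mean_c

# A highly repetitive session sequence scores well below 1:
print(nlz_comp("ABC" * 60))
```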
Computational models
The task was represented as a Markov Decision Process (MDP) with three states s ∊ {A, B, C} and three actions a ∈ {GoToA, GoToB, GoToC}, respectively, corresponding to the rewarded locations and the transitions between them. State-action values Q(s, a) were learned using the Rescorla-Wagner rule22:
$$\Delta \mathcal{Q}(\mathbf{s}_t, a_t) = \alpha \left( \mathcal{U}_{t+1} - \mathcal{Q}(\mathbf{s}_t, a_t) \right)$$
where \(\mathbf{s}_t = [S_t, S_{t-1}, \ldots, S_{t-m}]\) is the current state, which may include the memory of up to the m-th past location, \(a_t\) the current action, α the learning rate and \(\mathcal{U}\) the utility function defined as follows:
$$\mathcal{U}_{t+1} = \begin{cases} (1-\kappa)\, r_{t+1} & \text{if } s_{t+1} = s_{t-1} \\ r_{t+1} & \text{otherwise} \end{cases}$$
where r is the reward function and κ the U-turn cost parameter modeling the motor cost or any bias against the action leading the animal back to its previous location. The U-turn cost was necessary to reproduce mouse stereotypical trajectories at the end of the training phase (see Supplementary Fig. 2).
Action selection was performed using a softmax policy, meaning that in state \(\mathbf{s}_t\) the action \(a_t\) is selected with probability:
$$P(a_t \mid \mathbf{s}_t) = \frac{e^{\mathcal{Q}(\mathbf{s}_t, a_t)/\tau}}{\sum_a e^{\mathcal{Q}(\mathbf{s}_t, a)/\tau}}$$
where τ is the temperature parameter. This parameter reduces the sensitivity to differences in action values, thus increasing the amount of noise or randomness in decision-making. The U-turn cost κ has the opposite effect since it represents a behavioral bias and constrains choice randomness. We refer to the hyperparameter defined as ρ = τ/κ as the randomness parameter.
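As a worked illustration of why this ratio governs randomness (our own derivation, under simplifying assumptions): suppose learning has converged so that, in a given state, the forward option is valued at \(\mathcal{Q}_F \approx r\) while the U-turn option is valued at \(\mathcal{Q}_U \approx (1-\kappa)\, r\), its reward being discounted by the U-turn cost. The softmax probability of a U-turn is then

$$P(\mathrm{U}) = \frac{e^{(1-\kappa) r/\tau}}{e^{(1-\kappa) r/\tau} + e^{r/\tau}} = \frac{1}{1 + e^{\kappa r/\tau}},$$

which, for a fixed reward magnitude r, depends on τ and κ only through their ratio: ρ → 0 yields P(U) → 0 (stereotyped circular trajectories), whereas ρ → ∞ yields P(U) → 1/2 (maximally random binary choices).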
In the version referred to as BasicRL (see Supplementary Fig. 2), we did not include any memory of previous locations nor any U-turn cost. In other words, m = 0 (i.e. st = [st]) and κ = 0.
To manipulate state representation ambiguity (see Fig. 2), each of the locations {A, B, C} could be represented by n ≥ 1 states. For simplicity, we used n = 1, 2, and 3 for all locations for what we referred to as 'null', 'low', and 'med' levels of ambiguity. This allowed us to present a proof of concept regarding the potential impact of using a perfect state representation in our model.
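The sketch below summarizes how such a model can be implemented (a minimal, hypothetical implementation consistent with the equations above, not the authors' released code); the memory and n_codes parameters correspond to the memory size m and the ambiguity level n:

```python
import math
import random
from collections import defaultdict

class ChoiceModel:
    """Tabular Q-learning over states made of the last (m+1) locations,
    with a U-turn-discounted utility and softmax action selection."""

    LOCS = "ABC"

    def __init__(self, alpha=0.1, tau=0.2, kappa=0.5, memory=3, n_codes=1, seed=0):
        self.alpha, self.tau, self.kappa = alpha, tau, kappa
        self.memory, self.n_codes = memory, n_codes   # m and ambiguity level n
        self.rng = random.Random(seed)
        self.Q = defaultdict(float)
        self.history = ["A"]                          # visited locations

    def _state(self):
        # Each location maps to one of n_codes interchangeable codes, so with
        # n_codes > 1 the same choice history can yield several state keys.
        past = self.history[-(self.memory + 1):]
        return tuple((loc, self.rng.randrange(self.n_codes)) for loc in past)

    def choose(self):
        s = self._state()
        options = [l for l in self.LOCS if l != self.history[-1]]
        weights = [math.exp(self.Q[(s, a)] / self.tau) for a in options]
        x = self.rng.random() * sum(weights)          # softmax sampling
        for a, w in zip(options, weights):
            x -= w
            if x <= 0:
                return s, a
        return s, options[-1]

    def update(self, s, a, reward):
        # Utility: discount the reward by kappa when the move is a U-turn.
        u_turn = len(self.history) >= 2 and a == self.history[-2]
        utility = (1 - self.kappa) * reward if u_turn else reward
        self.Q[(s, a)] += self.alpha * (utility - self.Q[(s, a)])
        self.history.append(a)
```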
Model fitting
The main model-fitting results presented in this paper were obtained by fitting the behavior of the mice under training and complexity conditions session by session independently. This process aimed to determine which values of the two hyperparameters m and ρ = τ/κ make the model behave as mice in terms of success rate (i.e. percentage of rewarded actions) and complexity (i.e. variability of decisions). Our main goal was to decide between the two listed strategies that can solve the task: repeating rewarded sequences or choosing randomly. Therefore, we momentarily put aside the question of learning speed and only considered the model behavior after convergence. α was set to 0.1 in these simulations.
Hyperparameters were selected through random search30 (see ranges listed in Supplementary Table 1). The model was run for 2 × 10⁶ iterations for each parameter set. The fitness score with respect to average mouse data at each session was calculated as follows:
$${\mathrm{fitness}} = 1 - D_{{\mathrm{session}}}$$
$$\mathrm{with}\quad D_{\mathrm{session}} = \frac{1}{2}\left( \left| \hat{S} - \bar{S} \right| + \left| \hat{C} - \bar{C} \right| \right)$$
where \(\bar S\) and \(\bar C\) are the average success rate and complexity in mice, respectively, and \(\hat S\) and \(\hat C\) the model success rate and complexity, all four ∈ [0, 1]. Simulations were long enough for the learning to converge. Thus, instead of multiple runs for each parameter set, which would have been computationally costly, \(\hat S\) and \(\hat C\) were averaged over the last 10 simulated sessions. We considered that 1 simulated session = 200 iterations, which is an upper bound for the number of trials performed by mice in one actual session.
Since mice were systematically rewarded during training, their success rate under this condition was not meaningful. Thus, to assess the ability of the model to reproduce stereotypically circular trajectories in the last training session, we replaced \(\hat S\) and \(\bar S\) in Eq. (5) by \(\hat U\) and \(\bar U\), representing the average U-turn rates for the model and for mice, respectively.
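A minimal sketch of this fitting loop could look as follows (our illustration; the evaluate callback, parameter ranges and number of draws are placeholders, the actual ranges being those of Supplementary Table 1):

```python
import random

def fitness(model_s, model_c, mouse_s, mouse_c):
    """Eqs. (4)-(5): 1 minus the mean absolute gap on success rate (or U-turn
    rate for the training session) and complexity, all values in [0, 1]."""
    return 1 - 0.5 * (abs(model_s - mouse_s) + abs(model_c - mouse_c))

def random_search(evaluate, n_draws=500, seed=0):
    rng = random.Random(seed)
    best_score, best_params = -1.0, None
    for _ in range(n_draws):
        params = {"memory": rng.randint(0, 9),
                  "rho": 10 ** rng.uniform(-2, 1)}   # rho = tau / kappa
        score = evaluate(params)  # run the model, then fitness() vs. mouse data
        if score > best_score:
            best_score, best_params = score, params
    return best_score, best_params
```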
Additional simulations were conducted with two goals: (1) to test whether a single parameter set could fit mouse behavior without the need to change parameter values over sessions, (2) to test the influence of state representation ambiguity on memory use in the computational model. Therefore, each simulation attempted to reproduce mouse behavior from training to the complexity condition. Hence, the learning rate α was optimized in addition to the previously mentioned m and ρ = τ/κ hyperparameters (see ranges listed in Supplementary Table 1). Each parameter set was tested over 20 different runs. Each run is a simulation of 4000 iterations, which amounts to 10 training sessions and 10 complexity sessions since simulated sessions consist of 200 iterations. The fitness score was computed as the average score over the last training session and the 10 complexity sessions using Eqs. (4) and (5). Using a grid search ensured comparable values for different levels of ambiguity ('null', 'low', and 'med'; see previous section). Given the additional computational cost induced by higher ambiguity levels, we gradually decreased the upper bound of the memory size range in order to avoid long and useless computations in uninteresting regions of the search space.
A sample code for the model fitting procedure is publicly available at https://zenodo.org/record/2564854#.Xe07NB-YUpg (see ref. 31).
Markov chain analysis
Markov chain analysis makes it possible to describe mathematically the dynamic behavior of the system, i.e. transitions from one state to another, in probabilistic terms. A process is a first-order Markov chain (or more simply Markovian) if the transition probability from a state A to a state B depends only on the current state A and not on the previous ones. Put differently, the current state contains all the information that could influence the realization of the next state. A classical way to demonstrate that a process is Markovian is to show that the sequence cannot be described by a zeroth-order process, i.e. that P(B|A) ≠ P(B), and that second-order probabilities are not required to describe the state transitions, i.e. that P(B|A) = P(B|AC).
In this paper, we analyzed the 0th, 1st, and 2nd order probabilities in sequences performed by each mouse in the last session of the complexity condition (c10). Using the targets A, B, and C as the Markov chain states would have provided a limited amount of data. Instead, we described states as movements to the left (L) or to the right (R) thereby obtaining larger pools of data (e.g. R = {A → B, B → C, C → A}) and a more compact description (e.g. two 0th order groups instead of three). The probability of a transition (i.e. to the left or to the right, Fig. 3a) is different from the probability of the same transition given the previous one (P(L) versus P(L|L), t(25) = −7.86, p = 3 × 10⁻⁸, P(L) versus P(L|R), t(25) = 7.57, p = 6 × 10⁻⁸, P(R) versus P(R|R), t(25) = −7.57, p = 6 × 10⁻⁸, P(R) versus P(R|L), t(25) = 7.86, p = 3 × 10⁻⁸, paired t-test). However, the probability given two previous transitions is not different from the latter (P(L|L) versus P(L|LL), t(25) = 1.36, p = 0.183, P(L|L) versus P(L|LR), t(25) = −1.66, p = 0.108, P(L|R) versus P(L|RL), t(25) = −0.05, p = 0.960, P(L|R) versus P(L|RR), t(25) = −0.17, p = 0.860, P(R|R) versus P(R|RR), t(25) = 0.17, p = 0.860, P(R|R) versus P(R|RL), t(25) = 0.05, p = 0.960, P(R|L) versus P(R|LR), t(25) = 1.66, p = 0.108, P(R|L) versus P(R|LL), t(25) = −1.36, p = 0.183, paired t-test).
To assess the influence of rewards on mouse decisions when switching to the complexity condition (i.e. a win-stay-lose-switch strategy), we also compared the probability of going forward P(F) or backward P(U) with the conditional probabilities given the presence or absence of reward (e.g. P(F|rw) or P(U|rw)). In this case, F = {R → R, L → L} and U = {R → L, L → R}. These probabilities (Fig. 3c) were not different from the conditional probabilities given that the previous choice was rewarded or not (c01, P(F), P(F|rw) and P(F|unrw), H = 2.93, p = 0.230, P(U), P(U|rw) and P(U|unrw), H = 1.09, p = 0.579, c02, P(F), P(F|rw) and P(F|unrw), H = 1.08, p = 0.581, P(U), P(U|rw) and P(U|unrw), H = 0.82, p = 0.661, c10, P(F), P(F|rw) and P(F|unrw), H = 0.50, p = 0.778, P(U), P(U|rw) and P(U|unrw), H = 0.50, p = 0.778, Kruskal–Wallis test). In the latter analysis, we discarded the data in which ICSS stimulation could not be associated with mouse choices with certainty, due to time lags between trajectory data files and ICSS stimulation data files, or due to the animal moving exceptionally fast between two target locations (<1 s).
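The conditional probabilities above can be computed in a few lines (our sketch; the toy sequence merely stands in for an actual session):

```python
from collections import Counter

def conditional_probs(lr_seq, order):
    """P(next transition | previous `order` transitions) for a string over
    {'L', 'R'}; order=0 gives P(L) and P(R), order=1 gives P(L|L), etc."""
    counts, totals = Counter(), Counter()
    for i in range(order, len(lr_seq)):
        ctx, nxt = lr_seq[i - order:i], lr_seq[i]
        counts[(ctx, nxt)] += 1
        totals[ctx] += 1
    return {(ctx, nxt): counts[(ctx, nxt)] / totals[ctx] for (ctx, nxt) in counts}

session = "LRLLRRLRLRRLLRLRRL"     # toy L/R transition sequence
for order in (0, 1, 2):
    print(order, conditional_probs(session, order))
```

For a Markovian sequence, the order-1 probabilities differ from the order-0 ones, while the order-2 probabilities reproduce the order-1 values.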
Analysis of subsequences distribution
All patterns starting with A were extracted and pooled from the choice sequences of mice in the last sessions of the three conditions (training, complexity, probabilistic). The histograms represent the distribution of these patterns following the decision tree structure. In other words, two neighboring branches share the same prefix.
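A possible implementation of this pooling (our own sketch, with hypothetical variable names) is:

```python
from collections import Counter

def pattern_distribution(choices, length=10, start="A"):
    """Pool every window of `length` consecutive choices beginning at an A
    visit, then normalize the counts into a distribution."""
    counts = Counter()
    for i in range(len(choices) - length + 1):
        if choices[i] == start:
            counts["".join(choices[i:i + length])] += 1
    total = sum(counts.values())
    return {p: n / total for p, n in sorted(counts.items())}
```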
Bayesian model comparison
Bayesian model comparison aims to quantify the support for one model over another based on their respective likelihoods P(D|M), i.e. the probability that data D are produced under the assumption of model M. In our case, it is useful to compare the fitness of the model \(M_{\mathrm{ind}}\) fitted session by session independently with that of the model \(M_{\mathrm{con}}\) fitted to all sessions in a continuous way. Since these models do not produce explicit likelihood measures, we used approximate Bayesian computation: considering the 15 best fits (i.e. the 15 parameter sets that granted the highest fitness score), we estimated the models' likelihood as the fraction of \((\hat S,\hat C)\) pairs that were within the confidence intervals of mouse data. Then, the Bayes factor was calculated as the ratio between the two competing likelihoods:
$$B = \frac{P(D \mid M_{\mathrm{ind}})}{P(D \mid M_{\mathrm{con}})}$$
B > 3 was considered to be substantial evidence in favor of \(M_{\mathrm{ind}}\) over \(M_{\mathrm{con}}\)32.
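Numerically, this approximation reduces to a ratio of counts (the figures below are toy numbers chosen only for illustration):

```python
def approx_bayes_factor(within_ci_ind, within_ci_con, n_best=15):
    """Each likelihood is approximated by the fraction of the 15 best fits
    whose (S, C) pair falls inside the mouse confidence intervals."""
    return (within_ci_ind / n_best) / (within_ci_con / n_best)

print(approx_bayes_factor(9, 3))   # 3.0, the threshold for substantial evidence
```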
Statistics and reproducibility
No statistical methods were used to predetermine sample sizes. Our sample sizes are comparable to many studies using similar techniques and animal models. The total number of observations (N) in each group as well as details about the statistical tests were reported in figure captions. Error bars indicate 95% confidence intervals. Parametric statistical tests were used when data followed a normal distribution (Shapiro test with p > 0.05) and non-parametric tests when they did not. As parametric tests, we used t-test when comparing two groups or ANOVA when more. Homogeneity of variances was checked preliminarily (Bartlett's test with p > 0.05) and the unpaired t-tests were Welch-corrected if needed. As non-parametric tests, we used Mann–Whitney test when comparing two independent groups, Wilcoxon test when comparing two paired groups and Kruskal–Wallis test when comparing more than two groups. All statistical tests were applied using the scipy.stats Python module. They were all two-sided except Mann–Whitney. p > 0.05 was considered to be statistically non-significant.
In all Figures: error bars represent 95% confidence intervals. *p < 0.05, **p < 0.01, ***p < 0.001. n.s., not significant at p > 0.05.
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
The data that support the findings of this study33 are available at https://zenodo.org/record/3576423#.Xfdez-tCe3A and from the corresponding author upon reasonable request.
A sample code for the model fitting procedure31 is publicly available at https://zenodo.org/record/2564854#.Xe07NB-YUpg.
An amendment to this paper has been published and can be accessed via a link at the top of the paper.
Wu, H. G., Miyamoto, Y. R., Gonzalez Castro, L. N., Ölveczky, B. P. & Smith, M. A. Temporal structure of motor variability is dynamically regulated and predicts motor learning ability. Nat. Neurosci. 17, 312–321 (2014).
Aronov, D., Andalman, A. S. & Fee, M. S. A specialized forebrain circuit for vocal babbling in the juvenile songbird. Science 320, 630–634 (2008).
Driver, P. M. & Humphries, D. A. Protean behaviour. (Oxford University Press, USA, 1988).
Rapoport, A. & Budescu, D. V. Generation of random series in two-person strictly competitive games. J. Exp. Psychol. Gen. 121, 352–363 (1992).
Sutton, R. S. & Barto, A. G. Reinforcement Learning. (MIT Press, 1998).
Schultz, W. Getting formal with dopamine and reward. Neuron 36, 241–263 (2002).
Cohen, J. D., McClure, S. M. & Yu, A. J. Should I stay or should I go? How the human brain manages the trade-off between exploitation and exploration. Philos. Trans. R. Soc. Lond., B, Biol. Sci. 362, 933–942 (2007).
Rao, R. P. N. Decision making under uncertainty: a neural model based on partially observable markov decision processes. Front. Comput. Neurosci. 4, 146 (2010).
Wilson, R. C., Geana, A., White, J. M., Ludvig, E. A. & Cohen, J. D. Humans use directed and random exploration to solve the explore-exploit dilemma. J. Exp. Psychol. Gen. 143, 2074–2081 (2014).
Mansouri, F. A., Koechlin, E., Rosa, M. G. P. & Buckley, M. J. Managing competing goals - a key role for the frontopolar cortex. Nat. Rev. Neurosci. 18, 645–657 (2017).
Grunow, A. & Neuringer, A. Learning to vary and varying to learn. Psychonomic Bull. Rev. 9, 250–258 (2002).
Kane, G. A. et al. Increased locus coeruleus tonic activity causes disengagement from a patch-foraging task. Cogn. Affect Behav. Neurosci. 17, 1–11 (2017).
Daw, N. D., O'Doherty, J. P., Dayan, P., Seymour, B. & Dolan, R. J. Cortical substrates for exploratory decisions in humans. Nature 441, 876–879 (2006).
Karlsson, M. P., Tervo, D. G. R. & Karpova, A. Y. Network resets in medial prefrontal cortex mark the onset of behavioral uncertainty. Science 338, 135–139 (2012).
Findling, C., Skvortsova, V., Dromnelle, R., Palminteri, S. & Wyart, V. Computational noise in reward-guided learning drives behavioral variability in volatile environments. Nat. Neurosci. 441, 876–12 (2019).
Naudé, J. et al. Nicotinic receptors in the ventral tegmental area promote uncertainty-seeking. Nat. Neurosci. 19, 471–478 (2016).
Cinotti, F. et al. Dopamine regulates the exploration-exploitation trade-off in rats. 1–36, https://doi.org/10.1101/482802 (2019).
Lee, D., Conroy, M. L., McGreevy, B. P. & Barraclough, D. J. Reinforcement learning and decision making in monkeys during a competitive game. Cogn. brain Res. 22, 45–58 (2004).
Tervo, D. G. R. et al. Behavioral variability through stochastic choice and its gating by anterior cingulate cortex. Cell 159, 21–32 (2014).
Barraclough, D. J., Conroy, M. L. & Lee, D. Prefrontal cortex and decision making in a mixed-strategy game. Nat. Neurosci. 7, 404–410 (2004).
Lempel, A. & Ziv, J. On the complexity of finite sequences. IEEE Trans. Inf. Theory 22, 75–81 (1976).
Rescorla, R. A. & Wagner, A. R. A Theory of Pavlovian Conditioning: Variations in the Effectiveness of Reinforcement and Nonreinforcement. In (eds AH. Black & W.F. Prokasy), Classical conditioning II: current research and theory. 64–99 (Appleton-Century-Crofts, New York, 1972).
Glimcher, P. W. Indeterminacy in brain and behavior. Annu Rev. Psychol. 56, 25–56 (2005).
Towse, J. N. & Cheshire, A. Random number generation and working memory. Eur. J. Cogn. Psychol. 19, 374–394 (2007).
Oomens, W., Maes, J. H. R., Hasselman, F. & Egger, J. I. M. A time series approach to random number generation: using recurrence quantification analysis to capture executive behavior. Front. Hum. Neurosci. 9, 319 (2015).
Wagenaar, W. Generation of random sequences by human subjects: a critical survey of literature. Psychological Bull. 77, 65–72 (1972).
Maes, J. H. R., Eling, P. A. T. M., Reelick, M. F. & Kessels, R. P. C. Assessing executive functioning: on the validity, reliability, and sensitivity of a click/point random number generation task in healthy adults and patients with cognitive decline. J. Clin. Exp. Neuropsychol. 33, 366–378 (2011).
Marwan, N., Romano, M. C., Thiel, M. & Kurths, J. Recurrence plots for the analysis of complex systems. Phys. Rep. 438, 237–329 (2007).
Faure, P. & Lesne, A. Recurrence plots for symbolic sequences. Int. J. Bifur. Chaos 20, 1731–1749 (2010).
Bergstra, J. & Bengio, Y. Random search for hyper-parameter optimization. J. Mach. Learn. Res. 13, 281–305 (2012).
Belkaid, M. Code for basic q-learning model fitting, https://doi.org/10.5281/zenodo.2564854 (2019).
Kass, R. E. & Raftery, A. E. Bayes factors. J. Am. Stat. Assoc. 90, 773–795 (1995).
Belkaid, M. et al. Mice adaptively generate choice variability in a deterministic task - behavioral data. https://doi.org/10.5281/zenodo.3576423 (2019).
This work was supported by the Centre National de la Recherche Scientifique CNRS UMR 8246 et UMR 7222, the Labex SMART (ANR-11-LABX-65) supported by French state funds managed by the ANR within the Investissements d'Avenir programme under reference ANR-11-IDEX-0004-02, the Foundation for Medical Research (FRM, Equipe FRM EQU201903007961 to P.F), ANR (ANR-16 Nicostress to PF), the French National Cancer Institute Grant TABAC-16-022 (to P.F.). P.F. team is part of the École des Neurosciences de Paris Ile-de-France RTRA network and member of LabEx Bio-Psy.
These authors contributed equally: Olivier Sigaud, Philippe Faure.
Sorbonne Université, CNRS, Institut des Systèmes Intelligents et de Robotique (ISIR), 75005, Paris, France
Marwen Belkaid & Olivier Sigaud
Sorbonne Université, INSERM, CNRS, Neuroscience Paris Seine - Institut de Biologie Paris Seine (NPS - IBPS), 75005, Paris, France
Elise Bousseyrol, Romain Durand-de Cuttoli, Malou Dongelmans, Etienne K. Duranté, Tarek Ahmed Yahia, Steve Didienne, Bernadette Hanesse, Maxime Come, Alexandre Mourot, Jérémie Naudé & Philippe Faure
Marwen Belkaid
Elise Bousseyrol
Romain Durand-de Cuttoli
Malou Dongelmans
Etienne K. Duranté
Tarek Ahmed Yahia
Steve Didienne
Bernadette Hanesse
Maxime Come
Alexandre Mourot
Jérémie Naudé
Olivier Sigaud
Philippe Faure
P.F. designed the behavioral experiment. P.F., E.B., R.D.C., M.D., E.D., T.A.Y., B.H. and M.C. performed the behavioral experiments, P.F. and M.B. analyzed the behavioral data. M.B. developed the computational model, S.D. developed some acquisition tools. J.N. and O.S. contributed to modeling studies and data analysis. M.B., P.F., J.N. and O.S. wrote the paper with inputs from A.M.
Correspondence to Philippe Faure.
If the number \[\frac{1}{2} \left(\frac{5}{\sqrt[3]{3} + \sqrt[3]{2}} + \frac1{\sqrt[3]{3} -\sqrt[3]{2}}\right)\]can be expressed in the form $\sqrt[3]{a} + \sqrt[3]{b},$ where $a$ and $b$ are integers, compute $a+b.$
We rationalize each of the fractions in parentheses, by using the sum and difference of cubes identities. First, we have \[\begin{aligned} \frac{5}{\sqrt[3]{3} + \sqrt[3]{2}} &= \frac{5\left(\sqrt[3]{9} - \sqrt[3]{6} + \sqrt[3]{4}\right)}{\left(\sqrt[3]{3} + \sqrt[3]{2}\right)\left(\sqrt[3]{9} - \sqrt[3]{6} + \sqrt[3]{4}\right)} \\ &= \frac{5\left(\sqrt[3]{9}-\sqrt[3]{6}+\sqrt[3]{4}\right)}{3+2} \\ &= \sqrt[3]{9} - \sqrt[3]{6} + \sqrt[3]{4}. \end{aligned}\]Similarly, \[\begin{aligned} \frac{1}{\sqrt[3]{3} - \sqrt[3]{2}} &= \frac{\sqrt[3]{9} + \sqrt[3]{6} + \sqrt[3]{4}}{\left(\sqrt[3]{3} - \sqrt[3]{2}\right)\left(\sqrt[3]{9} + \sqrt[3]{6} + \sqrt[3]{4}\right)} \\ &= \frac{\sqrt[3]{9}+\sqrt[3]{6}+\sqrt[3]{4}}{3 - 2} \\ &= \sqrt[3]{9} + \sqrt[3]{6} + \sqrt[3]{4}. \end{aligned}\]Therefore,\[\begin{aligned} \frac{1}{2} \left(\frac{5}{\sqrt[3]{3} + \sqrt[3]{2}} + \frac1{\sqrt[3]{3} -\sqrt[3]{2}}\right) &= \frac{1}{2} \left(\left(\sqrt[3]{9}-\sqrt[3]{6}+\sqrt[3]{4}\right) + \left(\sqrt[3]{9}+\sqrt[3]{6}+\sqrt[3]{4}\right) \right) \\ &= \sqrt[3]{9} + \sqrt[3]{4}, \end{aligned}\]so $a+b=9+4=\boxed{13}.$
Drug dosage adjustment in hospitalized patients with renal impairment at Tikur Anbessa specialized hospital, Addis Ababa, Ethiopia
Henok Getachew1,
Yewondwossen Tadesse2 and
Workineh Shibeshi3Email author
© Getachew et al. 2015
Received: 16 January 2015
Accepted: 30 September 2015
Published: 7 October 2015
Dose adjustment for certain drugs is required in patients with reduced renal function to avoid toxicity as many drugs are eliminated by the kidneys. The aim of this study was to assess whether appropriate dosage adjustments were made in hospitalized patients with renal impairment.
A prospective cross-sectional study was carried out in the internal medicine wards of Tikur Anbessa Specialized Hospital. All patients with creatinine clearance ≤59 ml/min admitted to hospital between April and July, 2013 were included in the analysis. Data regarding serum creatinine level, age, sex and prescribed drugs and their dosage were collected from the patients' medical records. Serum creatinine level ≥1.2 mg/dL was used as a cutoff point in pre-selection of patients. The estimated creatinine clearance was calculated using the Cockcroft-Gault (CG) equation. The guideline for drug prescribing in renal failure provided by the American College of Physicians was used as the standard for dose adjustment.
Nine percent (73/810) of medical admissions were found to have renal impairment (CrCl ≤ 59 ml/min). There were 372 prescription entries for 73 patients with renal impairment. Dose adjustment was required in 31 % (115/372) of prescription entries and fifty-eight (51 %) prescription entries requiring dose adjustment were found to be inappropriate. Of 73 patients, 54 patients received ≥1 drug that required dose adjustment (median 2; range 1–6). Fifteen (28 %) patients had all of their drugs appropriately adjusted, while twenty-two (41 %) patients had some drugs appropriately adjusted, and seventeen (31 %) of patients had no drugs appropriately adjusted. No patients were documented to have received dialysis.
The findings indicate that dosing errors were common among hospitalized patients with renal impairment. Improving the quality of drug prescription in patients with renal impairment could be of importance for improving the quality of care.
Dose adjustment
The metabolism and excretion of many drugs and their pharmacologically active metabolites depend on normal renal function. In patients with kidney dysfunction, the renal excretion of the parent drug and its metabolites will be impaired, leading to their excessive accumulation in the body [1]. In addition, the plasma protein binding of drugs may be significantly reduced, which in turn could influence the pharmacokinetic processes of distribution and elimination. The activity of several drug-metabolizing enzymes and drug transporters has been shown to be impaired in chronic renal failure [2].
Medication dosing errors are the most important drug-related problems in patients with renal impairment [3, 4]. Inappropriate dosing in patients with kidney disease can cause toxicity or ineffective therapy [5]. In particular, older patients are at a higher risk of developing advanced disease and related adverse events caused by age-related decline in renal function and the use of multiple medications to treat co-morbid conditions [6]. Drug accumulation and toxicity can develop rapidly if dosages are not adjusted in patients with impaired renal function. Drug elimination by the kidneys correlates with the glomerular filtration rate (GFR). It is thus logical to use eGFR or eCrCl for adjusting dosages in patients with renal failure [1].
Drug dosing in renal insufficiency needs to be individualized whenever possible to optimize therapeutic outcomes and to minimize toxicity. The two major approaches are either to lengthen the interval between doses or to reduce the dose. Occasionally both interval and dose adjustments are needed [7]. Drug dosage adjustment for patients with acute or chronic kidney disease is an accepted standard of practice, though there are no clear parameters to adjust drug dosing in acute kidney injury.
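As a generic, illustrative sketch of these two approaches (our own example with hypothetical numbers, not a prescribing tool): for a drug eliminated essentially unchanged by the kidneys, the maintenance dose rate is commonly scaled by the ratio of the patient's creatinine clearance to a normal reference value, either by reducing each dose or by lengthening the dosing interval.

```python
def adjust_regimen(dose_mg, interval_h, patient_crcl, reference_crcl=120):
    """Proportional adjustment for a hypothetical, fully renally cleared drug.
    Returns (reduced dose, same interval) and (same dose, longer interval).
    Illustration only; real adjustments must follow drug-specific guidelines."""
    q = patient_crcl / reference_crcl          # fraction of normal clearance
    return (dose_mg * q, interval_h), (dose_mg, interval_h / q)

# Example: 500 mg every 8 h with CrCl = 30 ml/min gives either 125 mg every
# 8 h or 500 mg every 32 h.
print(adjust_regimen(500, 8, 30))
```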
A further challenge is how to accurately estimate a patient's kidney function in both acute and chronic kidney disease [8]. Patients on renal replacement therapy are a distinct case altogether, and SCr-based equations are not valid in patients with acute kidney injury or end-stage kidney disease.
Many renal function estimation approaches have been proposed, among which the Cockcroft-Gault (CG) equation provides an estimate of creatinine clearance (CrCl) [9]. An apparently minor increase in serum creatinine (SCr) can reflect a marked fall in GFR. For this reason, the estimation of GFR through the calculation of CrCl or eGFR using a validated formula is mandatory in every patient [10]. When in doubt, appropriate dosing guidance should be sought in recently published monographs or texts [11]. There are no published studies evaluating drug dosage adjustment in renal patients in Ethiopia. Therefore, this study was initiated to assess drug dosage adjustment among hospitalized patients with renal impairment at Tikur Anbessa Specialized Hospital.
Study area
The study was conducted in the internal medicine wards of Tikur Anbessa specialized Hospital (TASH), the largest tertiary care teaching hospital of Addis Ababa University in Ethiopia. The Hospital has about 600 beds and provides diagnosis and treatment for 370,000–400,000 clients/year.
The study was a prospective cross-sectional study involving chart review and patient interview.
Inclusion and exclusion criteria
The source population was all patients visiting the internal medicine department of TASH, and the study population was all inpatients in the internal medicine wards with renal impairment. Patients aged eighteen years or older, receiving at least one pharmacological agent, hospitalized for at least one day, and having at least one estimated creatinine clearance value of 59 ml/min or less were included in the study. Patients not receiving any pharmacological agent, pregnant women and patients with CrCl > 60 ml/min were excluded from the study.
All patients admitted in the 4 months from April 2013 to July 2013 were considered for sampling purposes. Of 810 admissions, only 73 patients were included in the final analysis based on the inclusion criteria.
Data collection procedures
Data were collected by four ward nurses who were trained for 2 days on the extraction of data from patient files and techniques of data collection, and were supervised by the principal investigator, who checked completeness every day. Patient chart review was used to collect individual patient data, including age, sex, serum creatinine (later used to estimate CrCl), blood urea nitrogen, co-morbid conditions, reason for admission, medications prescribed during hospitalization and medications needing dose adjustment, using a data abstraction format. Actual weight was recorded; for critically ill or immobile patients, either the patient (if conscious) or the caregiver was asked to provide the patient's most recent weight. We did not use ideal body weight unless the patient's BMI was greater than 30 kg/m2.
The glomerular filtration rate was estimated based on creatinine clearance from serum creatinine (SCr) using the Cockcroft Gault equation as shown below for men and women respectively:
$$ \mathrm{Men:}\quad \mathrm{CrCl}\ (\mathrm{ml/min}) = \frac{(140 - \mathrm{age}) \times \mathrm{weight}\ (\mathrm{kg})}{\mathrm{SCr}\ (\mathrm{mg/dl}) \times 72} $$
$$ \mathrm{Women:}\quad \mathrm{CrCl}\ (\mathrm{ml/min}) = \frac{(140 - \mathrm{age}) \times \mathrm{weight}\ (\mathrm{kg}) \times 0.85}{\mathrm{SCr}\ (\mathrm{mg/dl}) \times 72} $$
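For illustration (hypothetical values, not taken from the study data), a 60-year-old man weighing 60 kg with an SCr of 2.0 mg/dl would have

$$ \mathrm{CrCl} = \frac{(140 - 60) \times 60}{2.0 \times 72} \approx 33\ \mathrm{ml/min} $$

which is well below the 59 ml/min inclusion threshold used in this study.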
Serum creatinine concentrations were measured using the two-point, fixed-time kinetic Jaffé reaction on a Humalyzer 3000 automated analyzer (HUMAN Gesellschaft für Biochemica und Diagnostica mbH, Wiesbaden, Germany). The serum creatinine results were not calibrated using isotope–dilution mass spectrometry (IDMS) method.
An SCr level ≥1.2 mg/dL was used as the cut-off point in pre-selection, rather than CrCl, for several reasons. First, SCr values were available in the patients' medical files, whereas neither body weight nor CrCl was recorded there; SCr was thus the only laboratory value available to the physician, so using it reflected the current situation in the hospital. Second, an SCr value of 1.2 mg/dL is considered the upper normal value in clinical practice [12]. Appropriateness was determined by comparing practice with the guideline "Drug Prescribing in Renal Failure: Dosing Guidelines for Adults and Children" (Aronoff et al., 2007) [13].
Operational definitions
Appropriate: when the drug regimen is adjusted based on the patient's CrCl as recommended by the guideline "Drug Prescribing in Renal Failure: Dosing Guidelines for Adults and Children" [13].
Inappropriate: when the dosage prescribed is not in conformity to the patient's CrCl as recommended by "Drug Prescribing in Renal Failure: Dosing Guidelines for Adults and Children" [13].
Hospitalized patient: a patient admitted to hospital for at least 24 h.
Prescription entries: lines of prescriptions; a given medication may be prescribed on multiple occasions for different patients.
Renal impairment: a medical condition in which the kidneys fail to adequately filter waste products from the blood.
Renal related: a condition in which the primary diagnosis is one or another type of kidney injury.
Stage of renal impairment: the severity of renal impairment based on the CrCl value, regardless of the definite cause of CKD.
Ethical clearance
A letter of ethical clearance was obtained from the School of Pharmacy Research Ethics Review Board and the Department of Internal Medicine Research and Ethics Committee, School of Medicine, College of Health Sciences, Addis Ababa University. Additionally, informed verbal consent for participation in the study was obtained from all participants.
Data were edited, cleaned and analyzed using the Statistical Package for the Social Sciences (SPSS), version 17. The data were summarized and described using tables and graphs. Univariate and multivariate analyses were performed to compute crude odds ratios (COR) and adjusted odds ratios (AOR). Statistical significance was set at a p value < 0.05.
As shown in Table 1, a total of 810 patients with SCr ≥1.2 mg/dL were identified during the 4-month study period between April and July 2013. Based on the inclusion criteria, a total of 73 patients (9 % of medical admissions) were included in the final analysis. These 73 patients were designated the renal impairment group and consisted of 40 (55 %) males and 33 (45 %) females. The median age of patients with renal impairment was 42 years (range 18–87) and the median weight was 60 kg; 18 (25 %) patients were admitted for renal-related disease, and no attempt was made to distinguish between CKD and AKI. Comorbidity was present in 62 (85 %) of the 73 renally impaired patients.
Demographic and clinical data of patients with renal impairment in Tikur Anbessa specialized hospital, Addis Ababa, Ethiopia, August 2013
Demographic and clinical data: Number (%)
Total number of hospitalized patients during the study period: 810
Number of patients with renal impairment: 73 (9 %)
Sex, male: 40 (55 %); female: 33 (45 %)
Age, median (range): 42 (18–87) years
SCr, mean: 2.24 ± 1.5 mg/dL
Estimated CrCl, mean: 39.6 ± 1.4 ml/min
Drugs per patient, mean ± SD: 5.1 ± 2.3
Drugs requiring dose adjustment per patient, mean ± SD: (value not recoverable from the extracted text)
Patients with stage of renal impairment: 53/73 (72.5 %)
Reason for admission: renal related 18 (25 %); non-renal 55 (75 %)
The median number of drugs prescribed per patient was 5 (range 1–12), and 40/73 (54.8 %) of patients had ≥5 drugs prescribed. The mean estimated CrCl was 39.6 ml/min (IQR 29.8–49.2), with a mean SCr value of 2.24 mg/dl (IQR 1.3–2.3). No patients were documented to have received dialysis when the prescriptions were reviewed for dose adjustment.
Dose adjustment was required in 115 (31 %) of 372 prescription entries. Of these 115 entries, 58 (51 %) were found to be inappropriate (Fig. 1). Analysis of the proportion of appropriately adjusted prescription entries per patient indicated that, of the 73 patients, 54 (74 %) received ≥1 drug that required dose adjustment (median 2; range 1–6). Fifteen (28 %) patients had all of their medications appropriately adjusted, 22 (41 %) had some drugs appropriately adjusted, and 17/54 (31 %) had all drugs inappropriately adjusted (Fig. 2). Age-related analysis indicated that a greater proportion of inappropriately adjusted prescription entries was observed in the elderly (≥60 years) (Fig. 3).
Appropriateness of prescription entries in all patients (n = 73) of the study, Tikur Anbessa Specialized Hospital, Addis Ababa, Ethiopia, August 2013
Proportion of appropriately adjusted prescription entries per patient at Tikur Anbessa Specialized Hospital, Addis Ababa, Ethiopia, August 2013
Dose adjustment of prescription entries across various age groups at Tikur Anbessa Specialized Hospital, Addis Ababa, Ethiopia, August 2013
When type of medication and dose adjustment were evaluated, cimetidine was the most frequently prescribed drug requiring dose adjustment and was appropriately adjusted in 15/18 (83.3 %) of cases, followed by spironolactone, vancomycin and ceftazidime, which were appropriately adjusted in 2/16 (12.5 %), 10/14 (71.4 %) and 7/11 (63.6 %) of cases, respectively. Enalapril was the only drug correctly adjusted in all cases (6/6). Allopurinol and co-trimoxazole remained unadjusted in all cases. Medications prescribed less frequently (≤2 times) were categorized as "others" (Fig. 4).
Dose adjustment by types of medication at Tikur Anbessa Specialized Hospital, Addis Ababa, Ethiopia, August 2013
Based on the stage of renal impairment, a total of 83/115 (72 %) prescription entries requiring dose adjustment were prescribed to patients with stage 3 impairment; of these 83 entries, 51 (61.4 %) were appropriately adjusted. For patients with stage 4, 4 of 22 prescription entries (18.2 %) were appropriately adjusted. Patients in stage 5 had a total of 10 prescription entries, of which 2 (20 %) were appropriately adjusted (Fig. 5). Several medications were thus inappropriately dosed in stage 5 renal impairment: two prescriptions of ceftazidime, one of cimetidine, one of vancomycin, one of fluconazole, and three others remained inappropriately adjusted in these patients.
Dose adjustment of prescription entries by stage of renal impairment at Tikur Anbessa Specialized Hospital, Addis Ababa, Ethiopia, August 2013
On univariate and multivariate analysis, the COR and AOR revealed that age, sex, weight, SCr, CrCl, BUN, reason for admission, comorbidity, stage of renal impairment, number of medications prescribed per patient and number of medications requiring dose adjustment per patient showed no significant difference in the proportion of appropriately adjusted prescriptions per patient (Table 2).
Relationship between independent variables and proportion of appropriately adjusted prescription entries per patient in Tikur Anbessa specialized hospital, Addis Ababa, Ethiopia, August 2013
Outcome: all medications per patient inappropriately adjusted; OR (95 % CI)
Weight (mean): 0.51 (0.31, 0.84)a
Reason of admission, non-renal: 0.33 (0.09, 1.18)
Odds ratios whose variable labels were lost in extraction: 0.74 (0.23, 2.36); 0.59 (0.03, 14.21); 2.92 (0.06, 127.78); 0.42 (0.023, 7.677); 1.25 (0.196, 7.96); 3.88 (0.471, 31.91); 0.73 (0.00, 715.2)
Rows without recoverable values: BUN (mean); number of medications prescribed per patient (mean); number of medications requiring dose adjustment per patient (mean)
a Statistically significant; COR crude odds ratio; AOR adjusted odds ratio
Relationship between independent variables and appropriateness of dose adjustment of prescription entries in Tikur Anbessa specialized hospital, Addis Ababa, Ethiopia, August 2013
Outcome: appropriately adjusted, by drug prescribed; COR and AOR (95 % CI)
Cimetidine: COR 0.060 (0.013, 0.280); AOR 0.013 (0.001, 0.15)a
Vancomycin: AOR 0.045 (0.004, 0.525)a
Ceftazidime: COR 0.043 (0.004, 0.421)
Stage 4 renal impairment: AOR 587.70 (4.040, 8.549)a
Cotrimoxazole and remaining rows (labels lost in extraction): 2.100 (0.369, 11.96); 4.639 (0.353, 60.919); 0.000 (0.000, −); 4.846 (0.000, −); 0.90 (0.078, 10.327); 4.846 (0.000, −); 64.159 (0.159, 2.59); 1.125 (0.170, 7.452); 2.115 (0.877, 5.10); 11.77 (0.635, 218.265)
a Statistically significant; COR crude odds ratio; AOR adjusted odds ratio
However, dose adjustment of prescription entries was associated with the type of medication prescribed, the stage of renal impairment, SCr level and BUN (Table 3). There was a negative association between the type of medication prescribed and the likelihood of appropriate adjustment. When cimetidine (AOR = 0.013 (0.001, 0.150)), vancomycin (AOR = 0.045 (0.004, 0.525)), ceftazidime (AOR = 0.067 (0.005, 0.894)) and digoxin (AOR = 0.009 (0.000, 0.297)) were prescribed, the dose was appropriately adjusted less frequently than for other medications. Prescription entries were appropriately adjusted more frequently in stage 4 than in any other stage of renal impairment (AOR = 587.70 (4.040, 8.549)). Specifically, each 1-unit increase in SCr level was associated with an increase in the likelihood of appropriate dose adjustment by a factor of 129.95.
The present study evaluated drug dosage adjustment in hospitalized patients with renal impairment. The prevalence of renal impairment was 9 % of internal medicine ward admissions, which is low compared with the 32 % reported by Decloedt et al. [14].
This may be attributed to our use of a serum creatinine cut-off point, rather than eGFR, to define renal impairment; as a result, we are likely to have missed some patients with renal impairment.
This study also assessed the proportion of appropriately adjusted drugs per patient: 74 % of patients received ≥1 drug that required dose adjustment, meaning that 26 % had no medications requiring dose adjustment. These patients might have been prescribed medications that did not require dose adjustment, or nephrotoxic medications might have been avoided or switched to safer drugs. Among those who received medications requiring dose adjustment, 15 (28 %) patients had all of their drugs appropriately adjusted, 22 (41 %) had some drugs appropriately adjusted and 17 (31 %) had no drugs appropriately adjusted. In the study by Decloedt et al. [14], 71 % received ≥1 drug that required dose adjustment; all drugs were correctly adjusted in only 12 % of patients, some drugs in 29 %, and no drugs in 59 %. Our findings suggest that much remains to be done regarding dose adjustment, although they compare favorably with that similar study from South Africa [14].
The total number of prescription entries requiring dose adjustment and the percentage of appropriate dosing vary across studies [14–17]. The proportion of prescription entries requiring dose adjustment in our study (31 %) differs from those of Decloedt et al. (19 %), Sweileh et al. (19 %) and Salomon et al. (71 %) [14–16]; in another report, doses were found to be inappropriately high in 42.2 % of cases [17]. The rate of appropriate dosing in this study (49 %) was much higher than in the Decloedt et al. (32 %), Sweileh et al. (26.42 %) and Salomon et al. (34 %) studies [14–16].
In this study, the percentage of appropriately adjusted prescription entries was higher than the findings in South Africa [14] and Palestine [15]. This is encouraging but perhaps not surprising, as TASH is the largest teaching hospital in the country, with many specialists and residents in training who are expected to have better awareness of dose adjustment than physicians in general hospitals. However, the figure was lower than the study findings in France [16] and Australia [17]. Most developed countries have introduced automated reporting of renal function with eGFR, which alerts physicians to the need for dose adjustment.
Another finding of the present study is that SCr had a positive association with appropriate prescribing. It appears that physicians become more careful and make appropriate dose adjustments for patients with elevated SCr. Assessment of the relationship between age and dose adjustment indicated that a higher proportion of inappropriate prescription entries occurred in the age group ≥60 years. This is in keeping with the well-known fact that SCr underestimates the presence and degree of renal impairment in the elderly, which often results in improper dose adjustment [6].
Our study has some limitations. The sample size may be considered rather small. Because it was a cross-sectional study, the design did not allow us to distinguish between acute kidney injury and chronic kidney disease. The study therefore cannot answer whether there was a need for the frequent dose adjustment that may be necessary in patients with rapidly changing kidney function. It is quite conceivable that, in addition to renal function, prescribers may have made dose adjustments on the basis of other parameters such as blood pressure, heart rate and electrolyte levels. Physicians may also have used guidelines other than the one we used to make dose adjustments. Using the same cut-off point (serum creatinine > 1.2 mg/dl) for all patients would result in an underestimation of the prevalence of impaired kidney function in women and the elderly. This in turn may have led to an underestimation of the proportion of patients who needed dosage adjustment.
We used the CG formula to estimate creatinine clearance rather than the MDRD equation to estimate GFR, for several reasons. First, although the MDRD equation is widely used to estimate GFR in many parts of the world, it has to be validated as a measure of GFR in a particular population before it can be adopted for use, and to our knowledge no studies have validated the MDRD equation in an Ethiopian population. Second, the use of the MDRD equation for drug dosing purposes often yields higher doses than the CG equation, which many believe is a safety concern [18]; CG typically yields a more conservative estimate and indicates the need for dose adjustment more often [18]. In addition, little information has been published on the performance of the MDRD equation in the elderly (age > 65 years), the obese, individuals with liver disease, and races other than Caucasian or African-American, and the findings have been inconsistent [19].
Estimation of GFR from a combined serum creatinine– and cystatin C–based equation was recently published [20]. Serum cystatin C–based GFR estimates were closely comparable to abbreviated MDRD estimates, and an equation combining cystatin C with serum creatinine, age, sex and race yielded the best possible estimates of GFR. However, the test is neither widely used nor easily available at this time, and experience with it is limited [20].
This study indicates that appropriate dose adjustment was not performed for a significant percentage of patients with renal impairment. The finding points to the need to provide doctors with information and guidelines for dose adjustment in renal impairment, to prevent the poor clinical outcomes and toxicity that result from dosing errors in these patients.
GFR: Glomerular filtration rate
CrCl: Creatinine clearance
SCr: Serum creatinine
CG: Cockcroft-Gault equation
IQR: Interquartile range
COR: Crude odds ratio
AOR: Adjusted odds ratio
AKI: Acute kidney injury
BUN: Blood urea nitrogen
MDRD: Modification of Diet in Renal Disease study
The authors acknowledge the patients for participation in the study. We also acknowledge Addis Ababa University for partial support of the research.
HG designed and conducted study, analyzed data, interpreted results and drafted manuscript. YT and WS were involved in design of study, supervision, drafting the manuscript and its critical review. All authors have given final approval of the version to be published.
Department of Clinical Pharmacy, School of Pharmacy, College of Medical and Health Science, University of Gondar, Gondar, Ethiopia
Department of Internal Medicine, School of Medicine, College of Health Sciences, Addis Ababa University, Addis Ababa, Ethiopia
Department of Pharmacology and Clinical Pharmacy, School of Pharmacy, College of Health Sciences, Addis Ababa University, P.O. Box 9086, Addis Ababa, Ethiopia
Swan SK, Bennett WM. Drug dosing guidelines in patients with renal failure. West J Med. 1992;156:633–8.
Verbeeck RK, Musuamba FT. Pharmacokinetics and dosage adjustment in patients with renal dysfunction. Eur J Clin Pharmacol. 2009;65:757–73.
Fink JC, Chertow GM. Medication errors in chronic kidney disease: one piece in the patient safety puzzle. Kidney Int. 2009;76:1123–5.
Yap C, Dunham D, Thompson J, Baker D. Medication dosing errors for patients with renal insufficiency in ambulatory care. Jt Comm J Qual Patient Saf. 2005;31(9):514–21.
Munar MY, Singh H. Drug dosing adjustments in patients with chronic kidney disease. Am Fam Physician. 2007;75:1487–96.
Modig S, Lannering C, Östgren CJ, Mölstad S, Midlöv P. The assessment of renal function in relation to the use of drugs in elderly in nursing homes; a cohort study. BMC Geriatr. 2011;11:1–6.
Robert LT. Drug dosing in renal insufficiency. J Clin Pharmacol. 1994;34:99–110.
Matzke GR, Aronoff GR, Atkinson AJ Jr, Bennett WM, Decker BS, Eckardt KU. Drug dosing consideration in patients with acute and chronic kidney disease: Kidney Disease: Improving Global Outcomes (KDIGO). Kidney Int. 2011;80(11):1122–37.
Thomas CD, Gary RM, John EM, Gilbert JB. Evaluation of renal drug dosing: prescribing information and clinical pharmacist approaches. Pharmacotherapy. 2010;30(8):776–86.
Salomon L, Levu S, Deray G, Launay-Vacher V, Brücker G, Ravaud P. Assessing residents' prescribing behavior in renal impairment. Int J Qual Health Care. 2003;3:235–40.
Joanne K, Piera C. Safe drug prescribing for patients with renal insufficiency. CMAJ. 2002;166(4):473–7.
National Kidney Foundation. KDOQI clinical practice guidelines and clinical practice recommendations for diabetes and chronic kidney disease. Am J Kidney Dis. 2007;49(2 Suppl 2):S12–154.
Aronoff GR, Berns JS, Brier ME, Golper TA, Morrison G, Singer I. Drug prescribing in renal failure: dosing guidelines for adults and children. 5th ed. Philadelphia: American College of Physicians; 2007.
Decloedt E, Leisegang R, Blockman M, Cohen K. Dosage adjustment in medical patients with renal impairment at Groote Schuur Hospital. S Afr Med J. 2010;100:304–6.
Sweileh WM, Janem SA, Sawalha AF, Abu-Taha AS, Zyoud SH, Sabri IA, et al. Medication dosing errors in hospitalized patients with renal impairment: a study in Palestine. Pharmacoepidemiol Drug Saf. 2007;16:908–12.
Salomon L, Deray G, Jaudon MC, Chebassier C, Bossi P, Launay-Vacher V, et al. Medication misuse in hospitalized patients with renal impairment. Int J Qual Health Care. 2003;15:331–5.
Pillans PI, Landsberg PG, Fleming AM, Fanning M, Sturtevant JM. Evaluation of dosage adjustment in patients with renal impairment. Intern Med J. 2003;33:10–3.
Moranville MP, Jennings HR. Implications of using modification of diet in renal disease versus Cockcroft-Gault equations for renal dosing adjustments. Am J Health Syst Pharm. 2009;66:154–61.
Stevens LA, Coresh J, Feldman HI, Greene T, Lash JP, Nelson RG, et al. Evaluation of the modification of diet in renal disease study equation in a large diverse population. J Am Soc Nephrol. 2007;18:2749–57.
Devraj M. Limitations of various formulae and other ways of assessing GFR in the elderly: is there a role for cystatin C? Am Soc Nephrol. 2009. p. 1–6.
The Peculiarities of Strain Relaxation in GaN/AlN Superlattices Grown on Vicinal GaN (0001) Substrate: Comparative XRD and AFM Study
Andrian V. Kuchuk1,2,
Serhii Kryvyi1,
Petro M. Lytvyn1,
Shibin Li2, 3,
Vasyl P. Kladko1,
Morgan E. Ware2,
Yuriy I. Mazur2,
Nadiia V. Safryuk1,
Hryhorii V. Stanchu1,
Alexander E. Belyaev1 and
Gregory J. Salamo2
© Kuchuk et al. 2016
Superlattices (SLs) consisting of symmetric layers of GaN and AlN have been investigated. Detailed X-ray diffraction and reflectivity measurements demonstrate that the relaxation of built-up strain in the films generally increases with an increasing number of repetitions; however, an apparent relaxation for subcritical thickness SLs is explained through the accumulation of Nagai tilt at each interface of the SL. Additional atomic force microscopy measurements reveal surface pit densities which appear to correlate with the amount of residual strain in the films along with the appearance of cracks for SLs which have exceeded the critical thickness for plastic relaxation. These results indicate a total SL thickness beyond which growth may be limited for the formation of high-quality coherent crystal structures; however, they may indicate a growth window for the reduction of threading dislocations by controlled relaxation of the epilayers.
GaN/AlN
Superlattices
Strain relaxation
Crystallographic tilt
GaN/AlN superlattices (SLs) have been considered for high-performance photonic devices operating throughout the ultraviolet, visible, and infrared optical regions [1–3]. Among other factors such as growth conditions and design parameters, the structural and consequently optical properties of these SLs are strongly influenced by both the substrate type and the strain in the SLs. In general, this strain is a result of the large lattice mismatch between the GaN quantum well (QW) and the AlN barrier (2.5 % in-plane); however, an additional strain component results from the difference between the lattice spacing of the substrate and the averaged lattice spacing of the entire SL.
There has been significant research devoted to studying the influence of the substrate and buffer on the deformation and relaxation processes in GaN/Al(Ga)N SLs in recent years [4–14]. In particular, it has been demonstrated that both the Al mole fraction and the buffer layer type (tensile-strained GaN or compressive-strained AlGaN) have strong influences on the misfit relaxation process in 40-period, 7/4-nm GaN/AlxGa1−xN SLs [6, 7]. However, a minimization of strain relaxation by growth of both GaN and AlN under Ga-excess conditions was shown for GaN/AlN (1.5/3 nm) SLs grown on both AlN- and GaN-on-sapphire templates [5]. A bimodal strain relaxation of GaN/AlN short-period SL structures independent of the type of template (GaN-thick- or AlN-thin-on-sapphire) was observed in [8, 9]. This is contrasted by the data presented in [10], which unambiguously demonstrates that the structural quality of a 10-period GaN/AlGaN SL is limited by the structural properties of the GaN substrate. This can be improved upon, as seen in [12, 13], by growing on non-polar free-standing GaN substrates. In particular, for growth of non-polar m-plane GaN/AlGaN multi-QWs, extended defects introduced by the epitaxial process, such as stacking faults or dislocations, were not observed [13].
A commonly used technique to improve the crystal quality of III-nitride heteroepitaxial layers is to grow on miscut substrates [15–20]; however, this also results in a crystallographic tilt of the epilayers. Such tilt has been observed for GaN films grown on both vicinal Al2O3 and 6H-SiC substrates [15, 16]. Here, the relationship between the tilt of the GaN lattice and the offcut angles and surface steps of the substrate was directly established. The influence of c-plane vicinal GaN substrates on the crystallographic orientation and deformation of InGaN layers was shown in [17]. The crystallographic tilting of GaN/AlN layers grown on Si (111) substrates with different miscut angles towards the [110] direction was reported in [18, 19]. A common result of these studies is a tilting of the lattice planes of the epitaxial layer with respect to the lattice planes of the substrate beyond what would be expected from simple geometric arguments based on the miscut and step density; this is the so-called Nagai tilt [20]. Apart from this, the impact of the substrate miscut (via its influence on misfit dislocations) on epilayer quality has been demonstrated. As for GaN/AlN SLs, it was found that growth on vicinal Al2O3 (0001) substrates yields uniform layer structures with abrupt interfaces and good periodicity [21]. It was demonstrated that the use of an appropriate vicinal substrate with an angle of ~0.5° improves the quality of GaN/AlN SLs, leaving an extremely flat surface without any growth-induced defects.
In this work, we present the peculiarities of crystallographic tilting and strain relaxation in GaN/AlN SLs grown on vicinal GaN (0001) surfaces by plasma-assisted molecular beam epitaxy (PAMBE). Structural properties and the evolution of the deformation state as a result of changes in the number of periods in GaN/AlN SLs are investigated by high-resolution X-ray diffraction (HRXRD), X-ray reflectivity (XRR), and atomic force microscopy (AFM) techniques.
The GaN/AlN SLs were grown by PAMBE under an activated nitrogen plasma flux in a metal-rich regime at a substrate temperature of ~760 °C. Three SLs consisting of GaN/AlN (5/5 nm) periods capped with an additional GaN (10 nm) layer were grown on GaN buffer layers (100 nm) deposited on GaN (4 μm)/c-Al2O3 templates. The numbers of periods were 5 (sample number S5), 10 (S10), and 20 (S20). The evolution of the deformation state and structural properties of the SLs were examined ex situ using PANalytical X'Pert Pro MRD XL (X'Pert, PANalytical B.V., Almelo, The Netherlands) and NanoScope IIIa Dimension 3000™ (Digital Instruments, Inc., Tonawanda, NY, USA) systems for HRXRD and AFM characterization. For HRXRD, we used a standard four-bounce Ge (220) monochromator and a three-bounce (022) channel-cut Ge analyzer crystal along with a 1.6-kW X-ray tube with CuKα1 radiation and vertical line focus.
XRD Characterization
To accurately determine the misorientation angles of the GaN substrates \( \left({\alpha}_0^{\mathrm{GaN}}\right) \) and the GaN/AlN SLs \( \left({\alpha}_0^{\mathrm{SL}}\right) \), i.e., the crystallographic tilts of the GaN and SL [0001] axes from the surface normal direction, ω − φ 2D intensity scattering maps for the GaN (0002) and the SL (0002) reflections were measured in the azimuthal scanning range of φ = 0° to 360°, with a step size of 10°. Typical ω − φ 2D maps for S20 are shown in Fig. 1(a, b). From the position, ω, of the diffraction maximum as a function of the azimuthal angle φ, i.e., ω(φ), we can determine the offset angle as a function of azimuth, α(φ) = ω(φ) − θ B (where θ B is the Bragg angle).
The experimental ω − φ 2D intensity scattering maps of a GaN (0002) and b SL (0002) reflections for S20. The red curves are the fitted offset angles of α GaN and α SL with Eq. (1). The subtracted α GaN − α SL curve along with the φ-scan for GaN \( \left(10\overline{1}2\right) \) reflection is shown in c. The inset demonstrates the misorientation angles relative to the surface normal direction
The misorientation angle, α 0, of a target lattice plane was found by fitting the experimental function, α(φ), with the following equation:
$$ \tan\left(\alpha(\varphi) + c_1\right) = \cos\left(\varphi + c_2\right) \times \tan\left(\alpha_0\right) $$
where c 1 and c 2 are the fitting parameters [19]. The fitted \( {\alpha}_0^{\mathrm{GaN}} \) and \( {\alpha}_0^{\mathrm{SL}} \) values along with the extracted crystallographic tilt (\( \varDelta {\alpha}_0^{\mathrm{SL}}={\alpha}_0^{\mathrm{GaN}}-{\alpha}_0^{\mathrm{SL}} \)) of the SL are summarized in Table 1 for all samples. As can be seen, the misorientation of the GaN (0002) is 0.69° ± 0.015° and that of the SL (0002) is ~0.67° ± 0.015°, both tilted to the same azimuth without any significant phase shift (i.e., difference in φ). To determine the crystallographic direction of the misorientation, the φ-scan of GaN \( \left(10\overline{1}2\right) \) reflection was measured in the same azimuthal scanning range as above. As can be seen from Fig. 1(c), the orientation of the crystallographic tilts of GaN and the SL follows the same crystallographic direction, along the GaN \( \left[10\overline{1}0\right] \). It should be noted that an additional tilt \( \left(\varDelta {\alpha}_0^{\mathrm{SL}}\right) \) of the lattice c-planes of the SLs with respect to the lattice c-planes of the GaN substrates is not the same for all samples (see Table 1). In order to establish the relationship between the tilt of the lattice of SLs and the offcut angles of GaN substrate, their lattice parameters need to be taken into account.
The misorientation angles of GaN substrate and SLs for each sample. The fitted \( {\alpha}_0^{\mathrm{GaN}} \) and \( {\alpha}_0^{\mathrm{SL}} \) values are given along with the experimental (\( \varDelta {\alpha}_0^{\mathrm{SL}} \)) and calculated (\( \varDelta {\alpha}_0^{\mathrm{SL}}\kern0.5em \left(\mathrm{Nagai}\right) \)) crystallographic tilt of SLs
Columns: \( \alpha_0^{\mathrm{GaN}} \) (°), \( \alpha_0^{\mathrm{SL}} \) (°), \( \varDelta\alpha_0^{\mathrm{SL}} \) (°), \( \varDelta\alpha_0^{\mathrm{SL}} \) (Nagai) (°); the per-sample values were not recoverable from the extracted text.
In order to study the evolution of the deformation state and structural parameters of the GaN/AlN SLs, reciprocal space mapping (RSM) was used first. To avoid errors when characterizing tilted layers, we used the approach well described in [17]: the sample is mounted so that the direction of the miscut is perpendicular to the diffraction plane. The interplanar distances of asymmetric planes measured in such an arrangement are not influenced by the tilt. All samples were measured in the vicinity of the GaN \( \left(11\overline{2}4\right) \) reflection, and the results are shown in Fig. 2. It is seen that for all samples, the SL peaks are not vertically aligned with the GaN peak. This arrangement of the GaN and SL peaks on the Q x -axis indicates that the SL structures are not fully strained to the GaN buffer layer (a SLs ≠ a GaN). Moreover, the arrangement of the SL peaks on the Q z -axis indicates an evolution of the out-of-plane lattice parameter with the changing number of periods in the SLs. Therefore, the mean strain of the SLs must depend on the number of SL periods. The measured in-plane lattice parameters for the SLs and GaN buffers extracted from the asymmetrical RSMs are listed in Table 2. By comparing a SLs and a GaN, we can conclude that the in-plane strain relaxation of a SL increases with an increasing number of SL periods. This leads to a change in the relaxation degree of the individual layers of the SLs, i.e., the GaN QW and AlN barrier layers. To define the relaxation values of the GaN QW and AlN barrier layers, we used Eq. (2) [22].
The \( \left(11\overline{2}4\right) \) RSMs of GaN buffer layers and GaN/AlN SLs for S5, S10, and S20. The vertical dashed lines indicate the Q x positions for GaN and SLs
Structural parameters for SL layers and GaN substrate obtained from HRXRD data for the different samples investigated
Columns: a (nm) from the RSM \( \left(11\overline{2}4\right) \); R GaN (%); R AlN (%); T SL (nm); t GaN/t AlN (nm); Δω (arcsec) and N screw (×10⁸ cm⁻²) from the ω (0002) scans. Recoverable entries: a SL = 0.3183 ± 0.0001 nm; period-thickness values of 9.9 ± 0.15 nm and 9.45 ± 0.05 nm (consistent with the nominal 10-nm SL period); GaN template, a = 0.31878 ± 0.00002 nm. The remaining per-sample values were not recoverable from the extracted text.
$$ R_{\mathrm{well,\,barrier}} = 100 \times \frac{a_{\mathrm{SL}} - a_{\mathrm{AlN,\,GaN}}}{a_{\mathrm{GaN,\,AlN}} - a_{\mathrm{AlN,\,GaN}}} $$
where a SL = a well = a barrier and a GaN and a AlN are the bulk relaxed lattice parameters of GaN and AlN, respectively. If we assume that the GaN QW and the AlN barrier layers in the SLs are mutually lattice-matched with each other, then the sum of the relaxation values R well + R barrier = 100 %. Thus, a change in the relaxation degree of the entire SL by increasing the number of SL periods leads to the decrease and increase of relaxation degree of the GaN QW and AlN barrier, respectively (see Table 2).
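As a rough numerical illustration (assuming the commonly quoted bulk parameters \( a_{\mathrm{GaN}} \approx 0.3189 \) nm and \( a_{\mathrm{AlN}} \approx 0.3112 \) nm, and taking the measured \( a_{\mathrm{SL}} \approx 0.3183 \) nm of one of the samples in Table 2), Eq. (2) gives

$$ R_{\mathrm{well}} = 100 \times \frac{0.3183 - 0.3112}{0.3189 - 0.3112} \approx 92\ \% $$

leaving \( R_{\mathrm{barrier}} \approx 8\ \% \) under the mutual lattice-matching assumption, i.e., the GaN wells stay close to their bulk in-plane parameter while the AlN barriers carry most of the mismatch strain.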
Taking into account the relaxation values, R well, barrier, we simulated the HRXRD (0002) ω/2θ-scan using the X'Pert Epitaxy software package (see Fig. 3). First, we determined the SL period thickness directly from the separation angle of the SL satellite peaks. Next, by varying the GaN QW and AlN barrier thicknesses at fixed relaxation values, we achieved a good fit to the experimental HRXRD spectra. The extracted SL periods (T SL) and the GaN QW and AlN barrier thicknesses (t GaN/t AlN) are given in Table 2. The simulations fit the experimental spectra quite well, although we observed some differences between the measured and designed thicknesses of the SL layers.
The experimental (gray curves) and fitted (color curves) (0002) ω/2θ XRD spectra for each sample. The inset represents the (0002) ω-scans for the zero-order satellite peak of SLs along with the FWHM (Δω) presented for each sample
In order to confirm the T SL, t GaN, and t AlN values obtained from the simulation of the (0002) ω/2θ XRD spectra, we additionally measured the ω/2θ XRR profiles for each sample. This method is not sensitive to the deformation of the lattice parameter. The thickness oscillations in XRR, i.e., Kiessig fringes, are caused by the interference of waves reflected at the layer surfaces; the oscillation period therefore determines the thicknesses associated with the well, the barrier, and the SL period, respectively. To assess these values, the experimental XRR scans were fitted with the X'Pert Reflectivity software package (see Fig. 4). Comparing the two techniques, we observed only a small disparity in the calculated layer thicknesses, easily accounted for by experimental error, while the trends in thickness change remain the same. The observed thickness reduction, mainly of the GaN QW, can be explained by the following [23, 24]: (i) the Al-N binding energy is much higher than the Ga-N binding energy, (ii) the exchange between the Ga atoms of the QW and the Al adatoms of the barrier is thermally activated, and (iii) the strain in the GaN QWs influences the Al-Ga exchange mechanism.
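As a simple illustration of the principle behind the fringe analysis (an approximate small-angle relation, not the exact fitting procedure implemented in the software), the thickness t associated with a Kiessig fringe period \( \Delta\theta \) (in radians) is

$$ t \approx \frac{\lambda}{2\Delta\theta} $$

so with CuKα1 radiation (\( \lambda \approx 0.15406 \) nm), a fringe spacing of \( \Delta\theta \approx 7.7 \times 10^{-4} \) rad (about 0.044°) corresponds to \( t \approx 100 \) nm, the order of the total thickness of the 10-period SL.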
The XRR profiles of S20 (green), S10 (blue), and S5 (red). The gray curves are the experimental XRR profiles for each sample
In order to more deeply study the evolution of the structural parameters with the changing number of periods in the SLs, we measured the ω-scan for the zero-order satellite peaks (see the inset in Fig. 3). As we can see, the full width at half maximum (Δω) of the (0002) rocking curves for the SL peaks is larger than that for the GaN buffer peak for all samples. By using the equation \( N_{\mathrm{s}} = \Delta\omega^2(0002)/(4.35 \times |b_{\mathrm{s}}|^2) \), where \( b_{\mathrm{s}} \) is the Burger's vector of screw-type threading dislocations (TDs), we calculate the density of screw-type TDs (N screw). The pure screw-type TD has a Burger's vector \( |b_{\mathrm{s}}| = c = 0.51851 \) nm in the [0001] direction. As can be seen from Table 2, N screw for the SLs is higher than that for the GaN buffer for all samples. Moreover, a non-monotonic change in the density of TDs with the changing number of periods in the SLs is evident. This indicates that some critical thickness has been exceeded in the growth of S20.
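For orientation (an illustrative calculation with an assumed rocking-curve width rather than a value from Table 2), a FWHM of \( \Delta\omega = 300 \) arcsec \( \approx 1.45 \times 10^{-3} \) rad would give

$$ N_{\mathrm{s}} = \frac{(1.45 \times 10^{-3})^2}{4.35 \times (5.1851 \times 10^{-8}\ \mathrm{cm})^2} \approx 1.8 \times 10^{8}\ \mathrm{cm}^{-2} $$

which is the 10⁸ cm⁻² order of magnitude reported here.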
The densities of TDs in the SLs appear to correlate directly with the strain in the SLs. In addition to the lattice mismatch between the GaN/AlN layers of the SL and the GaN substrate, determined through the ratio of layer thicknesses in the SL (t GaN/t AlN), the strain also accumulates in the film through an increasing number of periods. The critical thickness for plastic strain relaxation depends, finally, on the accumulated elastic energy, the surface free energy, and the energy required for the generation of a dislocation. In the case of the GaN/AlN SL deposited on GaN with t AlN/t GaN = 2 [5], it was shown experimentally that the average in-plane lattice parameter decreases gradually as strain builds up and dislocations are generated and reach a stable value after about 20 SL periods with a critical thickness of ~90 nm. For our SLs with t AlN/t GaN ∼ 1, the non-monotonic change in the density of TDs with the evolution of the deformation state in SLs indicates that the critical thickness for plastic relaxation is exceeded for films greater than 10 periods or a total thickness of ~97 nm. Below this thickness, like for S5 (~50 nm), the elastic strain is due entirely to lattice mismatch. Above the critical thickness like S20 (~190 nm), plastic relaxation through misfit dislocation generation must be considered in the evaluation of the total strain in addition to the lattice mismatch. Even after these considerations, the resulting strain cannot be explained. We must also take into account the strain relief due to the non-ideality of the crystal orientation, i.e., the miscut.
Finally, in order to include the miscut in the analysis, we must consider the out-of-plane lattice parameters of the SLs (c SL) and GaN (c GaN) which allows us to calculate the Nagai tilt angle, \( \varDelta {\alpha}_0^{\mathrm{SL}}\ \left(\mathrm{Nagai}\right) \), using the following equation [20]:
$$ \frac{\tan \varDelta\alpha_0^{\mathrm{SL}}\ (\mathrm{Nagai})}{\tan \alpha_0^{\mathrm{GaN}}} = \frac{c_{\mathrm{SL}} - c_{\mathrm{GaN}}}{c_{\mathrm{GaN}}} $$
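To gauge the expected magnitude (an order-of-magnitude estimate under stated assumptions, not a fit to the measured data), take \( \alpha_0^{\mathrm{GaN}} \approx 0.69° \), \( c_{\mathrm{GaN}} \approx 0.5185 \) nm, and a thickness-weighted average SL parameter \( c_{\mathrm{SL}} \approx (0.5185 + 0.4982)/2 \approx 0.508 \) nm for the nominally symmetric wells and barriers (bulk \( c_{\mathrm{AlN}} \approx 0.4982 \) nm, and neglecting strain in the c-parameters). Equation (3) then yields

$$ \varDelta\alpha_0^{\mathrm{SL}}\ (\mathrm{Nagai}) \approx \arctan\left[\tan(0.69°) \times \frac{0.508 - 0.5185}{0.5185}\right] \approx -0.014° $$

that is, a tilt of only hundredths of a degree, smaller in magnitude than the measured \( \varDelta\alpha_0^{\mathrm{SL}} \approx 0.02° \) quoted above.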
As can be seen from Table 1, for all SL samples, the measured tilt angles, \( \varDelta {\alpha}_0^{\mathrm{SL}} \), are larger than the Nagai angle predicted by Eq. (3). Huang et al. [15] reported that the tilt angle of the GaN layer obeys the Nagai model if the misorientation angles of the sapphire substrate are small. The small angle miscut is in fact the required approximation for the classical Nagai theory. However, despite the small misorientation angles of our GaN substrate, the classic Nagai theory does not appear to be valid for our samples. First of all, the small miscut approximation of the classical Nagai model only considers the out-of-plane lattice mismatch between two ideal crystal lattices at one interface, which would treat a superlattice as some average alloy. In general, the multiple layers of the SLs require a more complicated consideration of the difference in the out-of-plane lattice mismatch as well as a consideration of the in-plane mismatch. This was explicitly demonstrated for GaN/AlN layers grown on Si (111) substrates with different misorientation angles [18, 19]. Additionally, these models only consider tilting or lattice misorientation of single epitaxial interfaces. For our GaN/AlN SLs grown on vicinal GaN (0001) substrate, the deposition of each additional layer is characterized by (i) the presence of elastic and plastic relaxation components [5], due to the in-plane lattice mismatch with the resulting layers below it, and (ii) the tilting or a triclinic unit cell deformation, due to the out-of-plane lattice mismatch with the resulting layers below. The triclinic deformation of the unit cells of fully strained InGaN grown on vicinal GaN (0001) substrates was reported in [17]. A full analysis including these considerations is beyond the scope of this paper but should explain the result that the SL peak of S5 is not vertically aligned with the GaN substrate peak (Fig. 2(a)), even though S5 is below the critical thickness for plastic relaxation and is pseudomorphic with and fully strained to the substrate. The final result of this Nagai tilt is only a small additional elastic relaxation component at an interface, but here, we have shown that this elastic relaxation pathway can be considerable after the addition of many interfaces through the growth of a SL.
AFM Characterization
The topographic features of the epitaxially grown surface indirectly carry information about growth and relaxation mechanisms, residual deformations, and structural defects. Therefore, we used atomic force microscopy to analyze the surface features of the samples. To ensure high sensitivity and resolution, measurements were carried out in tapping mode using silicon tips with a nominal tip radius of less than 10 nm. The typical topography of the GaN-on-sapphire template substrate and of the cap layer of the multilayer AlN/GaN superlattice structures is shown in Fig. 5. Here, we see that the surface is densely covered with depressions, which are evidently the terminations of mixed or pure screw TDs with a Burger's vector, |b m|² = (1/3 × a)² + c², in the \( \left[11\overline{2}3\right] \) direction, or |b s| = c, in the [0001] direction [25]. The formation of pits at TD cores is possible because the strain energy density associated with surface-terminated threading dislocations is equivalent to a line of tension directed into the material [26]. All surfaces show characteristic features of the 2D step-flow growth mechanism of Ga-rich growth, but each sample has individual features. In the GaN buffer (Fig. 5a), narrow terraces are observed with a width of ~129 nm and a height of ~0.53 nm, which is very nearly the bilayer step height of GaN (0.518 nm). The direction of the terraces, of course, coincides with the direction of the misorientation of the sapphire substrate, while the surfaces of the terraces are atomically smooth. Dislocations act to pin the step flow of the terraces, forming the characteristic triangular shapes seen at the step edges and resulting in an observed dislocation density of ~0.5 × 10⁹ cm⁻². Deposition of the SL structures significantly increased the width of the terraces, which ranges from ~3500 nm in the 5-period SL sample to ~2500 nm in the 20-period SL sample, evidence of significant step bunching (Fig. 6). This step bunching also results in terrace heights ranging from ~8 nm in the 5-period SL to ~5 nm in the 20-period SL. When averaging the width/height of the terraces over 100 × 100 μm areas, the misorientation angle of the surface layers is 0.08°, 0.06°, and 0.10° for S5, S10, and S20, respectively. For the GaN buffer layer, the misorientation is much larger, ~0.23°.
AFM image of the surfaces of the substrate GaN/Al2O3 (a) and the SL structures of AlN/GaN with 5, 10, and 20 periods, respectively, in b–d. The arrows indicate crystallographic directions and the direction of misorientation of substrate
AFM images of surfaces of the AlN/GaN SL structures with 5, 10, and 20 periods, respectively, in a–c. The dotted lines mark the triangular-shaped areas of anisotropic step-flow growth resulting from the long-range pinning effects of the surface-terminated dislocations
In addition, the surfaces of the terraces cease to be atomically smooth and the nanograin substructures appear (Fig. 5b–d). Nanograins of about 20 nm are located in the vicinity of the dislocation pits on the 5-period SL. For the 20-period SL sample, the nanograin size is ~40 nm. However, the nanogranularity of the terraces is much more significant and in fact more uniform in the 10-period SL sample where the typical grain size is 50–60 nm and the grains evenly cover the entire surface. Looking, however, at the surfaces over a larger scale (Fig. 6a, b), we find the formation of triangular-shaped areas which are caused again by the blocking of the isotropic step-flow growth similar to the GaN buffer layer [25, 27, 28].
According to AFM measurements, samples S5 and S10 display increased TD densities of 1.2 × 10⁹ and 1.7 × 10⁹ cm⁻², respectively. However, S20 exhibits a dislocation density of only 0.7 × 10⁹ cm⁻², commensurate with the dislocation density in the GaN buffer layer. At the same time, though, this reduction in dislocation density is accompanied by an observed cracking of the structure (Fig. 6c). It is known [29, 30] that this type of crack morphology is attributed to the combination of cleavage of \( \left(10\overline{1}0\right) \)-like planes in GaN and parting of \( \left(11\overline{2}0\right) \)-like planes in α-Al2O3, because cracking in the substrate and the epitaxial layer occurs simultaneously. Cracks generated at the (0001) GaN/(0001) α-Al2O3 interface run dominantly along the GaN \( \left[11\overline{2}0\right] \) and α-Al2O3 \( \left[10\overline{1}0\right] \) directions. It should be noted that the trend in TD density from AFM correlates well with the trend in screw-type TD density from the HRXRD data. A comparison of absolute values is not possible, because AFM gives the density of both screw- and edge-type TDs, whereas from HRXRD we extract only the density of screw-type TDs. Moreover, since cracking of the structure can broaden the XRD rocking curves, the dislocation density extracted from HRXRD may be overestimated.
The experimental data obtained illustrate the progress of a number of structural relaxation processes, each of which is manifested to a different degree depending on the level of strain in the SLs. The most significant of these are cracking and the generation of dislocations.
In this work, we have investigated strain relaxation through the growth and analysis of GaN/AlN SLs. SLs were grown nominally with symmetric 5-nm GaN wells and 5-nm AlN barriers repeated 5, 10, and 20 times. Detailed X-ray diffraction and reflectivity measurements demonstrated that the relaxation of the films generally increased with an increasing number of repetitions; however, the additional consideration of a Nagai tilt at each interface can explain the small apparent relaxation for the 5-period SL which is considered to be completely strained to the substrate. Additionally hidden is a transition to a different relaxation mechanism as the growth exceeds a critical thickness for relaxation, as AFM measurements revealed a sharp drop in the density of pits and associated threading dislocations for the 20-period sample. At the same time, an increase in the number of observed cracks was found for the 20-period sample. These results indicate a total SL thickness beyond which growth may be limited for the formation of high-quality coherent crystal structures; however, they may indicate a growth window for the reduction of threading dislocations by controlled relaxation of the epilayers.
This work was supported by the National Science Foundation Engineering Research Center for Power Optimization of Electro Thermal Systems (POETS) with cooperative agreement EEC-1449548 and the NAS of Ukraine via grant no. 24/15-H.
AK, SK, VK, NS, and HS carried out the XRD/XRR studies and experiment interpretation. PL carried out the AFM studies. SL, MW, and YuM grew the structures. AK, PL, and MW drafted the manuscript. AK, VK, AB, YuM, and GS participated in the design and coordination of the study. All authors read and approved the final manuscript.
V. Lashkaryov Institute of Semiconductor Physics, National Academy of Sciences of Ukraine, Pr. Nauky 41, 03680 Kiev, Ukraine
Institute for Nanoscience and Engineering, University of Arkansas, West Dickson 731, Fayetteville, AR 72701, USA
State Key Laboratory of Electronic Thin Film and Integrated Devices, University of Electronic Science and Technology of China, 610054 Chengdu, China
Morkoç H (2008) Handbook of nitride semiconductors and devices, vol 2: electronic and optical processes in nitrides. Wiley-VCH, Weinheim. doi:10.1002/9783527628414
Beeler M, Trichas E, Monroy E (2013) III-nitride semiconductors for intersubband optoelectronics: a review. Semicond Sci Technol 28:074022. doi:10.1088/0268-1242/28/7/074022
Kotsar Y, Monroy E (2014) Infrared emitters made from III-nitride semiconductors. In: Nitride semiconductor light-emitting diodes (LEDs): materials, technologies and applications, pp 533–565. doi:10.1533/9780857099303.3.533
Kandaswamy PK, Guillot F, Bellet-Amalric E et al (2008) GaN/AlN short-period superlattices for intersubband optoelectronics: a systematic study of their epitaxial growth, design, and performance. J Appl Phys 104:093501. doi:10.1063/1.3003507
Kandaswamy PK, Bougerol C, Jalabert D et al (2009) Strain relaxation in short-period polar GaN/AlN superlattices. J Appl Phys 106:013526. doi:10.1063/1.3168431
Kotsar Y, Doisneau B, Bellet-Amalric E et al (2011) Strain relaxation in GaN/AlxGa1-xN superlattices grown by plasma-assisted molecular-beam epitaxy. J Appl Phys 110:033501. doi:10.1063/1.3618680
Kotsar Y, Kandaswamy PK, Das A et al (2011) Strain relaxation in GaN/Al0.1Ga0.9N superlattices for mid-infrared intersubband absorption. J Cryst Growth 323:64–67. doi:10.1016/j.jcrysgro.2010.11.076
Kladko V, Kuchuk A, Lytvyn P et al (2012) Substrate effects on the strain relaxation in GaN/AlN short-period superlattices. Nanoscale Res Lett 7:289. doi:10.1186/1556-276X-7-289
Kladko VP, Kuchuk AV, Safryuk NV et al (2011) Influence of template type and buffer strain on structural properties of GaN multilayer quantum wells grown by PAMBE, an x-ray study. J Phys D Appl Phys 44:025403. doi:10.1088/0022-3727/44/2/025403
Sun HH, Guo FY, Li DY et al (2012) Intersubband absorption properties of high Al content AlxGa1-xN/GaN multiple quantum wells grown with different interlayers by metal organic chemical vapor deposition. Nanoscale Res Lett 7:649. doi:10.1186/1556-276X-7-649
Bourret A, Barski A, Rouvière JL et al (1998) Growth of aluminum nitride on (111) silicon: microstructure and interface structure. J Appl Phys 83:2003. doi:10.1063/1.366929
Schubert F, Merkel U, Mikolajick T, Schmult S (2014) Influence of substrate quality on structural properties of AlGaN/GaN superlattices grown by molecular beam epitaxy. J Appl Phys 115:083511. doi:10.1063/1.4866718
Lim CB, Ajay A, Bougerol C et al (2015) Nonpolar m-plane GaN/AlGaN heterostructures with intersubband transitions in the 5-10 THz band. Nanotechnology 26:435201. doi:10.1088/0957-4484/26/43/435201
Kyutt RN, Shcheglov MP, Ratnikov VV et al (2013) X-ray diffraction study of short-period AlN/GaN superlattices. Crystallogr Rep 58:953–958. doi:10.1134/S1063774513070109
Huang XR, Bai J, Dudley M et al (2005) Epitaxial tilting of GaN grown on vicinal surfaces of sapphire. Appl Phys Lett 86:211916. doi:10.1063/1.1940123
Huang XR, Bai J, Dudley M et al (2005) Step-controlled strain relaxation in the vicinal surface epitaxy of nitrides. Phys Rev Lett 95:086101. doi:10.1103/PhysRevLett.95.086101
Krysko M, Domagala JZ, Czernecki R, Leszczynski M (2013) Triclinic deformation of InGaN layers grown on vicinal surface of GaN (00.1) substrates. J Appl Phys 114:113512. doi:10.1063/1.4821969
Wang L, Huang F, Cui Z et al (2014) Crystallographic tilting of AlN/GaN layers on miscut Si (111) substrates. Mater Lett 115:89–91. doi:10.1016/j.matlet.2013.10.036
Liu HF, Zhang L, Chua SJ, Chi DZ (2014) Crystallographic tilt in GaN-on-Si (111) heterostructures grown by metal–organic chemical vapor deposition. J Mater Sci 49:3305–3313. doi:10.1007/s10853-014-8025-6
Nagai H (1974) Structure of vapor-deposited GaxIn1-xAs crystals. J Appl Phys 45:3789. doi:10.1063/1.1663861
Shen XQ, Yamamoto T, Nakashima S et al (2005) GaN/AlN super-lattice structures on vicinal sapphire (0001) substrates grown by rf-MBE. Phys Status Solidi 2:2385–2388. doi:10.1002/pssc.200461303
Liu XY, Holmström P, Jänes P et al (2007) Intersubband absorption at 1.5–3.5 μm in GaN/AlN multiple quantum wells grown by molecular beam epitaxy on sapphire. Phys Status Solidi 244:2892–2905. doi:10.1002/pssb.200675606
Kuchuk AV, Kladko VP, Petrenko TL et al (2014) Mechanism of strain-influenced quantum well thickness reduction in GaN/AlN short-period superlattices. Nanotechnology 25:245602. doi:10.1088/0957-4484/25/24/245602
Gogneau N, Jalabert D, Monroy E et al (2004) Influence of AlN overgrowth on structural properties of GaN quantum wells and quantum dots grown by plasma-assisted molecular beam epitaxy. J Appl Phys 96:1104. doi:10.1063/1.1759785
Vézian S, Massies J, Semond F et al (2000) In situ imaging of threading dislocation terminations at the surface of GaN(0001) epitaxially grown on Si(111). Phys Rev B 61:7618–7621. doi:10.1103/PhysRevB.61.7618
Heying B, Tarsa EJ, Elsass CR et al (1999) Dislocation mediated surface morphology of GaN. J Appl Phys 85:6470. doi:10.1063/1.370150
Xie MH, Seutter SM, Zhu WK et al (1999) Anisotropic step-flow growth and island growth of GaN(0001) by molecular beam epitaxy. Phys Rev Lett 82:2749–2752. doi:10.1103/PhysRevLett.82.2749
Zauner ARA, Aret E, van Enckevort WJP et al (2002) Homo-epitaxial growth on the N-face of GaN single crystals: the influence of the misorientation on the surface morphology. J Cryst Growth 240:14–21. doi:10.1016/S0022-0248(01)02389-2
Itoh N, Rhee JC, Kawabata T, Koike S (1985) Study of cracking mechanism in GaN/α-Al2O3 structure. J Appl Phys 58:1828. doi:10.1063/1.336035
Etzkorn EV, Clarke DR (2001) Cracking of GaN films. J Appl Phys 89:1025. doi:10.1063/1.1330243
# Understanding arrays and their use in interpolation
Arrays are an essential data structure in C++. They allow us to store multiple values of the same data type in a single variable. In the context of interpolation, arrays are commonly used to store sets of data points that we want to interpolate between.
In C++, arrays are declared by specifying the data type of the elements and the size of the array. For example, to declare an array of integers with a size of 5, we would write:
```cpp
int myArray[5];
```
We can then assign values to the elements of the array using indexing. The first element of the array has an index of 0, the second element has an index of 1, and so on. For example, to assign the value 10 to the first element of the array, we would write:
```cpp
myArray[0] = 10;
```
Arrays can be used to store data points for interpolation. For example, if we have a set of x and y coordinates, we can store them in two separate arrays:
```cpp
double xValues[] = {1.0, 2.0, 3.0, 4.0, 5.0};
double yValues[] = {2.0, 4.0, 6.0, 8.0, 10.0};
```
These arrays represent the points (1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0), and (5.0, 10.0). We can then use these arrays to perform interpolation calculations.
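As a minimal sketch, the two arrays can be walked in step to work with the points as pairs:

```cpp
#include <iostream>

int main() {
    double xValues[] = {1.0, 2.0, 3.0, 4.0, 5.0};
    double yValues[] = {2.0, 4.0, 6.0, 8.0, 10.0};
    // Print each (x, y) data point; the index i pairs the two arrays
    for (int i = 0; i < 5; i++) {
        std::cout << "(" << xValues[i] << ", " << yValues[i] << ")" << std::endl;
    }
    return 0;
}
```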
Arrays are a fundamental concept in C++ and understanding how to use them is crucial for implementing interpolation algorithms. In the following sections, we will explore different data types in C++ and their role in interpolation.
## Exercise
Declare an array of integers named `myArray` with a size of 3. Assign the values 1, 2, and 3 to the elements of the array.
### Solution
```cpp
int myArray[3];
myArray[0] = 1;
myArray[1] = 2;
myArray[2] = 3;
```
# Different data types in C++ and their role in interpolation
C++ supports a variety of data types that can be used in interpolation. The choice of data type depends on the nature of the data being interpolated and the desired precision of the calculations.
1. Integer: Integers are used to represent whole numbers without fractional parts. They are commonly used when the data being interpolated is discrete, such as counting the number of occurrences of an event. For example, if we are interpolating the number of app downloads over time, we might use integers to represent the number of downloads at each time point.
2. Floating-point: Floating-point numbers are used to represent real numbers with fractional parts. They are commonly used when the data being interpolated is continuous, such as measuring temperature or distance. In C++, the main floating-point types are `float` and `double` (there is also `long double`). The `double` type provides more precision than `float` and is commonly used in scientific and engineering applications.
3. Character: Characters are used to represent individual characters, such as letters or symbols. They are commonly used when interpolating text data, such as in natural language processing tasks. Characters are represented using single quotes, such as `'a'` or `'%'`.
4. String: Strings are used to represent sequences of characters. They are commonly used when interpolating text data that consists of multiple characters, such as sentences or paragraphs. Strings are represented using double quotes, such as `"Hello, world!"`.
5. Boolean: Booleans are used to represent logical values, such as `true` or `false`. They are commonly used when performing conditional operations in interpolation algorithms. Booleans are represented using the keywords `true` and `false`.
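As a quick illustration, here is how variables of each of these types might be declared in an interpolation program (the names are illustrative only):

```cpp
#include <string>

int downloadCount = 1024;            // integer: discrete count data
double temperature = 21.5;           // floating-point: continuous measurement
char gradeSymbol = 'A';              // character: a single symbol
std::string seriesLabel = "daily";   // string: a sequence of characters
bool withinRange = true;             // boolean: a logical flag for conditions
```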
## Exercise
Which data type would you use to represent the temperature in degrees Celsius? Which data type would you use to represent the number of students in a classroom?
### Solution
To represent the temperature in degrees Celsius, we would use a floating-point number, such as `double` or `float`, since temperature can have fractional values.
To represent the number of students in a classroom, we would use an integer, since the number of students is a discrete value and cannot have fractional parts.
# Creating and using functions for interpolation
Functions are an essential part of programming in C++. They allow us to organize our code into reusable blocks and perform specific tasks. In the context of interpolation, functions can be used to define the mathematical formulas and algorithms used to calculate interpolated values.
To create a function in C++, we need to specify the return type, the name of the function, and any input parameters it requires. The return type specifies the type of value that the function will return after performing its calculations. The input parameters are the values that the function requires to perform its calculations.
Here is an example of a function that calculates the linear interpolation between two points:
```cpp
double linearInterpolation(double x1, double y1, double x2, double y2, double x) {
double y = y1 + (y2 - y1) * (x - x1) / (x2 - x1);
return y;
}
```
In this function, `x1` and `y1` represent the coordinates of the first point, `x2` and `y2` represent the coordinates of the second point, and `x` represents the x-coordinate of the point we want to interpolate. The function calculates the y-coordinate of the interpolated point using the formula for linear interpolation and returns it.
To use this function, we can call it and pass the appropriate values as arguments:
```cpp
double interpolatedValue = linearInterpolation(2.0, 5.0, 4.0, 9.0, 3.0);
```
In this example, the function `linearInterpolation` is called with the arguments `2.0, 5.0, 4.0, 9.0, 3.0`. The function performs the calculations and returns the interpolated value, which is then stored in the variable `interpolatedValue`.
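Working through the arithmetic, this call returns 5.0 + (9.0 - 5.0) × (3.0 - 2.0) / (4.0 - 2.0) = 7.0, the y-coordinate of the point on the line through (2.0, 5.0) and (4.0, 9.0) at x = 3.0.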
## Exercise
Create a function named `polynomialInterpolation` that performs polynomial interpolation between three points. The function should take seven input parameters: `double x0, double y0, double x1, double y1, double x2, double y2, double x`, where `x` is the x-coordinate of the point to interpolate. The function should return the y-coordinate of the interpolated point.
### Solution
```cpp
double polynomialInterpolation(double x0, double y0, double x1, double y1, double x2, double y2, double x) {
double y = y0 * ((x - x1) * (x - x2)) / ((x0 - x1) * (x0 - x2)) +
y1 * ((x - x0) * (x - x2)) / ((x1 - x0) * (x1 - x2)) +
y2 * ((x - x0) * (x - x1)) / ((x2 - x0) * (x2 - x1));
return y;
}
```
# Using loops for efficient interpolation
Loops are a powerful tool in programming that allow us to repeat a block of code multiple times. In the context of interpolation, loops can be used to efficiently calculate interpolated values for a large set of data points.
One common use of loops in interpolation is to iterate over a set of data points and calculate the interpolated value for each point. This can be done using a for loop, which allows us to specify the starting point, the ending point, and the increment for each iteration.
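The general shape of a for loop is shown below; the three parts in the parentheses control the iteration:

```cpp
// for (initialization; condition; increment)
for (int i = 0; i < 10; i++) {
    // the body runs once for each value of i from 0 to 9
}
```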
Here is an example of using a for loop to perform linear interpolation for a set of data points:
```cpp
double linearInterpolation(double x1, double y1, double x2, double y2, double x) {
double y = y1 + (y2 - y1) * (x - x1) / (x2 - x1);
return y;
}
void interpolateData(double* x, double* y, int size, double* interpolatedValues) {
for (int i = 0; i < size - 1; i++) {
interpolatedValues[i] = linearInterpolation(x[i], y[i], x[i + 1], y[i + 1], (x[i] + x[i + 1]) / 2.0);
}
}
```
In this example, the function `interpolateData` takes an array of x-values `x`, an array of y-values `y`, the size of the arrays `size`, and an array to store the interpolated values `interpolatedValues`. The function uses a for loop to iterate over the intervals between consecutive data points and calls the `linearInterpolation` function to calculate the interpolated value at the midpoint of each interval (interpolating exactly at a known data point would simply return its y-value).
To use this function, we can pass the appropriate arrays and size as arguments:
```cpp
double x[] = {1.0, 2.0, 3.0, 4.0};
double y[] = {2.0, 4.0, 6.0, 8.0};
double interpolatedValues[3];
interpolateData(x, y, 4, interpolatedValues);
```
In this example, the arrays `x` and `y` contain the data points, and the array `interpolatedValues` will store the interpolated values at the interval midpoints. The function `interpolateData` is called with the arrays and the size of the arrays. After the function is executed, `interpolatedValues` contains 3.0, 5.0, and 7.0, the values interpolated at x = 1.5, 2.5, and 3.5.
## Exercise
Create a function named `splineInterpolation` that performs spline interpolation between four points. The function should take eight input parameters: `double x0, double y0, double x1, double y1, double x2, double y2, double x3, double y3`. The function should use a loop to calculate the interpolated values for a set of x-coordinates and store them in an array. The function should return a pointer to the array of interpolated values.
### Solution
```cpp
double* splineInterpolation(double x0, double y0, double x1, double y1, double x2, double y2, double x3, double y3) {
double* interpolatedValues = new double[100];  // caller must release this with delete[]
double step = (x3 - x0) / 100;
double x = x0;
for (int i = 0; i < 100; i++) {
interpolatedValues[i] = cubicSplineInterpolation(x0, y0, x1, y1, x2, y2, x3, y3, x);
x += step;
}
return interpolatedValues;
}
```
In this example, the function `splineInterpolation` creates a dynamic array `interpolatedValues` to store the interpolated values. It uses a loop to calculate the interpolated value for each x-coordinate by calling the `cubicSplineInterpolation` function. The x-coordinate is incremented by the `step` value in each iteration. Finally, the function returns a pointer to the array of interpolated values.
Note that in this example, the `cubicSplineInterpolation` function is used to perform the actual spline interpolation. This function is not shown here, but it can be implemented using the formulas and algorithms specific to cubic spline interpolation.
# Pointers and their importance in data interpolation
Pointers are an important concept in C++ programming, and they play a crucial role in data interpolation. In interpolation, we often need to manipulate and access data values efficiently, and pointers provide a way to do this.
A pointer is a variable that stores the memory address of another variable. By using pointers, we can directly access and modify the value stored at a particular memory location. This is particularly useful in interpolation because it allows us to efficiently access and manipulate large sets of data.
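A minimal illustration of declaring and dereferencing a pointer:

```cpp
double value = 3.14;
double* ptr = &value;  // ptr stores the memory address of value
*ptr = 2.71;           // dereferencing ptr modifies value itself
```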
For example, when performing linear interpolation, we often need to access the x and y values of two adjacent data points. Instead of creating separate variables to store these values, we can use pointers to directly access the values stored in the arrays that hold the data points.
Here's an example that demonstrates the use of pointers in linear interpolation:
```cpp
double linearInterpolation(double* x, double* y, int size, double target) {
for (int i = 0; i < size - 1; i++) {
if (target >= x[i] && target <= x[i + 1]) {
double* x1 = &x[i];
double* x2 = &x[i + 1];
double* y1 = &y[i];
double* y2 = &y[i + 1];
double interpolatedValue = *y1 + (*y2 - *y1) * (target - *x1) / (*x2 - *x1);
return interpolatedValue;
}
}
return 0.0; // default value if target is outside the range of the data
}
```
In this example, the function `linearInterpolation` takes arrays `x` and `y` that store the x and y values of the data points, the size of the arrays `size`, and the target value for interpolation `target`. The function uses a loop to find the two adjacent data points that bracket the target value. It then uses pointers to directly access the x and y values of these data points and performs the linear interpolation calculation.
By using pointers, we avoid the need to create separate variables to store the x and y values, which can be especially useful when dealing with large sets of data. Pointers allow us to efficiently access and manipulate data values, making them an important tool in data interpolation.
## Exercise
Consider the following code snippet:
```cpp
double x[] = {1.0, 2.0, 3.0, 4.0};
double y[] = {2.0, 4.0, 6.0, 8.0};
double target = 2.5;
double interpolatedValue = linearInterpolation(x, y, 4, target);
```
What is the value of `interpolatedValue` after executing this code?
### Solution
The value of `interpolatedValue` is 5.0. This is because the target value 2.5 falls between the second and third data points, (2.0, 4.0) and (3.0, 6.0), and linear interpolation is used to calculate the interpolated value.
# Linear interpolation: definition and implementation
Linear interpolation is a simple and commonly used method for estimating values between two known data points. It assumes that the relationship between the data points is linear, meaning that the values change at a constant rate.
In linear interpolation, we find the equation of the straight line that passes through two known data points and use this equation to estimate the value at a target point between the two data points.
The equation for linear interpolation can be written as:
$$y = y_1 + \frac{{(y_2 - y_1) \cdot (x - x_1)}}{{x_2 - x_1}}$$
where:
- $y$ is the estimated value at the target point
- $y_1$ and $y_2$ are the values at the two known data points
- $x$ is the target point
- $x_1$ and $x_2$ are the positions of the two known data points
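For example, interpolating between the known points (2.0, 4.0) and (3.0, 6.0) at the target $x = 2.5$ gives

$$y = 4.0 + \frac{(6.0 - 4.0) \cdot (2.5 - 2.0)}{3.0 - 2.0} = 5.0$$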
To implement linear interpolation in C++, we can create a function that takes the arrays of x and y values, the size of the arrays, and the target value as input. The function then iterates through the arrays to find the two data points that bracket the target value. It calculates the estimated value using the linear interpolation equation and returns the result.
Here's an example implementation of the linear interpolation function:
```cpp
double linearInterpolation(double* x, double* y, int size, double target) {
for (int i = 0; i < size - 1; i++) {
if (target >= x[i] && target <= x[i + 1]) {
double x1 = x[i];
double x2 = x[i + 1];
double y1 = y[i];
double y2 = y[i + 1];
double interpolatedValue = y1 + (y2 - y1) * (target - x1) / (x2 - x1);
return interpolatedValue;
}
}
return 0.0; // default value if target is outside the range of the data
}
```
In this implementation, the function `linearInterpolation` takes arrays `x` and `y` that store the x and y values of the data points, the size of the arrays `size`, and the target value for interpolation `target`. The function uses a loop to find the two adjacent data points that bracket the target value. It then calculates the interpolated value using the linear interpolation equation and returns the result.
## Exercise
Consider the following code snippet:
```cpp
double x[] = {1.0, 2.0, 3.0, 4.0};
double y[] = {2.0, 4.0, 6.0, 8.0};
double target = 2.5;
double interpolatedValue = linearInterpolation(x, y, 4, target);
```
What is the value of `interpolatedValue` after executing this code?
### Solution
The value of `interpolatedValue` is 5.0. This is because the target value 2.5 falls between the second and third data points, (2.0, 4.0) and (3.0, 6.0), and linear interpolation is used to calculate the interpolated value.
# Polynomial interpolation: concepts and application
Polynomial interpolation is a method for estimating values between known data points using a polynomial function. Unlike linear interpolation, which assumes a linear relationship between data points, polynomial interpolation allows for more complex relationships by fitting a polynomial curve to the data.
The general form of a polynomial function is:
$$y = a_0 + a_1x + a_2x^2 + \ldots + a_nx^n$$
where:
- $y$ is the estimated value at the target point
- $a_0, a_1, a_2, \ldots, a_n$ are the coefficients of the polynomial
- $x$ is the target point
To perform polynomial interpolation, we need to determine the coefficients of the polynomial that best fits the given data points. This can be done using various methods, such as the method of least squares or Lagrange interpolation.
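For instance, in the Lagrange form the polynomial through $n+1$ points $(x_0, y_0), \ldots, (x_n, y_n)$ is written directly in terms of the data, with no linear system to solve:

$$y = \sum_{i=0}^{n} y_i \prod_{\substack{j=0 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j}$$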
Once we have the coefficients, we can use the polynomial function to estimate the value at a target point. The accuracy of the estimation depends on the degree of the polynomial and the distribution of the data points.
In C++, we can implement polynomial interpolation by creating a function that takes the arrays of x and y values, the size of the arrays, the degree of the polynomial, and the target value as input. The function then calculates the coefficients of the polynomial using a suitable method and evaluates the polynomial at the target value to obtain the estimated value.
Here's an example implementation of a polynomial interpolation function using the Lagrange form:

```cpp
double polynomialInterpolation(double* x, double* y, int size, int degree, double target) {
    // Interpolate through the first degree + 1 data points using the
    // Lagrange form of the interpolating polynomial
    int n = degree + 1;
    if (n > size) {
        n = size;  // cannot use more points than are available
    }
    double interpolatedValue = 0.0;
    for (int i = 0; i < n; i++) {
        // Evaluate the i-th Lagrange basis polynomial at the target value
        double basis = 1.0;
        for (int j = 0; j < n; j++) {
            if (j != i) {
                basis *= (target - x[j]) / (x[i] - x[j]);
            }
        }
        interpolatedValue += y[i] * basis;
    }
    return interpolatedValue;
}
```
In this implementation, the function `polynomialInterpolation` takes arrays `x` and `y` that store the x and y values of the data points, the size of the arrays `size`, the degree of the polynomial `degree`, and the target value for interpolation `target`. The function evaluates the Lagrange basis polynomials at the target value and sums their contributions, weighted by the corresponding y values, to obtain the interpolated value. Note that the Lagrange form passes exactly through the chosen data points; a least-squares fit, by contrast, requires solving the normal equations and is the method of choice when there are more data points than polynomial coefficients.
## Exercise
Consider the following code snippet:
```cpp
double x[] = {1.0, 2.0, 3.0, 4.0};
double y[] = {2.0, 4.0, 6.0, 8.0};
double target = 2.5;
double interpolatedValue = polynomialInterpolation(x, y, 4, 2, target);
```
What is the value of `interpolatedValue` after executing this code?
### Solution
The value of `interpolatedValue` is 5.0. The target value 2.5 falls between the second and third data points, and because the data lie exactly on the line y = 2x, the degree 2 interpolating polynomial reproduces that line, giving 2 × 2.5 = 5.0.
# Spline interpolation: theory and practical examples
Spline interpolation is a method for estimating values between known data points using piecewise-defined polynomial functions called splines. Unlike polynomial interpolation, which fits a single polynomial curve to the entire data set, spline interpolation breaks the data set into smaller intervals and fits a separate polynomial curve to each interval.
The basic idea behind spline interpolation is to ensure that the polynomial curves are smooth and continuous at the points where they meet. This is achieved by imposing certain conditions on the polynomials and their derivatives at the interval boundaries.
There are different types of splines, such as linear splines, quadratic splines, and cubic splines, depending on the degree of the polynomial used in each interval. Cubic splines are the most commonly used because they provide a good balance between flexibility and smoothness.
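On each interval $[x_i, x_{i+1}]$, a cubic spline takes the form

$$S_i(x) = y_i + b_i (x - x_i) + c_i (x - x_i)^2 + d_i (x - x_i)^3$$

where the coefficients $b_i$, $c_i$, and $d_i$ are chosen so that adjacent pieces agree in value, first derivative, and second derivative at the interior data points.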
To perform spline interpolation, we need to determine the coefficients of the polynomials for each interval. This can be done using various methods, such as the method of solving a tridiagonal system of equations or the method of solving a system of linear equations.
Once we have the coefficients, we can use the spline functions to estimate the value at a target point. The accuracy of the estimation depends on the degree of the splines and the distribution of the data points.
In C++, we can implement spline interpolation by creating a function that takes the arrays of x and y values, the size of the arrays, and the target value as input. The function then calculates the coefficients of the splines using a suitable method and evaluates the splines at the target value to obtain the estimated value.
Here's an example implementation of a cubic spline interpolation function:
```cpp
#include <vector>

double cubicSplineInterpolation(double* x, double* y, int size, double target) {
    // Interval widths and the right-hand side of the tridiagonal system
    std::vector<double> h(size - 1), alpha(size - 1, 0.0);
    std::vector<double> l(size), mu(size - 1), z(size);
    std::vector<double> c(size), b(size - 1), d(size - 1);
    for (int i = 0; i < size - 1; i++) {
        h[i] = x[i + 1] - x[i];
    }
    for (int i = 1; i < size - 1; i++) {
        alpha[i] = (3.0 / h[i]) * (y[i + 1] - y[i]) - (3.0 / h[i - 1]) * (y[i] - y[i - 1]);
    }
    // Forward sweep of the tridiagonal solve (natural boundary conditions)
    l[0] = 1.0;
    mu[0] = 0.0;
    z[0] = 0.0;
    for (int i = 1; i < size - 1; i++) {
        l[i] = 2.0 * (x[i + 1] - x[i - 1]) - h[i - 1] * mu[i - 1];
        mu[i] = h[i] / l[i];
        z[i] = (alpha[i] - h[i - 1] * z[i - 1]) / l[i];
    }
    l[size - 1] = 1.0;
    z[size - 1] = 0.0;
    c[size - 1] = 0.0;
    // Back substitution: recover the spline coefficients on each interval
    for (int j = size - 2; j >= 0; j--) {
        c[j] = z[j] - mu[j] * c[j + 1];
        b[j] = (y[j + 1] - y[j]) / h[j] - h[j] * (c[j + 1] + 2.0 * c[j]) / 3.0;
        d[j] = (c[j + 1] - c[j]) / (3.0 * h[j]);
    }
    // Find the interval that contains the target value
    int interval = 0;
    for (int i = 0; i < size - 1; i++) {
        if (target >= x[i] && target <= x[i + 1]) {
            interval = i;
            break;
        }
    }
    // Evaluate the cubic polynomial of that interval at the target value
    double dx = target - x[interval];
    double interpolatedValue = y[interval] + b[interval] * dx + c[interval] * dx * dx + d[interval] * dx * dx * dx;
    return interpolatedValue;
}
```
In this implementation, the function `cubicSplineInterpolation` takes arrays `x` and `y` that store the x and y values of the data points, the size of the arrays `size`, and the target value for interpolation `target`. The function calculates the coefficients of the cubic splines using the method of solving a tridiagonal system of equations and evaluates the cubic splines at the target value to obtain the interpolated value.
## Exercise

Consider the following data points:
```
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]
target = 2.5
```
Using the cubic spline interpolation function `cubicSplineInterpolation`, what is the value of `interpolatedValue` after executing the following code?
```cpp
double interpolatedValue = cubicSplineInterpolation(x, y, 4, target);
```
### Solution

The value of `interpolatedValue` is 5.0. The target value 2.5 falls between the second and third data points, and because the data lie exactly on the line y = 2x, the natural cubic spline reproduces that straight line, giving 2 × 2.5 = 5.0.
# Interpolation in higher dimensions
Interpolation in higher dimensions extends the concepts of interpolation to data sets with more than one independent variable. In addition to the independent variables, the data set also includes a dependent variable that we want to estimate at a target point.
In the previous sections, we focused on interpolation in one dimension, where the data points are arranged along a single axis. In higher dimensions, the data points are arranged in a grid-like structure, with each point having multiple independent variables.
To perform interpolation in higher dimensions, we need to determine the values of the dependent variable at the target point by considering the values of the independent variables at nearby data points. There are different methods for accomplishing this, such as linear interpolation, polynomial interpolation, and spline interpolation.
Linear interpolation in higher dimensions involves finding the nearest data points to the target point and estimating the value of the dependent variable by interpolating linearly between these points. This method assumes a linear relationship between the dependent variable and the independent variables.
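In two dimensions this is known as bilinear interpolation: within a grid cell, the estimate is a weighted average of the four corner values,

$$f(x, y) \approx (1 - t_x)(1 - t_y) f_{00} + t_x (1 - t_y) f_{10} + (1 - t_x) t_y f_{01} + t_x t_y f_{11}$$

where $t_x$ and $t_y$ are the normalized coordinates of the target point inside the cell, each ranging from 0 to 1.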
Polynomial interpolation in higher dimensions extends the concept of polynomial interpolation in one dimension to multiple dimensions. It involves fitting a polynomial surface to the data points and estimating the value of the dependent variable at the target point by evaluating the polynomial surface at that point.
Spline interpolation in higher dimensions extends the concept of spline interpolation in one dimension to multiple dimensions. It involves fitting a spline surface to the data points and estimating the value of the dependent variable at the target point by evaluating the spline surface at that point.
In C++, we can implement interpolation in higher dimensions by creating functions that take multidimensional arrays of independent variables and a one-dimensional array of the dependent variable as input. The functions then calculate the values of the dependent variable at the target point using the chosen interpolation method.
Here's an example implementation of bilinear interpolation, the two-dimensional form of linear interpolation on a rectangular grid:

```cpp
double bilinearInterpolation(double* x, double* y, double** z, int rows, int cols, double targetX, double targetY) {
    int i = 0;
    int j = 0;
    // Find the grid cell [x[i], x[i+1]] x [y[j], y[j+1]] containing the target point
    for (int k = 0; k < rows - 1; k++) {
        if (targetX >= x[k] && targetX <= x[k + 1]) {
            i = k;
            break;
        }
    }
    for (int k = 0; k < cols - 1; k++) {
        if (targetY >= y[k] && targetY <= y[k + 1]) {
            j = k;
            break;
        }
    }
    // Normalized coordinates of the target point inside the cell, in [0, 1]
    double tx = (targetX - x[i]) / (x[i + 1] - x[i]);
    double ty = (targetY - y[j]) / (y[j + 1] - y[j]);
    // Interpolate along the x direction on both edges of the cell,
    // then interpolate along the y direction between the two results
    double z0 = z[i][j] + tx * (z[i + 1][j] - z[i][j]);
    double z1 = z[i][j + 1] + tx * (z[i + 1][j + 1] - z[i][j + 1]);
    double interpolatedValue = z0 + ty * (z1 - z0);
    return interpolatedValue;
}
```
```
In this implementation, the function `bilinearInterpolation` takes one-dimensional arrays `x` and `y` that store the grid coordinates along each axis, a two-dimensional array `z` that stores the values of the dependent variable at the grid points (so `z[i][j]` is the value at the point `(x[i], y[j])`), the number of rows and columns in the grid, and the target values for interpolation `targetX` and `targetY`. The function locates the grid cell containing the target point, performs linear interpolation along the x direction on both edges of the cell, and then performs linear interpolation along the y direction between the two intermediate results.
## Exercise

Consider the following data points in two dimensions:

```
x = [1.0, 2.0]
y = [1.0, 2.0, 3.0]
z = [[2.0, 4.0, 6.0],
     [4.0, 6.0, 8.0]]
targetX = 1.5
targetY = 2.5
```

Using the bilinear interpolation function `bilinearInterpolation`, what is the value of `interpolatedValue` after executing the following code?

```cpp
double interpolatedValue = bilinearInterpolation(x, y, z, 2, 3, targetX, targetY);
```

### Solution

The value of `interpolatedValue` is 6.0. The target point (1.5, 2.5) lies in the grid cell spanned by x values [1.0, 2.0] and y values [2.0, 3.0], so tx = 0.5 and ty = 0.5. Interpolating along the x direction gives z0 = 5.0 and z1 = 7.0, and interpolating along the y direction between them gives 6.0.
# Interpolation error analysis and optimization
Interpolation is a powerful technique for estimating values of a function at points between known data points. However, it is important to understand that interpolation introduces errors, especially when the function being interpolated is complex or has irregular behavior.
Interpolation error refers to the difference between the true value of the function and the estimated value obtained through interpolation. The error can be caused by various factors, such as the choice of interpolation method, the density and distribution of the data points, and the smoothness of the function being interpolated.
To analyze interpolation error, we can compare the interpolated values with the true values of the function at a set of test points. By calculating the difference between the interpolated values and the true values, we can evaluate the accuracy of the interpolation method.
One common measure of interpolation error is the mean squared error (MSE), which is calculated by taking the average of the squared differences between the interpolated values and the true values. A lower MSE indicates a more accurate interpolation.
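For $n$ test points with true values $y_i$ and interpolated values $\hat{y}_i$, the MSE is

$$\text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)^2$$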
Optimizing interpolation involves finding the interpolation method and parameters that minimize the interpolation error. This can be done through trial and error or by using optimization algorithms. The goal is to find the best combination of interpolation method, data point distribution, and other factors that result in the lowest interpolation error.
In addition to optimizing the interpolation method itself, we can also optimize the data points used for interpolation. This can be done by selectively adding or removing data points to improve the accuracy of the interpolation. For example, if the function being interpolated has regions of high curvature, adding more data points in those regions can reduce interpolation error.
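As a minimal sketch of this idea, assuming the true function can still be evaluated at candidate points, one could insert midpoint samples wherever the linear interpolant deviates too much from it (the helper below is illustrative, not a standard library routine):

```cpp
#include <cmath>
#include <vector>

// Refine a set of samples of f by inserting midpoints where the linear
// interpolant deviates from the true function by more than tolerance.
void refineSamples(std::vector<double>& x, std::vector<double>& y,
                   double (*f)(double), double tolerance) {
    for (std::size_t i = 0; i + 1 < x.size(); ) {
        double mid = 0.5 * (x[i] + x[i + 1]);
        double interpolated = y[i] + (y[i + 1] - y[i]) * (mid - x[i]) / (x[i + 1] - x[i]);
        if (std::fabs(interpolated - f(mid)) > tolerance) {
            // Error too large: insert the midpoint sample and re-check this interval
            x.insert(x.begin() + i + 1, mid);
            y.insert(y.begin() + i + 1, f(mid));
        } else {
            ++i;  // interval is accurate enough; move on
        }
    }
}
```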
It is important to note that interpolation error cannot be completely eliminated, especially when dealing with complex functions or sparse data. However, by understanding the sources of interpolation error and optimizing the interpolation process, we can minimize the error and obtain more accurate estimates of the function values.
## Exercise
Consider a set of data points in one dimension:
```
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.0, 6.0, 8.0, 10.0]
```
Using the linear interpolation function `linearInterpolation` defined earlier (the one-dimensional version that takes the data arrays, their size, and a target value), calculate the mean squared error (MSE) between the interpolated values and the true values at the data points.
### Solution
The MSE is calculated as follows:
```cpp
double mse = 0.0;
for (int i = 0; i < 5; i++) {
    double interpolatedValue = linearInterpolation(x, y, 5, x[i]);
    mse += pow(interpolatedValue - y[i], 2);  // pow requires <cmath>
}
mse /= 5;
```
The MSE is 0.0. Linear interpolation always reproduces the data exactly at the data points themselves, so evaluating the error only at the nodes gives zero regardless of the underlying function; a more informative test would evaluate the error at points between the nodes, where here it would also be zero because the data lie on a straight line.
# Real-world applications of data interpolation
1. **Geographic Information Systems (GIS)**: GIS is a field that deals with the collection, analysis, and interpretation of geographic data. Interpolation techniques are commonly used to estimate values at unobserved locations based on known data points. For example, in environmental studies, interpolation can be used to estimate pollution levels at locations where measurements are not available.
2. **Weather Forecasting**: Weather forecasting relies on accurate estimation of weather conditions at different locations and times. Interpolation is used to fill in the gaps between weather stations and provide continuous weather predictions. By interpolating weather data from multiple stations, meteorologists can create detailed weather maps and forecasts.
3. **Image Processing**: In image processing, interpolation is used to resize images, enhance image quality, and perform image registration. Interpolation techniques can be used to increase the resolution of an image or fill in missing pixels. This is particularly useful in medical imaging, where high-resolution images are required for accurate diagnosis.
4. **Finance and Economics**: Interpolation is widely used in finance and economics to estimate missing values in time series data. For example, in stock market analysis, interpolation can be used to estimate stock prices at specific time points based on available data. In economic forecasting, interpolation can be used to estimate economic indicators at different time periods.
5. **Environmental Modeling**: Interpolation techniques are used in environmental modeling to estimate variables such as temperature, precipitation, and air quality. These estimates are crucial for understanding the impact of environmental factors on ecosystems and human health. Interpolation can also be used to predict the spread of pollutants or the movement of wildlife based on observed data.
6. **Engineering and Manufacturing**: Interpolation is used in engineering and manufacturing to estimate values between measured data points. For example, in computer-aided design (CAD), interpolation can be used to create smooth curves and surfaces based on a limited set of data points. In robotics, interpolation can be used to control the movement of robotic arms and achieve smooth trajectories.
These are just a few examples of the many real-world applications of data interpolation. Interpolation techniques are versatile and can be applied to solve a wide range of problems in various fields. By accurately estimating values between known data points, interpolation enables us to make informed decisions and predictions based on limited information. | Textbooks |
Black holes in symmetric spaces: anti-de Sitter spaces
Laurent Claessens,Stéphane Detournay
Mathematics, 2005
Abstract: Using symmetric space techniques, we show that closed orbits of the Iwasawa subgroups of $SO(2,l-1)$ naturally define singularities of a black hole causal structure in anti-de Sitter spaces in $l \geq 3$ dimensions. In particular, we recover for $l=3$ the non-rotating massive BTZ black hole. The method presented here is very simple and in principle generalizable to any semi-simple symmetric space.
Boundary conditions for spacelike and timelike warped AdS_3 spaces in topologically massive gravity
Geoffrey Compère,Stéphane Detournay
Physics, 2009, DOI: 10.1088/1126-6708/2009/08/092
Abstract: We propose a set of consistent boundary conditions containing the spacelike warped black holes solutions of Topologically Massive Gravity. We prove that the corresponding asymptotic charges whose algebra consists in a Virasoro algebra and a current algebra are finite, integrable and conserved. A similar analysis is performed for the timelike warped AdS_3 spaces which contain a family of regular solitons. The energy of the boundary Virasoro excitations is positive while the current algebra leads to negative (for the spacelike warped case) and positive (for the timelike warped case) energy boundary excitations. We discuss the relationship with the Brown-Henneaux boundary conditions.
The deformation quantizations of the hyperbolic plane
Pierre Bieliavsky,Stéphane Detournay,Philippe Spindel
Physics, 2008, DOI: 10.1007/s00220-008-0697-9
Abstract: We describe the space of (all) invariant deformation quantizations on the hyperbolic plane as solutions of the evolution of a second order hyperbolic differential operator. The construction is entirely explicit and relies on non-commutative harmonic analytical techniques on symplectic symmetric spaces. The present work presents a unified method producing every quantization of the hyperbolic plane, and provides, in the 2-dimensional context, an exact solution to Weinstein's WKB quantization program within geometric terms. The construction reveals the existence of a metric of Lorentz signature canonically attached (or `dual') to the geometry of the hyperbolic plane through the quantization process.
Semi-classical central charge in topologically massive gravity
Physics, 2008, DOI: 10.1088/0264-9381/26/13/139801
Abstract: It is shown that the warped black holes geometries discussed recently in 0807.3040 admit an algebra of asymptotic symmetries isomorphic to the semi-direct product of a Virasoro algebra and an algebra of currents. The realization of this asymptotic symmetry by canonical charges allows one to find the central charge of the Virasoro algebra. The right-moving central charge $c_R = \frac{(5\hat{\nu}^2+3)l}{G\hat{\nu} (\hat{\nu}^2+3)}$ is obtained when the Virasoro generators are normalized in order to have a positive zero mode spectrum for the warped black holes. The current algebra is also shown to be centrally-extended.
Non-Einstein geometries in Chiral Gravity
Geoffrey Compère,Sophie de Buyl,Stéphane Detournay
Physics, 2010, DOI: 10.1007/JHEP10(2010)042
Abstract: We analyze the asymptotic solutions of Chiral Gravity (Topologically Massive Gravity at \mu l = 1 with Brown-Henneaux boundary conditions) focusing on non-Einstein metrics. A class of such solutions admits curvature singularities in the interior which are reflected as singularities or infinite bulk energy of the corresponding linear solutions. A non-linear solution is found exactly. The back-reaction induces a repulsion of geodesics and a shielding of the singularity by an event horizon but also introduces closed timelike curves.
Asymptotic symmetries of Schrödinger spacetimes
Geoffrey Compère,Sophie de Buyl,Stéphane Detournay,Kentaroh Yoshida
Abstract: We discuss the asymptotic symmetry algebra of the Schrodinger-invariant metrics in d+3 dimensions and its realization on finite temperature solutions of gravity coupled to matter fields. These solutions have been proposed as gravity backgrounds dual to non-relativistic CFTs with critical exponent z in d space dimensions. It is known that the Schrodinger algebra possesses an infinite-dimensional extension, the Schrodinger-Virasoro algebra. However, we show that the asymptotic symmetry algebra of Schrodinger spacetimes is only isomorphic to the exact symmetry group of the background. It is possible to construct from first principles finite and integrable charges that infinite-dimensionally extend the Schrodinger algebra but these charges are not correctly represented via a Dirac bracket. We briefly comment on the extension of our analysis to spacetimes with Lifshitz symmetry.
The Curious Case of Null Warped Space
Dionysios Anninos,Geoffrey Compère,Sophie de Buyl,Stéphane Detournay,Monica Guica
Abstract: We initiate a comprehensive study of a set of solutions of topologically massive gravity known as null warped anti-de Sitter spacetimes. These are pp-wave extensions of three-dimensional anti-de Sitter space. We first perform a careful analysis of the linearized stability of black holes in these spacetimes. We find two qualitatively different types of solutions to the linearized equations of motion: the first set has an exponential time dependence, the second - a polynomial time dependence. The solutions polynomial in time induce severe pathologies and moreover survive at the non-linear level. In order to make sense of these geometries, it is thus crucial to impose appropriate boundary conditions. We argue that there exists a consistent set of boundary conditions that allows us to reject the above pathological modes from the physical spectrum. The asymptotic symmetry group associated to these boundary conditions consists of a centrally-extended Virasoro algebra. Using this central charge we can account for the entropy of the black holes via Cardy's formula. Finally, we note that the black hole spectrum is chiral and prove a Birkoff theorem showing that there are no other stationary axisymmetric black holes with the specified asymptotics. We extend most of the analysis to a larger family of pp-wave black holes which are related to Schr\"odinger spacetimes with critical exponent z.
Policy iteration algorithm for zero-sum multichain stochastic games with mean payoff and perfect information
Marianne Akian,Jean Cochet-Terrasson,Sylvie Detournay,Stéphane Gaubert
Abstract: We consider zero-sum stochastic games with finite state and action spaces, perfect information, mean payoff criteria, without any irreducibility assumption on the Markov chains associated to strategies (multichain games). The value of such a game can be characterized by a system of nonlinear equations, involving the mean payoff vector and an auxiliary vector (relative value or bias). We develop here a policy iteration algorithm for zero-sum stochastic games with mean payoff, following an idea of two of the authors (Cochet-Terrasson and Gaubert, C. R. Math. Acad. Sci. Paris, 2006). The algorithm relies on a notion of nonlinear spectral projection (Akian and Gaubert, Nonlinear Analysis TMA, 2003), which is analogous to the notion of reduction of super-harmonic functions in linear potential theory. To avoid cycling, at each degenerate iteration (in which the mean payoff vector is not improved), the new relative value is obtained by reducing the earlier one. We show that the sequence of values and relative values satisfies a lexicographical monotonicity property, which implies that the algorithm does terminate. We illustrate the algorithm by a mean-payoff version of Richman games (stochastic tug-of-war or discrete infinity Laplacian type equation), in which degenerate iterations are frequent. We report numerical experiments on large scale instances, arising from the latter games, as well as from monotone discretizations of a mean-payoff pursuit-evasion deterministic differential game.
Pricing and Hedging in Stochastic Volatility Regime Switching Models
Stéphane Goutte
Journal of Mathematical Finance (JMF), 2013, DOI: 10.4236/jmf.2013.31006
We consider general regime switching stochastic volatility models where both the asset and the volatility dynamics depend on the values of a Markov jump process. Due to the stochastic volatility and the Markov regime switching, this financial market is thus incomplete and perfect pricing and hedging of options are not possible. Thus, we are interested in finding formulae to solve the problem of pricing and hedging options in this framework. For this, we use the local risk minimization approach to obtain pricing and hedging formulae based on solving a system of partial differential equations. Then we get also formulae to price volatility and variance swap options on these general regime switching stochastic volatility models.
Analysis of Relationships between Port Activity and Other Sectors of the Economy: Evidence from Cote d'Ivoire
Nomel Paul Stéphane Essoh
American Journal of Industrial and Business Management (AJIBM), 2013, DOI: 10.4236/ajibm.2013.33042
This research paper aims to study the correlation between the port activity and the activity of the different services sectors. By comparing trends between them and analyzing the causality relationships between the port traffic and the other economic sectors, our study tends to present how the activity of the port of Abidjan could have a decisive effect on the local economy. To meet our objectives, correlation analysis and statistical test tools Eviews and other techniques have been run with data provided by local agencies and port authority. By doing so, our research study finds that there is existing correlation between port activity and activity generated by the other services sectors and its contribution can accelerate the economic growth. | CommonCrawl |
Zh. Vychisl. Mat. Mat. Fiz., 2010, Volume 50, Number 3, Pages 563–574 (Mi zvmmf4852)
A higher-order conservative method for computing the Poiseuille flow of a rarefied gas in a channel of arbitrary cross section
V. A. Titarev^a, E. M. Shakhov^b
^a Cranfield University, Cranfield, UK, MK43 0AL
^b Dorodnicyn Computing Center, Russian Academy of Sciences, ul. Vavilova 40, Moscow, 119333, Russia
Abstract: A high-order accurate method is proposed for analyzing the isothermal rarefied gas flow in an infinitely long channel with an arbitrarily shaped cross section (Poiseuille flow). The basic idea behind the method is the use of hybrid unstructured meshes in physical space and the application of a conservative technique for computing the gas velocity. Examples of calculations are provided for channels of various cross sections in a wide range of Knudsen numbers. Schemes of the first-, second-, and third orders of accuracy in space are compared.
Key words: rarefied gas, unstructured mesh, Poiseuille flow, Krook kinetic equation, $S$-model, TVD scheme.
UDC: 519.634
Revised: 07.10.2009
Citation: V. A. Titarev, E. M. Shakhov, "A higher-order conservative method for computing the Poiseuille flow of a rarefied gas in a channel of arbitrary cross section", Zh. Vychisl. Mat. Mat. Fiz., 50:3 (2010), 563–574; Comput. Math. Math. Phys., 50:3 (2010), 537–548
Sabourin, Anne
On Binary Classification in Extreme Regions
JALALZAI, Hamid, Clémençon, Stephan, Sabourin, Anne
Neural Information Processing Systems Jan-11-2020
In pattern recognition, a random label Y is to be predicted based upon observing a random vector X valued in $\mathbb{R}^d$ with d > 1 by means of a classification rule with minimum probability of error. In a wide variety of applications, ranging from finance/insurance to environmental sciences through teletraffic data analysis for instance, extreme (i.e. very large) observations X are of crucial importance, while contributing in a negligible manner to the (empirical) error however, simply because of their rarity. As a consequence, empirical risk minimizers generally perform very poorly in extreme regions. It is the purpose of this paper to develop a general framework for classification in the extremes. Precisely, under non-parametric heavy-tail assumptions for the class distributions, we prove that a natural and asymptotic notion of risk, accounting for predictive performance in extreme regions of the input space, can be defined and show that minimizers of an empirical version of a non-asymptotic approximant of this dedicated risk, based on a fraction of the largest observations, lead to classification rules with good generalization capacity, by means of maximal deviation inequalities in low probability regions. Beyond theoretical results, numerical experiments are presented in order to illustrate the relevance of the approach developed.
A Multivariate Extreme Value Theory Approach to Anomaly Clustering and Visualization
Chiapino, Maël, Clémençon, Stéphan, Feuillard, Vincent, Sabourin, Anne
arXiv.org Machine Learning Jul-17-2019
In a wide variety of situations, anomalies in the behaviour of a complex system, whose health is monitored through the observation of a random vector X = (X1, ..., Xd) valued in R^d, correspond to the simultaneous occurrence of extreme values for certain subgroups $\alpha$ $\subset$ {1, ..., d} of variables Xj. Under the heavy-tail assumption, which is precisely appropriate for modeling these phenomena, statistical methods relying on multivariate extreme value theory have been developed in the past few years for identifying such events/subgroups. This paper exploits this approach much further by means of a novel mixture model that permits to describe the distribution of extremal observations and where the anomaly type $\alpha$ is viewed as a latent variable. One may then take advantage of the model by assigning to any extreme point a posterior probability for each anomaly type $\alpha$, defining implicitly a similarity measure between anomalies. It is explained at length how the latter permits to cluster extreme observations and obtain an informative planar representation of anomalies using standard graph-mining tools. The relevance and usefulness of the clustering and 2-d visual display thus designed is illustrated on simulated datasets and on real observations as well, in the aeronautics application domain.
Principal Component Analysis for Multivariate Extremes
Drees, Holger, Sabourin, Anne
arXiv.org Machine Learning Jun-26-2019
The first order behavior of multivariate heavy-tailed random vectors above large radial thresholds is ruled by a limit measure in a regular variation framework. For a high dimensional vector, a reasonable assumption is that the support of this measure is concentrated on a lower dimensional subspace, meaning that certain linear combinations of the components are much likelier to be large than others. Identifying this subspace and thus reducing the dimension will facilitate a refined statistical analysis. In this work we apply Principal Component Analysis (PCA) to a re-scaled version of radially thresholded observations. Within the statistical learning framework of empirical risk minimization, our main focus is to analyze the squared reconstruction error for the exceedances over large radial thresholds. We prove that the empirical risk converges to the true risk, uniformly over all projection subspaces. As a consequence, the best projection subspace is shown to converge in probability to the optimal one, in terms of the Hausdorff distance between their intersections with the unit sphere. In addition, if the exceedances are re-scaled to the unit ball, we obtain finite sample uniform guarantees to the reconstruction error pertaining to the estimated projection sub-space. Numerical experiments illustrate the relevance of the proposed framework for practical purposes.
Max K-armed bandit: On the ExtremeHunter algorithm and beyond
Achab, Mastane, Clémençon, Stephan, Garivier, Aurélien, Sabourin, Anne, Vernade, Claire
This paper is devoted to the study of the max K-armed bandit problem, which consists in sequentially allocating resources in order to detect extreme values. Our contribution is twofold. We first significantly refine the analysis of the ExtremeHunter algorithm carried out in Carpentier and Valko (2014), and next propose an alternative approach, showing that, remarkably, Extreme Bandits can be reduced to a classical version of the bandit problem to a certain extent. Beyond the formal analysis, these two approaches are compared through numerical experiments.
Sparse Representation of Multivariate Extremes with Applications to Anomaly Ranking
Goix, Nicolas, Sabourin, Anne, Clémençon, Stéphan
arXiv.org Machine Learning Mar-31-2016
Extremes play a special role in Anomaly Detection. Beyond inference and simulation purposes, probabilistic tools borrowed from Extreme Value Theory (EVT), such as the angular measure, can also be used to design novel statistical learning methods for Anomaly Detection/ranking. This paper proposes a new algorithm based on multivariate EVT to learn how to rank observations in a high dimensional space with respect to their degree of 'abnormality'. The procedure relies on an original dimension-reduction technique in the extreme domain that possibly produces a sparse representation of multivariate extremes and makes it possible to gain insight into the dependence structure thereof, escaping the curse of dimensionality. The representation output by the unsupervised methodology we propose here can be combined with any Anomaly Detection technique tailored to non-extreme data. As it scales linearly with the dimension and almost linearly in the data (in O(dn log n)), it is suited to large scale problems. The approach in this paper is novel in that EVT has never been used in its multivariate version in the field of Anomaly Detection. Illustrative experimental results provide strong empirical evidence of the relevance of our approach.
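A rough sketch of the dimension-reduction step, in the spirit of the algorithm described above rather than a faithful reimplementation: margins are rank-standardized, the k largest points are kept, and each is assigned to the subset of coordinates comparable to its maximum. The value of `k` and the tolerance `eps` are illustrative assumptions.

```python
import numpy as np
from collections import Counter
from scipy.stats import rankdata

rng = np.random.default_rng(2)
n, d = 10000, 4
X = rng.pareto(2.0, size=(n, d)) + 1.0
X[:, 3] = X[:, 0] + rng.normal(scale=0.1, size=n)  # coordinates 0 and 3 are large together

# Standardize margins to (approximately) unit Pareto via ranks.
V = np.column_stack([1.0 / (1.0 - rankdata(X[:, j]) / (n + 1)) for j in range(d)])

k = 200
r = V.max(axis=1)                                  # sup-norm radius
extremes = V[np.argsort(r)[-k:]]
eps = 0.1

# For each extreme point, record which coordinates are comparable to the max:
# the resulting subsets of {0, ..., d-1} approximate the sub-cones carrying mass.
subsets = Counter(
    tuple(np.nonzero(row >= eps * row.max())[0]) for row in extremes
)
print(subsets.most_common(5))
```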
Sparsity in Multivariate Extremes with Applications to Anomaly Detection
Capturing the dependence structure of multivariate extreme events is a major concern in many fields involving the management of risks stemming from multiple sources, e.g. portfolio monitoring, insurance, environmental risk management and anomaly detection. One convenient (non-parametric) characterization of extremal dependence in the framework of multivariate Extreme Value Theory (EVT) is the angular measure, which provides direct information about the probable 'directions' of extremes, that is, the relative contribution of each feature/coordinate of the 'largest' observations. Modeling the angular measure in high dimensional problems is a major challenge for the multivariate analysis of rare events. The present paper proposes a novel methodology aiming at exhibiting a sparsity pattern within the dependence structure of extremes. This is done by estimating the amount of mass spread by the angular measure on representative sets of directions, corresponding to specific sub-cones of $\mathbb{R}^d_+$. This dimension reduction technique paves the way towards scaling up existing multivariate EVT methods. Beyond a non-asymptotic study providing a theoretical validity framework for our method, we propose as a direct application a first anomaly detection algorithm based on multivariate EVT. This algorithm builds a sparse 'normal profile' of extreme behaviours, to be confronted with new (possibly abnormal) extreme observations. Illustrative experimental results provide strong empirical evidence of the relevance of our approach.
On Anomaly Ranking and Excess-Mass Curves
arXiv.org Machine Learning Feb-5-2015
Learning how to rank multivariate unlabeled observations depending on their degree of abnormality/novelty is a crucial problem in a wide range of applications. In practice, it generally consists in building a real valued "scoring" function on the feature space so as to quantify to what extent observations should be considered as abnormal. In the 1-d situation, measurements are generally considered as "abnormal" when they are remote from central measures such as the mean or the median. Anomaly detection then relies on tail analysis of the variable of interest. Extensions to the multivariate setting are far from straightforward and it is precisely the main purpose of this paper to introduce a novel and convenient (functional) criterion for measuring the performance of a scoring function regarding the anomaly ranking task, referred to as the Excess-Mass curve (EM curve). In addition, an adaptive algorithm for building a scoring function based on unlabeled data $X_1, \ldots, X_n$ with a nearly optimal EM is proposed and is analyzed from a statistical perspective.
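An illustrative sketch of an empirical EM curve for a fixed scoring function; approximating the Lebesgue volume of the level sets by Monte Carlo over a bounding box is an assumption made here for simplicity, and the toy data and scoring function are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 2))                     # unlabeled observations

def score(x):
    return -np.linalg.norm(x, axis=1)              # toy scoring function

lo, hi = X.min(0) - 1, X.max(0) + 1
U = lo + (hi - lo) * rng.random((50000, 2))        # uniform points in the bounding box
box_vol = np.prod(hi - lo)

s_data, s_unif = score(X), score(U)
levels = np.quantile(s_data, np.linspace(0.01, 0.99, 99))

def em_curve(t):
    # EM_n(t) = sup_u [ P_n(s > u) - t * Leb(s > u) ], over a grid of levels u
    mass = np.array([(s_data > u).mean() for u in levels])
    vol = np.array([(s_unif > u).mean() * box_vol for u in levels])
    return np.max(mass - t * vol)

print([round(em_curve(t), 3) for t in (0.01, 0.05, 0.1)])
```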
A multimodal deep learning framework using local feature representations for face recognition
Alaa S. Al-Waisy, Rami Qahwaji, Stanley Ipson & Shumoos Al-Fahdawi
Machine Vision and Applications, volume 29, pages 35–54 (2018)
The most recent face recognition systems are mainly dependent on feature representations obtained using either local handcrafted descriptors, such as local binary patterns (LBP), or a deep learning approach, such as the deep belief network (DBN). However, the former usually suffers from the wide variations in face images, while the latter usually discards the local facial features, which are proven to be important for face recognition. In this paper, a novel framework based on merging the advantages of the local handcrafted feature descriptors with the DBN is proposed to address the face recognition problem in unconstrained conditions. Firstly, a novel multimodal local feature extraction approach based on merging the advantages of the Curvelet transform with the Fractal dimension is proposed and termed the Curvelet–Fractal approach. The main motivation of this approach is that the Curvelet transform, a new anisotropic and multidirectional transform, can efficiently represent the main structure of the face (e.g., edges and curves), while the Fractal dimension is one of the most powerful texture descriptors for face images. Secondly, a novel framework is proposed, termed the multimodal deep face recognition (MDFR) framework, to learn additional feature representations by training a DBN on top of the local feature representations instead of the pixel intensity representations. We demonstrate that representations acquired by the proposed MDFR framework are complementary to those acquired by the Curvelet–Fractal approach. Finally, the performance of the proposed approaches has been evaluated by conducting a number of extensive experiments on four large-scale face datasets: the SDUMLA-HMT, FERET, CAS-PEAL-R1, and LFW databases. The results obtained from the proposed approaches outperform other state-of-the-art approaches (e.g., LBP, DBN, WPCA), achieving new state-of-the-art results on all the employed datasets.
In recent years, there has been a growing interest in highly secure and well-designed face recognition systems, due to their potentially wide applications in many sensitive settings, such as controlling access to physical and virtual spaces in both commercial and military organizations, including ATM cash dispensers, e-learning, information security, intelligent surveillance, and other daily human applications [1]. In spite of the significant improvement in the performance of face recognition over previous decades, it is still a challenging task for the research community, especially when face images are taken in unconstrained conditions, due to the large intra-personal variations, such as changes in facial expression, pose, illumination, and aging, and the small interpersonal differences. Face recognition systems encompass two fundamental stages: feature extraction and classification. The second stage is dependent on the first. Therefore, the task of extracting and learning useful and highly discriminating facial features, in order to minimize intra-personal variations and maximize interpersonal differences, is complicated.
In this regard, a number of approaches have been proposed, implemented, and refined to address all these drawbacks and problems in face recognition systems. These approaches can be divided into two categories: local handcrafted-descriptor approaches and deep learning-based approaches. Local handcrafted-descriptor approaches can be further divided into four groups: feature-based, holistic-based, learning-based, and hybrid-based approaches [2]. In the first category, a geometric vector representing the facial features is extracted by measuring and computing the locations and geometric relationships among facial features, such as the mouth, eyes, and nose, and using it as an input to a structural classifier. The elastic bunch graph matching (EBGM) system is an example of a feature-based method, which uses the responses of Gabor filters at different orientations and frequencies at each facial feature point to extract a set of local features [3, 4]. Compared with the feature-based approaches, the holistic methods usually extract the feature vector by operating on the whole face image instead of measuring local geometric features. The eigenface methods are the best-known examples of these approaches, represented by principal component analysis (PCA), independent component analysis (ICA), etc. [5]. The third category, learning-based approaches, learns features from labeled training samples using machine learning techniques. Finally, the hybrid approaches are based on combinations of two or more of these categories. Some examples of the third and fourth categories can be found in [6,7,8]. Previous research has demonstrated the efficiency of local handcrafted descriptors used as robust and discriminative feature detectors to solve the face recognition problem even when relatively few training samples per person are available, as in [9,10,11]. However, the performance of local handcrafted-descriptor approaches declines dramatically in unconstrained conditions, due to the fact that the constructed face representations are very sensitive to the highly nonlinear intra-personal variations, such as expression, illumination, pose, and occlusion [12]. To address these drawbacks, considerable attention has been paid to the use of deep learning approaches (e.g., deep neural networks) to automatically learn a set of effective feature representations through hierarchical nonlinear mappings, which can robustly handle the nonlinear (intra- and interpersonal) variations of face images. Moreover, in contrast to handcrafted-descriptor approaches, applications making use of deep learning approaches can generalize well to other new fields [13]. The DBN is one of the most popular unsupervised deep learning methods and has been successfully applied to learn hierarchical representations from unlabeled data in a wide range of fields, including face recognition [14], speech recognition [15], audio classification [16], and natural language understanding [17]. However, a key limitation of the DBN, when the pixel intensity values are assigned directly to the visible units, is that its feature representations are sensitive to local translations of the input image. This can lead to discarding local features of the input image known to be important for face recognition. Furthermore, scaling the DBN to work with realistic-sized images (e.g., \(128\times 128\)) is computationally expensive and impractical.
To improve the generalization ability and reduce the computational complexity of the DBN, a novel framework based on merging the advantages of the local handcrafted feature descriptors with the DBN is proposed to address the face recognition problem in unconstrained conditions. We argue that applying the DBN on top of preprocessed image feature representations instead of the pixel intensity representations (raw data), as a way of guiding the learning process, can greatly improve the ability of the DBN to learn more discriminating features with less training time required to obtain the final trained model. To the authors' best knowledge, very few publications can be found in the literature that discuss the potential of applying the DBN on top of preprocessed image feature representations. Huang et al. [18] have demonstrated that applying the convolutional DBN on top of the output of LBP can increase the accuracy rate of the final system. Li et al. [19] have also reached to the same conclusion by applying the DBN on top of center-symmetric local binary pattern (CS-LBP). However, the work in [18] was applied only to the face verification task, while the work in [19] was evaluated on a very small face dataset where the face images were taken in controlled environments. The primary contributions of the work presented here can be summarized as follows:
1. A novel multimodal local feature extraction approach is proposed, based on merging the advantages of multidirectional and anisotropic transforms, specifically the Curvelet transform, with the Fractal dimension. Termed the Curvelet–Fractal approach, it differs from previously published Curvelet-based face recognition systems, which extract only global features from the face image. The proposed method efficiently extracts local features, along with the face texture roughness and surface fluctuations, by exploiting Fractal dimension properties such as self-similarity. There are three main differences from the previous conference version [20] of this work. Firstly, unlike [20], which used only the coarse band of the Curvelet transform as input to the Fractal dimension stage, here we also use the other Curvelet sub-band features, which represent the most significant information in the face image (e.g., face curves) and are known to be crucial in the recognition process. Secondly, a new Fractal dimension method is proposed based on an improved differential box counting (IDBC) method, in order to calculate the Fractal dimension values from the newly added Curvelet sub-bands and handle their high dimensionality. Then, the outputs of the IDBC and fractional Brownian motion (FBM) methods are combined to build an elementary feature vector. Finally, we propose to use the quadratic discriminant classifier (QDC) instead of the K-nearest neighbor (K-NN) classifier, because this improves the accuracy of the proposed system.
2. A novel framework is proposed, termed the multimodal deep face recognition (MDFR) framework, to learn additional and complementary feature representations by training a deep neural network (e.g., a DBN) on top of the Curvelet–Fractal approach instead of the pixel intensity representation. We demonstrate that the proposed framework can represent large face images, with the time required to obtain the final trained model significantly reduced compared to the direct use of the raw data. Furthermore, the proposed framework is able to efficiently handle the nonlinear (intra- and interpersonal) variations of face images and is unlikely to overfit to the training data due to the nonlinearity of a DBN.
3. The performance of the proposed approaches has been evaluated by conducting a number of extensive experiments on four large-scale unconstrained face datasets: SDUMLA-HMT, FERET, CAS-PEAL-R1, and LFW. We are able to achieve a recognition rate comparable to state-of-the-art methods using the Curvelet–Fractal approach alone. We demonstrate that the feature representations acquired by the DBN as a deep learning approach are complementary to those acquired by the handcrafted Curvelet–Fractal approach. Thus, the state-of-the-art results on the employed datasets have been further improved by combining these two representations.
This paper focuses mainly on two different problems in the face recognition system: face identification and face verification. In this paper, the term face recognition will be used in the general case to refer to these two problems. The remainder of the paper is organized as follows: Sect. 2 is devoted to providing an overview of the proposed handcrafted-descriptors and deep learning approaches. Section 3 shows the implementation details of the proposed approaches. The experimental results are presented in Sect. 4. Finally, conclusions and future research directions are stated in the last section.
In this section, a brief description of the proposed approaches is presented, including the Curvelet transform and the Fractal dimension method used in the proposed multimodal local feature extraction approach, as well as the proposed deep learning approach, comprising the DBN and its building block, the restricted Boltzmann machine (RBM). The primary goal here is to review and recognize their strengths and shortcomings, to enable the proposal of a novel face recognition framework that consolidates the strengths of these approaches.
Curvelet transform
In recent years, many multiresolution approaches have been proposed for facial feature extraction at different scales, aiming to improve face recognition performance. The wavelet transform is one of the most popular multiresolution feature extraction methods due to its ability to provide significant features in both space and transform domains. However, according to many studies in the human visual system and image analysis, the wavelet transform is not ideal for facial feature extraction. A feature extraction method cannot be optimal without satisfying conditions relating to the following: multiresolution, localization, critical sampling, directionality, and anisotropy. It is believed that the wavelet transform cannot fulfill the last two conditions due to limitations of its basis functions in specifying direction and the isotropic scale [21]. These restrictions lead to weak representation of the edges and curves which are considered to be the most important facial features. Thus, a novel transform was developed by Candes and Donoho in 1999 known as the Curvelet transform [22]. Their motivation was to overcome the drawbacks and limitations of widely used multiresolution methods such as the wavelet and ridgelet transforms. All the above five conditions can be fulfilled using Curvelet transform.
The Curvelet transform has been successfully applied to solve many problems in the image processing area such as texture classification [23], preserving edges and image enhancement [24], image compression [25], image fusion [26], and image de-noising [27]. Some work has been done to explore the potential of the Curvelet transform to help solve pattern recognition problems, for example by Lee and Chen [28], Mandal and Wu [29] and Xie [30]. These showed that the Curvelet transform can serve as a good feature extraction method for pattern recognition problems like fingerprint and face recognition due to its ability to represent crucial edges and curve features more efficiently than other transformation methods. However, the Curvelet transform suffers from the effects of significant variety in pose, lighting conditions, shadows, and occlusions from wearing glasses or hats. Hence, the Curvelet transform is not able to describe the face texture roughness and fluctuations in the surface efficiently, which will have a significant effect on the recognition rate. All these factors together were behind the adoption here of the Fractal dimension to provide a better description of the face texture under unconstrained environmental conditions.
Fractal dimension
The term Fractal dimension was first introduced by the mathematician Benoit Mandelbrot as a geometrical quantity to describe the complexity of objects that show self-similarity at different scales [31]. The Fractal dimension has some important properties, such as self-similarity, which means that an object has a similar representation to the original under different magnifications. This property can be used to reflect the roughness and fluctuation of an image's surface, where increasing the scale of magnification provides more and more details of the imaged surface. In addition, the noninteger value of the Fractal dimension gives a quantitative measure of objects that have complex geometry and cannot be well described by an integral dimension (such as the length of a coastline) [32, 33]. Many methods have been proposed to calculate the Fractal dimension, such as box counting (BC), differential box counting (DBC), and fractional Brownian motion (FBM); other methods can be found in [31]. The Fractal dimension has been widely applied in many areas of image processing and computer vision, such as texture segmentation [34], medical imaging [35], and face detection [36]. However, not much work has been done to explore the potential of using the Fractal dimension to resolve pattern recognition problems. Lin et al. [37] proposed an algorithm for human eye detection by exploiting the Fractal dimension as an efficient approach for representing the texture of facial features. Farhan et al. [38] developed a personal identification system based on fingerprint images using the Fractal dimension as a feature extraction method. Therefore, it appears that the texture of the facial image can be efficiently described using the Fractal dimension. However, Fractal estimation methods are very time consuming and cannot meet real-time requirements. To address all the limitations and drawbacks noted in Sects. 2.1 and 2.2, a novel face recognition algorithm based on merging the advantages of a multidirectional and anisotropic transform, specifically the Curvelet transform, with the Fractal dimension is proposed.
Fig. 1 a A typical RBM structure. b A discriminative RBM modeling the joint distribution of input variables and target classes. c Greedy layer-wise training algorithm for a DBN composed of three stacked RBMs. d Three layers of the DBN as a generative model, where the top-down generative path is represented by the P distributions (solid arcs) and the bottom-up inference and training path by the Q distributions (dashed arcs)
Deep learning approaches
In 2006, a new type of deep neural network (DNN), called the deep belief network (DBN), was introduced by Hinton et al. [39]. The DBN is a generative probabilistic model that differs from conventional discriminative neural networks. DBNs are composed of one visible layer (observed data) and many hidden layers that have the ability to learn the statistical relationships between the units in the previous layer. As depicted in Fig. 1c, a DBN can be viewed as a composition of bipartite undirected graphical models, each of which is a restricted Boltzmann machine (RBM). An RBM is an energy-based bipartite graphical model composed of two layers fully connected via symmetric undirected edges, with no connections between units of the same layer. The first layer consists of m visible units \(v = (v_1, v_2, \ldots, v_m)\) that represent observed data, while the second layer consists of n hidden units \(h = (h_1, h_2, \ldots, h_n)\) that can be viewed as nonlinear feature detectors capturing higher-order correlations in the observed data. In addition, \(W = \{w_{ij}\}\) is the matrix of connecting weights between the visible and hidden units. A typical RBM structure is shown in Fig. 1a. The standard RBM was designed to use only binary stochastic visible units, and is the so-called Bernoulli RBM (BRBM). However, using binary units is not suitable for real-valued data (e.g., pixel intensity values in images). Therefore, a new model called the Gaussian RBM (GRBM) has been developed to address this limitation of the standard RBM [40]. The energy function of the GRBM is defined as follows:
$$E(v, h) = -\sum_{i=1}^{m} \sum_{j=1}^{n} w_{ij}\, h_j \frac{v_i}{\sigma_i} - \sum_{i=1}^{m} \frac{(v_i - b_i)^2}{2\sigma_i^2} - \sum_{j=1}^{n} c_j h_j \quad (1)$$
Here, \(\sigma_i\) is the standard deviation of the Gaussian noise for the visible unit \(v_i\), \(w_{ij}\) is the weight between the visible unit \(v_i\) and the hidden unit \(h_j\), and \(b_i\) and \(c_j\) are the biases for the visible and hidden units, respectively. The conditional probabilities for the visible units given the hidden units, and vice versa, are defined as follows:
$$p(v_i = v \mid h) = N\left(v \,\middle|\, b_i + \sum_{j} w_{ij} h_j,\ \sigma_i^2\right) \quad (2)$$

$$p(h_j = 1 \mid v) = f\left(c_j + \sum_{i} w_{ij} \frac{v_i}{\sigma_i^2}\right) \quad (3)$$
Here, \(N(\cdot \mid \mu, \sigma^2)\) denotes the Gaussian probability density function with mean \(\mu\) and variance \(\sigma^2\), and \(f(\cdot)\) is the logistic sigmoid function. During the training process, the log-likelihood of the training data is maximized using stochastic gradient descent, and the update rule for the weights is defined as follows:
$$\Delta w_{ij} = \epsilon \left( \langle v_i h_j \rangle_{\mathrm{data}} - \langle v_i h_j \rangle_{\mathrm{model}} \right) \quad (4)$$
Here, \(\epsilon\) is the learning rate, and \(\langle v_i h_j \rangle_{\mathrm{data}}\) and \(\langle v_i h_j \rangle_{\mathrm{model}}\) represent the expectations under the distribution specified by the input data and by the internal representations of the RBM model, respectively. As reported in the literature, RBMs can be used in two different ways: either as generative models or as discriminative models, as shown in Fig. 1a, b.
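A minimal CD-1 sketch for a Gaussian–Bernoulli RBM following Eqs. (1)–(4), assuming unit variances (\(\sigma_i = 1\)) and omitting bias updates for brevity; it is an illustration of the update rule, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_hid, eps = 64, 32, 0.01                 # visible units, hidden units, learning rate
W = 0.02 * rng.normal(size=(m, n_hid))
b, c = np.zeros(m), np.zeros(n_hid)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0):
    ph0 = sigmoid(c + v0 @ W)                # up-pass, Eq. (3)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    v1 = b + h0 @ W.T + rng.normal(size=v0.shape)   # down-pass, Gaussian p(v|h), Eq. (2)
    ph1 = sigmoid(c + v1 @ W)
    # Eq. (4): data expectation minus model expectation, after one Gibbs step
    return eps * (v0.T @ ph0 - v1.T @ ph1) / len(v0)

batch = rng.normal(size=(100, m))            # one mini-batch of real-valued inputs
W += cd1_step(batch)
```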
Generally, DBNs can be efficiently trained using an unsupervised greedy layer-wise algorithm, in which the stacked RBMs are trained one at a time in a bottom-to-top manner. For instance, consider training a DBN composed of three hidden layers, as shown in Fig. 1c. According to the greedy layer-wise training algorithm proposed by Hinton et al. [39], the first RBM is trained using the contrastive divergence (CD) algorithm to learn a layer \(h_1\) of feature representations from the visible units, as described in [39]. Then, the hidden layer units \(h_1\) of the first RBM are used as visible units to train the second RBM. The whole DBN is trained when the learning of the final hidden layer is completed. A DBN with l layers can model the joint distribution between the observed data vector v and the l hidden layers \(h_k\) as follows:
$$P(v, h^1, \ldots, h^l) = \left( \prod_{k=0}^{l-2} P(h^k \mid h^{k+1}) \right) P(h^{l-1}, h^l) \quad (5)$$
Here, \(v = h^0\), \(P(h^k \mid h^{k+1})\) is the conditional distribution for the visible units given the hidden units of the RBM associated with level k of the DBN, and \(P(h^{l-1}, h^l)\) is the visible–hidden joint distribution in the top-level RBM. An example of a three-layer DBN as a generative model is shown in Fig. 1d, where the symbol Q denotes the exact or approximate posteriors of that model, which are used for bottom-up inference. During bottom-up inference, the Q posteriors are all approximate except for the top level \(P(h^l \mid h^{l-1})\), which is formed as an RBM so that exact inference is possible.
Like any deep learning approach, the DBN is usually applied directly on the pixel intensity representations. However, although DBN has been successfully applied in many different fields, scaling it to realistic-sized face images still remains a challenging task for several reasons. Firstly, the high dimensionality of the face image leads to increased computational complexity of the training algorithm. Secondly, the feature representations of the DBN are sensitive to the local translations of the input image. This can lead to a disregard of the local features of the input image, which are known to be important for face recognition. To address these issues of the DBN, a novel framework based on merging the advantages of the local handcrafted image descriptors and the DBN is proposed.
The proposed framework
As depicted in Fig. 2, a novel face recognition framework named the multimodal deep face recognition (MDFR) framework is proposed to learn high-level facial feature representations by training a DBN on top of a local Curvelet–Fractal representation instead of the pixel intensity representation. First, the main stages of the proposed Curvelet–Fractal approach are described in detail. This is followed by a description of how additional and complementary representations are learned by applying a DBN on top of the existing local representations.
Fig. 2 Illustration of the proposed Curvelet–Fractal approach with the MDFR framework
The proposed Curvelet–Fractal approach
The proposed face recognition algorithm starts by detecting the face region using a Viola–Jones face detector [41]. Detecting the face region in a complex background is not one of our contributions in this paper. Then a simple preprocessing algorithm using a sigmoid function is applied. The advantage of the sigmoid function is that it reduces the effect of illumination changes by expanding and compressing the range of values of the dark and bright pixels in the face image, respectively. In other words, it compresses the dynamic range of the light intensity levels and spreads the pixel values more uniformly. This operation has increased the average recognition rate by 6%. After that, the proposed Curvelet–Fractal approach is applied to the enhanced face image. As indicated above, the Fractal dimension has many important properties, such as its ability to reflect the roughness and fluctuations of a face image's surface and to represent the facial features under different environmental conditions (e.g., illumination changes). However, the Fractal estimation methods can be very time consuming, and the high dimensionality of the face image makes them less suited to meeting real-time requirements. Therefore, the Fractal dimension approach is applied to the Curvelet output to produce an illumination-insensitive representation of the face image that can meet real-time system demands. Hence, the Curvelet transform is used here as a powerful technique for edge and curve representation and for dimensionality reduction of the face image, to increase the speed of Fractal dimension estimation.
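One possible reading of the sigmoid pre-processing step in Python; the gain `g` and the centring on the image mean are assumptions, since the exact parameterization is not given here.

```python
import numpy as np

def sigmoid_normalize(img, g=10.0):
    """Compress bright pixels and lift dark ones with a logistic curve."""
    x = img.astype(float) / 255.0
    out = 1.0 / (1.0 + np.exp(-g * (x - x.mean())))      # assumed mid-point: image mean
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return (255 * out).astype(np.uint8)
```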
In this work, two different methods to estimate the Fractal dimension are proposed, based on the FBM and IDBC methods. The FBM method is used to process only the approximation coefficients (coarse band) of the Curvelet transform, while the IDBC method is used to process the newly added Curvelet sub-bands and handle their high dimensionality. Then, the outputs of the FBM and IDBC methods are combined to build an elementary feature vector of the input image. After the Fractal dimension feature vector \(FD_{Vector}\) is obtained, a simple normalization procedure is applied to scale the obtained features to the common range (0, 1), as follows:
$$\widetilde{FD}_{Vector} = \frac{FD_{Vector} - \min(FD_{Vector})}{\max(FD_{Vector}) - \min(FD_{Vector})} \quad (6)$$
The main advantage of this scaling is to avoid features with greater numeric ranges dominating those with smaller numeric ranges, which can decrease the recognition accuracy. This procedure has increased the average recognition rate by 5%. Finally, the quadratic discriminant classifier (QDC) and correlation coefficient (CC) classifiers are used in the recognition tasks. The main steps of the proposed Curvelet–Fractal approach for an input face image can be summarized as follows:
1. The sigmoid function is applied to enhance the face image illumination.
2. The Curvelet transform is applied to the image from step 1, so the input image is decomposed into 4 scales and 8 orientations. In this work, the Curvelet sub-bands are divided into three sets, as explained in Sect. 3.1.1.
3. The FBM method is applied to a contrast-enhanced version of the coarse band produced in step 2, and the result is then reshaped into a row feature vector \(FBM_{Vector}\), as explained in Sect. 3.1.2.
4. The IDBC method is applied to the middle frequency bands produced in step 2, and a row feature vector \(IDBC_{Vector}\) is constructed, as explained in Sect. 3.1.3.
5. The final facial feature vector \(FD_{Vector} = \{FBM_{Vector}, IDBC_{Vector}\}\) is constructed. To obtain a uniform feature vector, a normalization procedure is applied to obtain the normalized feature vector \(\widetilde{FD}_{Vector}\).
6. The QDC and CC classifiers are used in the final recognition tasks. The former is used for the identification task, while the latter is used for the verification task.
The next three subsections describe in more detail the Curvelet transform, FBM, and IDBC methods mentioned above.
Curvelet via wrapping transform
In this work, the wrapping based Curvelet transform described below is adopted, because it is faster to compute, more robust and less redundant than the alternative ridgelet- and USFFT-based forms of Curvelet transform. Its ability to reduce the dimensionality of the data and capture the most crucial information within face images, such as edges and curves plays a significant role in increasing the recognition power of the proposed system. The major steps implemented on a face image to obtain the Curvelet coefficients are clearly described in [20].
Based on domain knowledge from the literature, suggesting that a higher scale decomposition would only increase the number of Curvelet sub-bands (coefficients) with very marginal or even no improvement in recognition accuracy, the Curvelet coefficients are generated at 4 scales and 8 orientations throughout this work. This maintains an acceptable balance between the speed and performance of the proposed system. Figure 3 shows the Curvelet decomposition coefficients of a face image of size \((128 \times 128)\) pixels taken from the FERET dataset. As indicated in Fig. 3, the output of the Curvelet transform can be divided into three sets:
1. The coarse band, containing only the low frequency (approximation) coefficients, is stored at the center of the display \((Scale_1)\). These coefficients represent the main structure of the face.
2. The Cartesian concentric coronae represent the middle frequency bands of the Curvelet coefficients at different scales, where the outer coronae correspond to the higher frequencies \((Scale_2, \ldots, Scale_{N-1})\). Each corona is represented by four strips corresponding to the four cardinal points. These strips are further subdivided into angular panels, which represent the Curvelet coefficients at a specified scale and orientation. The coefficients in these bands represent the most significant information of the face, such as edges and curves.
3. The highest frequency band \((Scale_N)\) of the face image, only indicated in Fig. 3, is at scale 4. This band has been discarded because it is dominated by noise.
From a practical point of view, the dimensionality of the Curvelet coefficients is extremely high due to the large amount of redundant and irrelevant information in each sub-band, especially in the middle frequency bands. Hence, working on such a large number of Curvelet coefficients is very expensive. A characteristic of the Curvelet transform is that it produces identical sub-band coefficients at angle \(\theta\) and \((\pi + \theta)\) for the same scale. Thus, only half of the Curvelet sub-bands need to be considered. In this work, instead of the direct use of the Curvelet coefficients, we analyze and process these coefficients using other methods. For the coarse band (the lowest frequency band), an image contrast enhancement procedure is applied, as shown in Fig. 4, to improve the illumination uniformity of the face image stored at the center of the display by stretching the overall contrast of the image between two pre-defined lower and upper cutoffs, which are empirically set to 0.11 and 0.999, respectively. This is followed by extracting the face texture roughness and surface fluctuations using the FBM method. For the middle frequency bands, the IDBC method is applied to reflect the face texture information and reduce the high dimensionality of these bands.
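One reading of the contrast-enhancement step applied to the coarse band, treating the cutoffs 0.11 and 0.999 as intensity quantiles (an assumption; the text does not state whether they are quantiles or absolute levels):

```python
import numpy as np

def stretch_contrast(coarse, lo_q=0.11, hi_q=0.999):
    """Linearly stretch the coarse band between two quantile cutoffs."""
    lo, hi = np.quantile(coarse, [lo_q, hi_q])
    return np.clip((coarse - lo) / (hi - lo + 1e-12), 0.0, 1.0)
```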
Fig. 3 Illustration of the Curvelet decomposition coefficients obtained from a face image decomposed at 4 scales and 8 orientations
Fig. 4 The top row shows the coarse-band Curvelet approximation coefficients of four images. The middle row shows the images after applying the contrast enhancement procedure, and the bottom row shows the FBM fractal-transformed images
Fractional Brownian motion method
As shown in Fig. 5, a 2D face image can be considered as a 3D spatial surface that reflects the gray-level intensity value at each pixel position, where the neighborhood region around each pixel across the face surface, covering a varying range of gray levels, can be processed as an FBM surface. The FBM is a nonstationary model and is widely used in medical imaging [33, 42] due to its power to enhance the original image and make the statistical features more distinguishable.
Fig. 5 The spatial surface corresponding to a grayscale face image
For example, in [43] it was found that employing the normalized FBM to extract feature vectors from the surfaces of five ultrasonic liver images improved the classification of normal and abnormal liver tissues. Moreover, the Fractal dimension of each pixel, calculated over the whole medical image by the normalized FBM method, could be used as a powerful edge enhancement and detection method, which can enhance the edge representation of medical images without increasing the noise level. According to Mandelbrot [32], the FBM is statistically self-affine, which means that the Fractal dimension value of the FBM is not affected by linear transformations such as scaling. Therefore, the FBM is invariant under normally observed transformations of face images.
In this work, the face image of size \((M \times N)\) is transformed to its Fractal dimension form by applying a kernel function fd(p, q) of size \((7 \times 7)\) over the entire face image, using the algorithm summarized in Fig. 6. More information on the mathematical functions of the FBM method can be found in [20]. Figure 4 shows examples of the approximation coefficients of the Curvelet transform and the resulting fractal-transformed images. After a fractal-transformed image of size \((M \times N)\) has been obtained, it is reordered into a row feature vector \(FBM_{Vector}\) for further analysis.
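Since the kernel of Fig. 6 is not reproduced here, the following is a hedged sketch of an FBM-style fractal transform: the mean absolute intensity increment is estimated locally at a few lags, the Hurst exponent H is obtained by a log–log regression, and the per-pixel fractal dimension is FD = 3 − H. The lag set and the 7×7 smoothing window are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fbm_transform(img, lags=(1, 2, 3)):
    """Per-pixel fractal dimension via a local FBM increment regression."""
    img = img.astype(float)
    logs = []
    for r in lags:
        # Mean |I(x) - I(x shifted by r)| over four directions, smoothed in a
        # 7x7 neighbourhood to mimic a local kernel.
        diffs = sum(np.abs(img - np.roll(img, s, axis=a))
                    for a in (0, 1) for s in (r, -r)) / 4.0
        logs.append(np.log(uniform_filter(diffs, size=7) + 1e-9))
    x = np.log(np.asarray(lags))
    y = np.stack(logs)                                   # shape: (len(lags), H, W)
    # Least-squares slope of log E|dI| against log r, per pixel.
    H = ((x[:, None, None] - x.mean()) * (y - y.mean(0))).sum(0) / ((x - x.mean()) ** 2).sum()
    return 3.0 - H
```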
Fig. 6 A block diagram of the implementation of the FBM method
Improved differential box counting method
The main purpose of the second Fractal method is to estimate the Fractal dimension features from the middle frequency bands of the Curvelet transform, reduce the high dimensionality of these bands, and increase the speed of the proposed system. Face recognition, like other pattern recognition systems, suffers from the so-called curse of high dimensionality. There are many reasons for reducing the feature vector size, such as providing a more efficient way of storing and processing the data as the number of training samples grows, and increasing the discriminative power of the feature vectors.
Fig. 7 Calculating the Fractal dimension using the traditional DBC [46]
The second method to compute the Fractal dimension is based on the improved differential box counting (IDBC) algorithm. The basic approach of the traditional DBC is to treat an image of size \((M \times M)\) as a 3D space in which (x, y) denotes the pixel position on the image surface and the third coordinate (z) denotes the pixel intensity. The DBC starts by scaling the image down into nonoverlapping blocks of size \((s \times s)\), where \(M/2 > s > 1\) and s is an integer, and then the Fractal dimension is calculated as follows:
$$FD = \lim_{r \to 0} \frac{\log(N_r)}{\log(1/r)} \quad (7)$$
where \(r = s/M\) is the scale of each block and \(N_r\) is the number of boxes required to entirely cover the object in the image, which is counted in the DBC method as follows. On each block there is a column of boxes of size \((s \times s \times s')\), where \(s' = s\), and each box is assigned a number \((1, 2, \ldots)\) starting from the lowest gray-level value, as shown in Fig. 7. Let the minimum and the maximum gray level of the image in the \((i, j)\)th block fall in box number k and l, respectively. The contribution of \(n_r\) in the \((i, j)\)th block is calculated as follows:
$$n_r(i, j) = l - k + 1 \quad (8)$$
The contribution \(N_r\) from all blocks is counted for different values of r as follows:
$$N_r = \sum_{i,j} n_r(i, j) \quad (9)$$
More information on this technique and its implementation can be found in [31]. The traditional DBC has several issues. The most important is how to choose the best size of the boxes that cover each block on the image surface; this can significantly affect the results of the curve fitting process and result in an inaccurate estimate of the Fractal dimension. Moreover, the Fractal dimension calculated using the traditional DBC cannot accurately reflect the local and global facial features of different and similar classes. Finally, the traditional DBC method can suffer from over- or under-counting of the number of boxes that cover a specific block, which leads to an inaccurate Fractal dimension [44, 45]. In the proposed IDBC, the Fractal dimension feature is estimated from each block using \(\log_2 4\) different sizes of boxes. Then, 16 Fractal dimension features are estimated from each sub-image. By combining the features obtained from the four sub-images \((4 \times 16)\), we construct a sub-row feature vector \(V_i = \{Fd_1, Fd_2, \ldots, Fd_{64}\}\) for each Curvelet sub-band. As in Eq. (10), the final feature vector \(IDBC_{Vector}\) of the middle frequency bands is constructed by combining the \(V_i\) from the 4 and 8 sub-bands located at scales 2 and 3, respectively.
$$IDBC_{Vector} = \{V_1, V_2, \ldots, V_{12}\} \quad (10)$$
In this work, to ensure correct division without losing any important information, the Curvelet sub-bands at scales 2 and 3 have been resized from their original sizes to \((24 \times 24)\) and \((32 \times 32)\), respectively. The experimental results have demonstrated that calculating the Fractal dimension features using different sizes of boxes covering the same block can play a significant role in increasing the discriminative power of the final feature vector, by efficiently reflecting the face texture information carried by the edges and curves of the face present in the middle frequency bands.
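A plain Python version of the differential box counting of Eqs. (7)–(9), which the IDBC extends; the box height G·s/M follows the classical DBC formulation (the text sets \(s' = s\), which coincides when the gray-level range equals M), and the scale set is an assumption.

```python
import numpy as np

def dbc_fractal_dimension(img, scales=(2, 4, 8, 16)):
    """Fractal dimension of a (square) gray-level image by differential box counting."""
    img = img.astype(float)
    M = img.shape[0]                     # assumes a square M x M input
    G = img.max() + 1.0                  # gray-level range
    log_inv_r, log_Nr = [], []
    for s in scales:
        h = G * s / M                    # box height (classical choice; s' = s when G = M)
        Nr = 0
        for i in range(0, M - s + 1, s):
            for j in range(0, M - s + 1, s):
                block = img[i:i + s, j:j + s]
                l = int(np.floor(block.max() / h))
                k = int(np.floor(block.min() / h))
                Nr += l - k + 1          # Eq. (8): boxes covering this block
        log_inv_r.append(np.log(M / s))  # log(1/r) with r = s/M
        log_Nr.append(np.log(Nr))
    # Eq. (7): FD is the slope of log(N_r) against log(1/r)
    return np.polyfit(log_inv_r, log_Nr, 1)[0]
```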
Face matching
Classification and decision making are the final steps in the proposed Curvelet–Fractal approach. These refer to the process of either classifying the tested samples into N classes based on the identity of the training subjects, or deciding whether two faces belong to the same subject or not. In this paper, the QDC and CC classifiers are used in the identification and verification tasks, respectively. The QDC from PRTools is a supervised learning algorithm commonly used for multi-class classification tasks. It is a Bayes-Normal-2 classifier assuming Gaussian distributions, which aims to differentiate between two or more classes using a quadric surface. Using this Bayes rule, a separate covariance matrix is estimated for each class, yielding quadratic decision boundaries. This is done by estimating the covariance matrix C from the scatter matrix S as follows:
$$C = (1 - \alpha - \beta)\, S + \alpha\, \mathrm{diag}(S) + \frac{\beta}{n} \sum \mathrm{diag}(S) \quad (11)$$
Here, n refers to the dimensionality of the feature space, and \(\alpha, \beta \in [0, 1]\) are regularization parameters. In this work, these parameters are determined empirically to be \(\alpha = 0.1\) and \(\beta = 0.2\), as explained in Sect. 4.2.1. Decision making is based on calculating the similarity scores between two face images using the CC classifier, which is defined as follows:
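The regularized class covariance of Eq. (11) in NumPy, with the reported α = 0.1 and β = 0.2; reading the scalar term as multiplying the identity is an assumption, and this mirrors (but is not) the PRTools qdc implementation.

```python
import numpy as np

def regularized_cov(S, alpha=0.1, beta=0.2):
    """Eq. (11): shrink the scatter matrix towards its diagonal and a scaled identity."""
    n = S.shape[0]
    return ((1 - alpha - beta) * S
            + alpha * np.diag(np.diag(S))
            + (beta / n) * np.trace(S) * np.eye(n))   # trace(S) = sum of diag(S)
```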
$$C(A, B) = \frac{\sum_m \sum_n (A_{mn} - \bar{A})(B_{mn} - \bar{B})}{\sqrt{\left(\sum_m \sum_n (A_{mn} - \bar{A})^2\right)\left(\sum_m \sum_n (B_{mn} - \bar{B})^2\right)}} \quad (12)$$
Here, m and n are the dimensions of the sample, and \(\bar{A}\) and \(\bar{B}\) are the mean values of the testing and training samples, respectively.
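Eq. (12) is the standard two-dimensional correlation coefficient; a direct NumPy version:

```python
import numpy as np

def corr2(A, B):
    """Correlation coefficient between two equally sized arrays, Eq. (12)."""
    A, B = A - A.mean(), B - B.mean()
    return (A * B).sum() / np.sqrt((A ** 2).sum() * (B ** 2).sum())
```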
Learning additional feature representations
In this study, we argue that applying the DBN on top of local feature representations instead of the pixel intensity representations (raw data), as a way of guiding the learning process, can greatly improve the ability of the DBN to learn more discriminating features, with a shorter training time required to obtain the final trained model. As shown in Fig. 2, the local facial features are first extracted using the proposed Curvelet–Fractal approach. Then, the extracted local features are assigned to the feature extraction units of the DBN to learn additional and complementary representations. In this work, the DBN architecture stacks 3 RBMs (3 hidden layers). The first two RBMs are used as generative models, while the last one is used as a discriminative model associated with softmax units for multi-class classification. Finally, the hidden layers of the DBN are trained one at a time in a bottom-up manner, using a greedy layer-wise training algorithm.
In this work, the methodology used to train the DBN model can be divided into three stages: the pre-training, supervised, and fine-tuning phases.
In the pre-training phase, the first two RBMs are trained in a purely unsupervised way, using a greedy training algorithm in which each added hidden layer is trained as an RBM (e.g., using the CD algorithm). The activation outputs of a trained RBM can be viewed as feature representations extracted from its input data, and they become the input data (visible units) used to train the next RBM in the stack. The unsupervised pre-training phase is finished when the learning of the second hidden layer is completed. The main advantage of the greedy unsupervised pre-training procedure is the ability to train the DBN using a massive amount of unlabeled training data, which can improve the generalization ability and prevent overfitting. In addition, the degree of complexity is reduced and the speed of training is increased.
In the supervised phase, the last RBM is trained as a nonlinear classifier using the training and validation set along with their associated labels to observe its performance in each epoch.
Finally, the fine-tuning phase is performed in a top-down manner using the back-propagation algorithm to fine-tune the parameters (weights) of the whole network for optimal classification. (A minimal code sketch of the greedy pre-training phase is given below.)
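A hedged sketch of the greedy pre-training phase using scikit-learn's BernoulliRBM as a stand-in for the RBMs described above (a Gaussian RBM would be more faithful for the first, real-valued layer); the layer sizes 800–800–1000 follow Table 1, while the input features and the iteration count are placeholders.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

X = np.random.default_rng(0).random((500, 1024))     # stand-in feature vectors in [0, 1]
layers, inputs = [], X
for n_hidden in (800, 800, 1000):
    rbm = BernoulliRBM(n_components=n_hidden, learning_rate=0.01,
                       batch_size=100, n_iter=10).fit(inputs)
    layers.append(rbm)
    inputs = rbm.transform(inputs)                   # activations feed the next RBM
```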
A difference compared with conventional neural networks is that DBNs require a massive amount of training data to avoid overfitting during the learning process and achieve satisfactory predictions. Hence, data augmentation is the simplest and most common method of achieving this; it artificially enlarges the training dataset using techniques such as random crops, intensity variations, and horizontal flipping. In contrast to previous works that randomly sample a large number of face image patches [12, 47], we propose to uniformly sample a small number of face image patches. To prevent background information from artificially boosting the results of the proposed Curvelet–Fractal approach, and to speed up experiments when the DBN is applied directly to the pixel intensity representations, the face region is detected and the data augmentation procedure is implemented on the detected face image. In this work, for a face image of size \((H_{dim} \times W_{dim})\), five image patches of the same size are cropped, four starting from the corners and one centered (plus their horizontally flipped counterparts), which helps maximize the complementary information contained within the cropped patches. Figure 8 shows the ten image patches generated from a single input image.
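A sketch of the ten-patch augmentation: four corner crops and a centre crop of an assumed patch size (ph, pw), plus their horizontal flips.

```python
import numpy as np

def ten_patches(img, ph, pw):
    """Four corner crops + one centre crop, each with its horizontal flip."""
    H, W = img.shape[:2]
    tops = (0, 0, H - ph, H - ph, (H - ph) // 2)
    lefts = (0, W - pw, 0, W - pw, (W - pw) // 2)
    crops = [img[t:t + ph, l:l + pw] for t, l in zip(tops, lefts)]
    return crops + [c[:, ::-1] for c in crops]       # add horizontally flipped versions
```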
Fig. 8 Data augmentation procedure: a detected face image, b the normalized face patches used as input for the MDFR, where the top row shows patches sampled from (a) and the bottom row their horizontally flipped versions
In this section, comprehensive experiments are described using the proposed approaches for both face identification and verification tasks, in order to demonstrate their effectiveness and compare their performance with existing approaches. First, a brief description of the face datasets used in these experiments is given. Then a detailed evaluation and comparison with state-of-the-art approaches is presented, in addition to some insights and findings about learning additional feature representations by training a DBN on top of local feature representations.
Fig. 9 Examples of face images in the four face datasets: a SDUMLA-HMT, b FERET, c CAS-PEAL-R1, and d LFW
Face datasets
In this work, all the experiments were conducted on four large-scale unconstrained face datasets: SDUMLA-HMT [48], FacE REcognition Technology (FERET) [49], CAS-PEAL-R1 [50], and Labeled Faces in the Wild (LFW) [51]. Some examples of face images from each dataset are shown in Fig. 9.
SDUMLA-HMT dataset [48] This includes 106 subjects, each with 84 face images taken from 7 viewing angles and under different experimental conditions including facial expressions, accessories, poses, and illumination. The main purpose of this dataset is to simulate real-world conditions during face image acquisition. The image size is \((640 \times 480)\) pixels.
FERET dataset [49] This contains a total of 14,126 images taken from 1196 subjects, with at least 365 duplicate sets of images. This is one of the largest publicly available face datasets, with a high degree of diversity in facial expression, gender, illumination conditions, and age. The image size is \((256 \times 384)\) pixels.
CAS-PEAL-R1 dataset [50] A subset of the CAS-PEAL face dataset has been released for research purposes and named CAS-PEAL-R1. This contains a total of 30,863 images taken from 1040 Chinese subjects (595 males and 445 females). The image size is \((360 \times 480)\) pixels.
LFW dataset [51] This contains a total of 13,233 images taken from 5749 subjects, where 1680 subjects appear in two or more images. In the LFW dataset, all images were collected from Yahoo! News articles on the Web, with a high degree of intra-personal variation in facial expression, illumination conditions, occlusion from wearing hats and glasses, etc. It has been used in recent years to address the problem of the unconstrained face verification task. The image size is \((250 \times 250)\) pixels.
Face identification experiments
This section describes the evaluation of the proposed approaches to the face identification problem on three different face datasets: SDUMLA-HMT, FERET, and CAS-PEAL-R1. In this work, the SDUMLA-HMT dataset is used as the main dataset to fine-tune the hyper-parameters of the proposed Curvelet–Fractal approach (e.g., regularization parameters of the QDC classifier) as well as the proposed MDFR framework (e.g., number of hidden units per layer), because it has more images per person in its image gallery than the other databases. This allowed more flexibility in dividing the face images into training, validation and testing sets.
Fig. 10 The validation accuracy rate (VAR) generated throughout the 121 experiments used to find the best regularization parameters
Parameter settings of the Curvelet–Fractal approach
In the proposed Curvelet–Fractal approach, the most important setting is the regularization parameters of the QDC classifier. In this work, these parameters were determined empirically by varying their values from 0 to 1 in steps of 0.1, starting with \(\alpha = 0\) and \(\beta = 0\). Hence, 121 experiments were conducted, in which each time the former was increased by 0.1 and tested with all possible values of the latter. Figure 10 shows the validation accuracy rate (VAR) generated throughout these experiments. These experiments were carried out using 80% randomly selected samples for the training set and the remaining 20% for the testing set. In particular, the parameter optimization process is performed on the training set using a tenfold cross-validation procedure that divides the training set into k subsets of equal size. Sequentially, one subset is used to evaluate the performance of the classifier trained on the remaining \(k - 1\) subsets. Then, the average error rate (AER) over the 10 trials is calculated as follows:
$$AER = \frac{1}{k} \sum\limits_{i=1}^{k} Error_i$$
Fig. 11: Performance comparison between the Curvelet–Fractal and CT-FBM approaches on the SDUMLA-HMT dataset
Here, $Error_i$ refers to the error rate per trial. After finding the best values of the regularization parameters, the QDC classifier is trained using the whole training set, and its ability to properly predict unseen data is then evaluated using the testing set. Algorithm 1 shows pseudo-code of the procedure proposed to train the QDC classifier. Figure 11 compares the present Curvelet–Fractal approach with our previous Curvelet Transform-fractional Brownian motion (CT-FBM) approach described in [20], using the cumulative match characteristic (CMC) curve to visualize the performance of both approaches. It can be seen in Figure 11 that the Rank-1 identification rate has increased dramatically, from the 0.90-0.95 range with CT-FBM to more than 0.95 (approaching 1.0) with the Curvelet–Fractal approach.
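For illustration only, the following Python sketch mirrors this tuning procedure. It is not the paper's MATLAB/PRTools implementation: the classifier qdc_stand_in is a hypothetical stand-in for the QDC (a regularized quadratic discriminant in which alpha shrinks each class covariance toward the pooled covariance and beta shrinks toward a scaled identity), and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def kfold_indices(n, k=10):
    """Shuffle indices and split them into k roughly equal folds."""
    idx = rng.permutation(n)
    return np.array_split(idx, k)

def qdc_stand_in(Xtr, ytr, Xte, alpha, beta):
    """Hypothetical stand-in for the QDC: alpha shrinks each class covariance
    toward the pooled covariance, beta shrinks toward a scaled identity."""
    classes = np.unique(ytr)
    d = Xtr.shape[1]
    pooled = np.cov(Xtr.T)
    scores = []
    for c in classes:
        Xc = Xtr[ytr == c]
        S = (1 - alpha) * np.cov(Xc.T) + alpha * pooled
        S = (1 - beta) * S + beta * (np.trace(S) / d) * np.eye(d)
        diff = Xte - Xc.mean(axis=0)
        maha = np.einsum('ij,jk,ik->i', diff, np.linalg.pinv(S), diff)
        _, logdet = np.linalg.slogdet(S + 1e-9 * np.eye(d))
        scores.append(-0.5 * (maha + logdet))
    return classes[np.argmax(scores, axis=0)]

def average_error_rate(X, y, alpha, beta, k=10):
    """AER = (1/k) * sum of per-fold error rates, as in the equation above."""
    folds = kfold_indices(len(y), k)
    errs = []
    for i in range(k):
        te = folds[i]
        tr = np.concatenate([f for j, f in enumerate(folds) if j != i])
        pred = qdc_stand_in(X[tr], y[tr], X[te], alpha, beta)
        errs.append(np.mean(pred != y[te]))
    return np.mean(errs)

# Toy data and the 11 x 11 = 121 grid over [0, 1] in steps of 0.1.
X = np.vstack([rng.normal(m, 1.0, (30, 5)) for m in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 30)
grid = np.round(np.arange(0.0, 1.01, 0.1), 1)
best = min(((a, b, average_error_rate(X, y, a, b)) for a in grid for b in grid),
           key=lambda t: t[2])
print('best (alpha, beta, AER):', best)
```

The outer generator reproduces the 121-point grid, while average_error_rate implements the AER equation above.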
MDFR architecture and training details
The major challenge of using DNNs is the number of model architectures and hyper-parameters that need to be evaluated, such as the number of layers, the number of units per layer, the learning rate, and the number of epochs. In a DBN, the value of a specific hyper-parameter may depend mainly on the values selected for other hyper-parameters. Moreover, the values of the hyper-parameters set in one hidden layer (RBM) may depend on the values of the hyper-parameters set in other hidden layers (RBMs). Therefore, hyper-parameter tuning in DBNs is very expensive. Given these findings, the best hyper-parameter values are found by performing a coarse search over all the possible values. In this section, all the experiments were carried out using 60% randomly selected samples for training, and the remaining 40% of the samples were divided into two sets of equal size serving as validation and testing sets. In all experiments, the validation set is used to assess the generalization ability of the MDFR framework during the learning process before using the testing set. Following the training methodology described in Sect. 3.2, the MDFR framework was greedily trained using input data acquired from the Curvelet–Fractal approach. Once the training of a given hidden layer is accomplished, its weight matrix is frozen, and its activations serve as input to train the next layer in the stack.
As shown in Table 1, four different three-layer DBN models were greedily trained in a bottom-up manner using different numbers of hidden units. Each of the first two layers was trained separately as an RBM model in an unsupervised way using the CD learning algorithm with 1 step of Gibbs sampling (CD-1). Each individual model was trained for 300 epochs with a momentum of 0.9, a weight decay of 0.0002, and a mini-batch size of 100. The weights of each model were initialized with small random values sampled from a zero-mean normal distribution with a standard deviation of 0.02. Initially, the learning rate was set to 0.001 for each model, as in [52], but we observed this was inefficient, as each model took too long to converge because the learning rate was too small. Therefore, for all the remaining experiments, the learning rate was set to 0.01. The last RBM model was trained in a supervised way as a nonlinear classifier using the training and validation sets along with their associated labels to evaluate its discriminative performance. In this phase, the same hyper-parameter values used to train the first two models were used, except that the last model was trained for 400 epochs. Finally, in the fine-tuning phase, the whole network was trained in a top-down manner using the back-propagation algorithm equipped with dropout to find optimized parameters and to avoid overfitting. The dropout ratio is set to 0.5, and the number of epochs through the training set was determined using an early stopping procedure, in which the training process is stopped as soon as the classification error on the validation set starts to rise again. In these experiments using the validation set, we found (see Table 1; Fig. 12) that hidden layers with sizes 800, 800, and 1000 provided considerably better results than the other hidden layer sizes that we trained. This model, trained on input data acquired from the Curvelet–Fractal approach, is termed the MDFR framework. Table 1 shows the Rank-1 identification rates obtained from the four trained DBN models over the validation set, while the CMC curves shown in Fig. 12 visualize their performance on the validation set.
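The following minimal numpy sketch illustrates the greedy, layer-wise pre-training stage described above (CD-1, momentum 0.9, weight decay 0.0002, mini-batches of 100, learning rate 0.01, N(0, 0.02) weight initialization). It is a simplified illustration, not the original implementation: the supervised top RBM and the dropout-equipped fine-tuning pass are omitted, and the demo uses far fewer epochs and units than the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs, lr=0.01, momentum=0.9,
              weight_decay=0.0002, batch_size=100):
    """Binary RBM trained with contrastive divergence, 1 Gibbs step (CD-1)."""
    n_visible = data.shape[1]
    W = rng.normal(0.0, 0.02, (n_visible, n_hidden))   # N(0, 0.02) init
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
    dW = np.zeros_like(W)
    for _ in range(epochs):
        for s in range(0, len(data), batch_size):
            v0 = data[s:s + batch_size]
            h0 = sigmoid(v0 @ W + b_h)                 # positive phase
            h0s = (rng.random(h0.shape) < h0).astype(float)
            v1 = sigmoid(h0s @ W.T + b_v)              # one Gibbs step
            h1 = sigmoid(v1 @ W + b_h)                 # negative phase
            grad = (v0.T @ h0 - v1.T @ h1) / len(v0)
            dW = momentum * dW + lr * (grad - weight_decay * W)
            W += dW
            b_v += lr * (v0 - v1).mean(axis=0)
            b_h += lr * (h0 - h1).mean(axis=0)
    return W, b_h

def greedy_stack(data, layer_sizes, epochs):
    """Train layers bottom-up; freeze each layer, pass its activations up."""
    params, act = [], data
    for n_hidden in layer_sizes:
        W, b_h = train_rbm(act, n_hidden, epochs)
        params.append((W, b_h))
        act = sigmoid(act @ W + b_h)                   # frozen layer's output
    return params, act

# Toy demo: 200 random binary vectors, two small hidden layers, 5 epochs.
data = (rng.random((200, 64)) < 0.3).astype(float)
params, top = greedy_stack(data, layer_sizes=(32, 16), epochs=5)
print('top-layer activations:', top.shape)
```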
Table 1 Rank-1 identification rates obtained for different DBN architectures using the validation set
Fig. 12: CMC curves for the four trained DBN models over the validation set
Comparative study of fractal, Curvelet–Fractal, DBN and MDFR approaches
In this section, to evaluate the feature representations obtained from the MDFR framework, its recognition accuracy was compared with the feature representations obtained by the Fractal and Curvelet–Fractal approaches and by the DBN (see footnote 4). This comparison study was conducted for several reasons: firstly, to demonstrate the efficiency of the proposed Curvelet–Fractal approach compared with applying the Fractal dimension individually; secondly, to demonstrate that the feature representations acquired by the MDFR framework as a deep learning approach are complementary to the feature representations acquired by the Curvelet–Fractal approach as a handcrafted descriptor; thirdly, to show that applying the DBN on top of the local feature representations instead of the pixel intensity representations can significantly improve the ability of the DBN to learn more discriminating features with less training time required. Finally, using these complementary feature representations, the MDFR framework was able to efficiently handle the nonlinear variations of face images thanks to the nonlinearity of a DBN. In this work, the input image was rescaled to 32 × 32 pixels to speed up the experiments when the Fractal dimension approaches are applied directly to the face image. Here, $Fractal_{Vector}$ denotes applying both the FBM and IDBC approaches directly to the input image.
As shown in Fig. 13, a higher identification rate was obtained using the proposed Curvelet–Fractal approach compared to applying the Fractal dimension alone. Furthermore, we were able to further improve the recognition rate of the Curvelet–Fractal approach by learning additional feature representations through the MDFR framework, as well as improve the performance of the DBN by forcing it to learn only the important facial features (e.g., edges and curves). To further examine the robustness of the proposed approaches, a number of experiments were conducted on the FERET and CAS-PEAL-R1 datasets, and the results obtained are compared with state-of-the-art approaches. For a fair comparison, the performance of the Curvelet–Fractal approach was evaluated using the standard evaluation protocols of the FERET and CAS-PEAL-R1 datasets described in [49, 50], respectively. In this work, to prevent overfitting and increase the generalization ability of the MDFR framework, the data augmentation procedure described in Sect. 3.2 was applied only to the gallery set of these two datasets. Then, its performance during the learning process was observed on a separate validation set taken from the full augmented gallery set.
Fig. 13: Performance comparison between the DBN, Curvelet–Fractal and MDFR methods on the SDUMLA-HMT dataset
According to the standard evaluation protocol, the FERET dataset is divided into five distinct sets: Fa contains a total of 1196 subjects with one image per subject and is used as the gallery set. Fb contains 1195 images taken on the same day and under the same lighting conditions as the Fa set, but with different facial expressions. The Fc set has 194 images taken on the same day as the Fa set, but under different lighting conditions. The Dup I set contains 722 images acquired on different days after the Fa set. Finally, the Dup II set contains 234 images acquired at least 1 year after the Fa set. Following the standard evaluation protocol, the last four sets are used as probe sets to address the most challenging problems in the face identification task, such as facial expression variation, illumination changes, and facial aging. Table 2 lists the Rank-1 identification rates of the proposed approaches and the state-of-the-art face recognition approaches on all four probe sets of the FERET dataset.
The standard CAS-PEAL-R1 evaluation protocol divides the dataset into a gallery set and six frontal probe sets without overlap between the gallery set and any of the probe sets. The gallery set consists of 1040 images of 1040 subjects taken under normal conditions. The six probe sets contain face images with the following basic types of variations: expression (PE) with 1570 images, accessories (PA) with 2285 images, lighting (PL) with 2243 images, time (PT) with 66 images, background (PB) with 553 images, and distance (PS) with 275 images. Table 3 lists the Rank-1 identification rates of the proposed approaches and the state-of-the-art face recognition approaches on all six probe sets of the CAS-PEAL-R1 dataset.
It can be seen from the results listed in Tables 2 and 3 that we were able to achieve results competitive with the state-of-the-art face identification results on the FERET and CAS-PEAL-R1 datasets using only the Curvelet–Fractal approach. Its performance was compared with popular and recent feature descriptors, such as G-LQP, LBP, and WPCA. Although some approaches, such as DFD(S=5)+WPCA [53], GOM [10], AMF [11], and the DBN approach, achieved a slightly higher identification rate on the Fc probe set, they obtained inferior results on the other probe sets of the FERET dataset. In addition, the Curvelet–Fractal approach achieved a higher identification rate on all the probe sets of the CAS-PEAL-R1 dataset. Some existing approaches, such as H-Groupwise MRF [54] and FHOGC [55], also achieved a 100% identification rate on the PB and PT probe sets, respectively, but they obtained inferior results on the other probe sets of the CAS-PEAL-R1 dataset. Finally, further improvements and a new state-of-the-art recognition accuracy were achieved using the MDFR framework on the FERET and CAS-PEAL-R1 datasets, in particular when the most challenging probe sets are under consideration, such as Dup I and Dup II in the FERET dataset and PE, PA, PL, and PS in the CAS-PEAL-R1 dataset.
Table 2 The Rank-1 identification rates of different methods on the FERET probe sets
Face verification experiments
In this section, the robustness and effectiveness of the proposed approaches were examined to address the unconstrained face verification problem using the LFW dataset. The face images in the LFW dataset were divided into two distinct Views: "View 1" is used for selecting and tuning the parameters of the recognition model, while "View 2" is used to report the final performance of the selected model. In "View 2", the face images are paired into 6000 pairs, with 3000 pairs labeled as positive pairs and the rest as negative pairs. The final performance is reported as described in [51] by calculating the mean accuracy rate ($\hat{\mu}$) and the standard error of the mean accuracy ($S_E$) over tenfold cross-validation, with 300 positive and 300 negative image pairs per fold. For a fair comparison between all face recognition algorithms, the creators of the LFW dataset have pre-defined six evaluation protocols, as described in [63]. In this work, the "Image-Restricted, Label-Free Outside Data" protocol is followed, where only outside data are used to train the MDFR framework.
Table 3 The Rank-1 identification rates of different methods on the CAS-PEAL-R1 probe sets
Table 4 Performance comparison between the proposed approaches and the state-of-the-art approaches on LFW dataset under different evaluation protocols
Furthermore, the aligned LFW-a (footnote 5) dataset is used, and the face images were resized to 128 × 128 pixels after the face region had been detected using the pre-trained Viola–Jones (footnote 6) face detector. For the proposed Curvelet–Fractal approach, the feature representation of each test sample is obtained first, and then the similarity score between each pair of face images is calculated using the CC classifier. In the training phase, the Curvelet–Fractal approach does not use any data augmentation or outside data (e.g., creating additional positive/negative pairs from any other source), apart from the pre-trained Viola–Jones face detector, which has been trained using outside data. The final results over tenfolds are reported, where each of the 10 experiments is completely independent of the others and the decision threshold of the CC classifier is learnt from the training set according to the standard evaluation protocol. Then, the accuracy rate in each round of tenfold cross-validation is calculated as the number of correctly classified pairs of samples divided by the total number of test sample pairs. For further evaluation, the results obtained from the Curvelet–Fractal approach were compared to state-of-the-art approaches on the LFW dataset, such as DDML [64], LBP, Gabor [65], and MSBSIF-SIEDA [66], using the same evaluation protocol (Restricted), as shown in Table 4. It can be seen that the accuracy rate of the Curvelet–Fractal approach, 0.9622 ± 0.0272, is higher than the best previously reported result on the LFW dataset, which is 0.9463 ± 0.0095. In this work, further improvements and a new state-of-the-art result were achieved by applying the MDFR framework on the LFW dataset. This experiment can be considered an examination of the MDFR's generalization ability to address the unconstrained face verification problem on the LFW dataset. In this work, the final performance of two pre-trained DBN models was evaluated: the first model was applied directly on top of pixel intensity representations, while the second was applied on top of local feature representations and is referred to as the MDFR framework. Following the same evaluation protocol mentioned above, the hyper-parameters of the MDFR framework were fine-tuned using data from the SDUMLA-HMT dataset, as described in Sect. 4.2.2.
Fig. 14: ROC curves averaged over tenfolds of "View 2" of the LFW-a dataset: performance comparison between the DBN, Curvelet–Fractal, and MDFR framework on the face verification task
Table 5 The average training time of the proposed approaches using different datasets
In the MDFR framework, the feature representations $f_x$ and $f_y$ of a pair of images $I_x$ and $I_y$ are obtained first by applying the Curvelet–Fractal approach, and then a feature vector $F$ for this pair is formed using element-wise multiplication ($F = f_x \odot f_y$). Finally, these feature vectors $F$ (extracted from pairs of images) are used as input data to the DBN to learn additional feature representations and perform face verification in the last layer. The performance of the MDFR framework is reported over tenfolds, where each time one fold was used for testing and the other nine folds for training. For each round of the 10 experiments, the data augmentation procedure was applied to the training set to avoid overfitting and increase the generalization ability of the network. Table 4 lists the mean accuracy of recent state-of-the-art methods on the LFW dataset, and the corresponding ROC curves are shown in Fig. 14. The MDFR framework significantly improves over the mean accuracy rates of the Curvelet–Fractal approach and of the DBN model applied directly on top of pixel intensity representations, by 2.6 and 5.3%, respectively. In this work, the performance of the proposed MDFR framework is also compared with several state-of-the-art deep learning approaches, including DeepFace [67], DeepID [47], ConvNet-RBM [68], convolutional DBN [18] and DDML [64]. The first three approaches were mainly trained using the "Unrestricted, Labeled Outside Data" protocol, in which a private dataset consisting of a large number of training images (>100 K) is employed. The accuracy rate has been improved by 1.38% compared to the next highest result, reported by DeepID [47]. These promising results demonstrate the good generalization ability of the MDFR framework and its feasibility for deployment in real applications.
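As a small illustration of the pair representation, the sketch below (hypothetical helper name, synthetic data) forms the element-wise product $F = f_x \odot f_y$ for a batch of image pairs; in the MDFR framework these vectors would then be fed to the DBN, whose last layer outputs the verification decision (not shown here).

```python
import numpy as np

def pair_features(fx, fy):
    """F = f_x * f_y (element-wise): one fused vector per image pair."""
    return fx * fy

# Toy demo: 4 pairs of 10-dimensional Curvelet-Fractal feature vectors.
rng = np.random.default_rng(0)
FX, FY = rng.random((4, 10)), rng.random((4, 10))
F = pair_features(FX, FY)   # shape (4, 10); rows feed the DBN
print(F.shape)
```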
In this section, the running time of the proposed approaches, including the Curvelet–Fractal approach, the DBN, and the MDFR framework, was measured by implementing them on a personal computer with the Windows 8 operating system, a 3.60 GHz Core i7-4790 CPU and 24 GB of RAM. The system code was written to run in MATLAB R2015a. It should be noted that the running time of the proposed approaches is proportional to the number of subjects and their images in the dataset. The training time using the different datasets is given in Table 5. It is clear from the table that the proposed MDFR framework significantly reduces the training time compared to the DBN applied directly on top of the pixel intensity representations. Moreover, the computational efficiency of the proposed MDFR framework can be further improved using graphics processing units (GPUs) and code optimization. The test time per image, from image input until the recognition decision, is about 1.30 ms for the Curvelet–Fractal approach and 1.80 ms for the MDFR framework, which is fast enough for real-time applications.
Conclusions and future work
In this paper, a novel multimodal local feature extraction approach is proposed based on merging the advantages of multidirectional and anisotropic transforms, such as the Curvelet transform, with the Fractal dimension. The main contribution of this approach is to apply the Curvelet transform as a fast and powerful technique for representing the edges and curves of the face structure, and then to process the Curvelet coefficients in different frequency bands using two different Fractal dimension approaches to efficiently reflect the face texture under unconstrained environmental conditions. The proposed approach was tested on four large-scale unconstrained face datasets (SDUMLA-HMT, FERET, CAS-PEAL-R1 and LFW) with high diversity in facial expressions, lighting conditions, noise, etc. The results obtained demonstrated the reliability and efficiency of the Curvelet–Fractal approach by achieving results competitive with state-of-the-art approaches (e.g., G-LQP, LBP, WPCA), especially when there is only one image in the gallery set. Furthermore, a novel MDFR framework is proposed to learn additional and complementary information by applying the DBN on top of the local feature representations obtained from the Curvelet–Fractal approach. Extensive experiments were conducted, and a new state-of-the-art accuracy rate was achieved by applying the proposed MDFR framework on all the employed datasets. Based on the results, it can be concluded that the proposed Curvelet–Fractal approach and MDFR framework can be readily used in real face recognition systems for both identification and verification tasks with different face variations. On the basis of the promising findings presented in this paper, work on testing the proposed approaches on more challenging datasets is continuing and will be presented in future papers. In addition, further study of fusing the results obtained from the Curvelet–Fractal approach and the MDFR framework would be of interest.
Footnote 1: http://www.37steps.com/prhtml/prtools.html.
Footnote 2: The data augmentation procedure is not implemented during the performance assessment of the proposed Curvelet–Fractal approach.
Footnote 3: In this work, the data augmentation procedure is applied only to the training and validation sets.
Footnote 4: The DBN model was trained on top of the pixel intensity representation using the same hyper-parameters as the MDFR framework.
Footnote 5: http://www.openu.ac.il/home/hassner/data/lfwa/.
Footnote 6: Incorrect face detection results were handled manually to ensure that all subjects contributed to the subsequent evaluation of the proposed approaches.
Bhowmik, M.K., Bhattacharjee, D., Nasipuri, M., Basu, D.K., Kundu, M.: Quotient based multiresolution image fusion of thermal and visual images using daubechies wavelet transform for human face recognition. Int. J. Comput. Sci. 7(3), 18–27 (2010)
Parmar, D.N., Mehta, B.B.: Face recognition methods & applications. Comput. Technol. Appl. 4(1), 84–86 (2013)
Jafri, R., Arabnia, H.R.: A survey of face recognition techniques. J. Inf. Process. Syst. 5(2), 41–68 (2009)
Imtiaz, H., Fattah, S.A.: A curvelet domain face recognition scheme based on local dominant feature extraction. ISRN Signal Process. 2012, 1–13 (2012)
Shreeja, R., Shalini, B.: Facial feature extraction using statistical quantities of curve coefficients. Int. J. Eng. Sci. Technol. 2(10), 5929–5937 (2010)
Zhang, B., Qiao, Y.: Face recognition based on gradient gabor feature and efficient Kernel Fisher analysis. Neural Comput. Appl. 19(4), 617–623 (2010)
Cao, Z., Yin, Q., Tang, X., Sun, J.: Face recognition with learning-based descriptor. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2707–2714 (2010)
Simonyan, K., Parkhi, O., Vedaldi, A., Zisserman, A.: Fisher vector faces in the wild. In: Proceedings of British Machine Vision Conference 2013, pp. 8.1–8.11 (2013)
Al Ani, M.S., Al-Waisy, A.S.: Face recognition approach based on wavelet–curvelet technique. Signal Image Process. Int. J. 3(2), 21–31 (2012)
Chai, Z., Sun, Z., Mendez-Vazquez, H., He, R., Tan, T.: Gabor ordinal measures for face recognition. IEEE Trans. Inf. Forensics Secur. 9(1), 14–26 (2014)
Li, Z., Gong, D., Li, X., Tao, D.: Learning compact feature descriptor and adaptive matching framework for face recognition. IEEE Trans. Image Process. 24(9), 2736–2745 (2015)
Sun, Y., Wang, X., Tang, X.: Deep learning face representation by joint identification–verification. In: Advances in Neural Information Processing Systems, pp. 1988–1996 (2014)
Lee, H., Grosse, R., Ranganath, R., Andrew, Y.N.: Unsupervised learning of hierarchical representations with convolutional deep belief networks. Commun. ACM 54(10), 95–103 (2011)
Liu, J., Fang, C., Wu, C.: A fusion face recognition approach based on 7-layer deep learning neural network. J. Electr. Comput. Eng. 2016, 1–7 (2016). doi:10.1155/2016/8637260
Fousek, P., Rennie, S., Dognin, P., Goel, V.: Direct product based deep belief networks for automatic speech recognition. In: Proceedings of IEEE International Conference on Acoustics Speech and Signal Processing ICASSP 2013, pp. 3148–3152 (2013)
Lee, H., Pham, P., Largman, Y., Ng, A.: Unsupervised feature learning for audio classification using convolutional deep belief networks. In: Advances in Neural Information Processing Systems Conference, pp. 1096–1104 (2009)
Sarikaya, R., Hinton, G.E., Deoras, A.: Application of deep belief networks for natural language understanding. IEEE Trans. Audio, Speech Lang. Process. 22(4), 778–784 (2014)
Huang, G.B., Lee, H., Learned-Miller, E.: Learning hierarchical representations for face verification with convolutional deep belief networks. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2518–2525 (2012)
Li, C., Wei, W., Wang, J., Tang, W., Zhao, S.: Face recognition based on deep belief network combined with center-symmetric local binary pattern. Adv. Multimed. Ubiquitous Eng. Springer Singap. 354, 277–283 (2016)
Al-Waisy, A.S., Qahwaji, R., Ipson, S., Al-Fahdawi, S.: A robust face recognition system based on curvelet and fractal dimension transforms. In: 2015 IEEE International Conference on Computer and Information Technology, Ubiquitous Computing and Communications, Dependable, Autonomic and Secure Computing, Pervasive Intelligence and Computing, pp. 548–555 (2015)
Majumdar, A., Bhattacharya, A.: A comparative study in wavelets, curvelets and contourlets as feature sets for pattern recognition. Int. Arab J. Inf. Technol. 6(1), 47–51 (2009)
Candes, E., Demanet, L., Donoho, D., Lexing, Y.: Fast discrete curvelet transforms. Multiscale Model. Simul. 5(3), 861–899 (2006)
Arivazhagan, S., Ganesan, L., Subash Kumar, T.G.: Texture classification using curvelet statistical and co-occurrence features. In: 18th International Conference on Pattern Recognition (ICPR'06), pp. 938–941 (2006)
Bhutada, G.G., Anand, R.S., Saxena, S.C.: Edge preserved image enhancement using adaptive fusion of images denoised by wavelet and curvelet transform. Digit. Signal Process. 21(1), 118–130 (2011)
Li, Y., Yang, Q., Jiao, R.: Image compression scheme based on curvelet transform and support vector machine. Expert Syst. Appl. 37(4), 3063–3069 (2010)
Chen, M.-S., Lin, S.-D.: Image fusion based on curvelet transform and fuzzy logic. In: 2012 5th International Congress on Mage and Signal Processing (CISP), pp. 1063–1067. IEEE (2012)
Ruihong, Y., Liwei, T., Ping, W., Jiajun, Y.: Image denoising based on Curvelet transform and continuous threshold. In: 2010 First International Conference on Pervasive Computing Signal Processing and Applications, pp. 13–16 (2010)
Lee, Y.-C., Chen, C.-H.: Face recognition based on digital curvelet transform. In: 2008 Eighth International Conference on Hybrid Intelligent System Design and Engineering Application, pp. 341–345 (2008)
Mandal, T., Wu, Q.M.J.: Face recognition using curvelet based PCA. In: 19th International Conference on Pattern Recognition, ICPR 2008, pp. 1–4 (2008)
Xie, J.: Face recognition based on curvelet transform and LS-SVM. In: Proceedings of the International Symposium on Information Processing, pp. 140–143 (2009)
Lopes, R., Betrouni, N.: Fractal and multifractal analysis: a review. Med. Image Anal. 13(4), 634–49 (2009)
Mandelbrot, B.: The Fractal Geometry of Nature, 3rd edn. W. H. Freeman, San Francisco (1983)
Mandelbrot, B.: Self-affinity and fractal dimension. Phys. Scr. 32, 257–260 (1985)
Hsu, T., Hum K.-J.: Multi-resolution texture segmentation using fractal dimension. In: 2008 International Conference on Computer Science and Software Engineering, pp. 201–204 (2008)
Al-Kadi, O.S.: A multiresolution clinical decision support system based on fractal model design for classification of histological brain tumours. Comput. Med. Imaging Graph. 41(2015), 67–79 (2015)
Zhu, Z., Gao, J., Yu, H.: Face detection based on fractal and complexion model in the complex background. In: 2010 International Working Chaos–Fractals Theories and Applications, pp. 491–495 (2010)
Lin, K., Lam, K., Siu, W.: Locating the human eye using fractal dimensions. In: Proceedings International Conference on Image Processing (Cat. No.01CH37205), pp. 1079–1082 (2001)
Farhan, M.H., George, L.E., Hussein, A.T.: Fingerprint identification using fractal geometry. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 4(1), 52–61 (2014)
Hinton, G.E., Osindero, S., Teh, Y.-W.: A fast learning algorithm for deep belief nets. Neural Comput. 18(7), 1527–1554 (2006)
Hinton, G.: A practical guide to training restricted Boltzmann machines. Tech. Rep. UTML TR 2010-003, Department of Computer Science, University of Toronto (2010)
Viola, P., Way, O.M., Jones, M.J.: Robust real-time face detection. Int. J. Comput. Vis. 57(2), 137–154 (2004)
Chen, D.-R., Chang, R.-F., Chen, C.-J., Ho, M.-F., Kuo, S.-J., Chen, S.-T., Hung, S.-J., Moon, W.K.: Classification of breast ultrasound images using fractal feature. Clin. Imaging 29(4), 235–45 (2005)
Chen, C., Daponte, J.S., Fox, M.D.: Fractal feature analysis and classification in medical imaging. IEEE Trans. Med. Imaging 8(2), 133–142 (1989)
Liu, S.: An improved differential box-counting approach to compute fractal dimension of Gray-level image, In: 2008 International Symposium on Information Science and Engineering, pp. 303–306, (2008)
Long, M., Peng, F.: A box-counting method with adaptable box height for measuring the fractal feature of images. In: Radioengineering, pp. 208–213 (2013)
Li, J., Du, Q., Sun, C.: An improved box-counting method for image fractal dimension estimation. Pattern Recognit. 42(11), 2460–2469 (2009)
Sun, Y., Wang, X., Tang, X.: Deep learning face representation from predicting 10,000 classes. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1891–1898 (2014)
Yin, Y., Liu, L., Sun, X.: SDUMLA-HMT: a multimodal biometric database. In: Chinese Conference on Biometric Recognition, pp. 260–268. Springer-Verlag, Berlin, Heidelberg (2011)
Phillips, P.J., Moon, H., Rizvi, S.A., Rauss, P.J.: The FERET evaluation methodology for face-recognition algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 22(10), 1090–1104 (2000)
Gao, W., Cao, B., Shan, S., Chen, X., Zhou, D., Zhang, X., Zhao, D.: The CAS-PEAL large-scale Chinese face database and baseline evaluations. IEEE Trans. Syst. Man Cybern. Part A 38(1), 149–161 (2008)
Huang, G.B., Mattar, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: a database for studying face recognition in unconstrained environments. Tech. Rep. 07-49, University of Massachusetts, Amherst (2007)
Campilho, A., Kamel, M.: Image Analysis and Recognition: 11th International Conference, ICIAR 2014 Vilamoura, Portugal, October 22–24, 2014 Proceedings, Part I. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 8814 (2014)
Lei, Z., Pietikainen, M., Li, S.Z.: Learning discriminant face descriptor. IEEE Trans. Pattern Anal. Mach. Intell. 36(2), 289–302 (2014)
Liao, S., Shen, D., Chung, A.C.S.: A Markov random field groupwise registration framework for face recognition. IEEE Trans. Pattern Anal. Mach. Intell. 36(4), 657–669 (2014)
Tan, H., Ma, Z., Yang, B.: Face recognition based on the fusion of global and local HOG features of face images. IET Comput. Vis. 8(3), 224–234 (2014)
Maturana, D., Mery, D., Soto, A.: Learning discriminative local binary patterns for face recognition. In: 2011 IEEE International Conference on Automatic Face & Gesture Recognition Work. FG 2011, pp. 470–475 (2011)
Hussain, S.U., Napoléon, T., Jurie, F.: Face recognition using local quantized patterns. In: British Machine Vision Conference, Guildford, United Kingdom (2012)
Lei, Z., Yi, D., Li, S. Z.: Local gradient order pattern for face representation and recognition. In: Proceedings of International Conference on Pattern Recognition, pp. 387–392 (2014)
Farajzadeh, N., Faez, K., Pan, G.: Study on the performance of moments as invariant descriptors for practical face recognition systems. IET Comput. Vis. 4(4), 272–285 (2010)
Maturana, D., Mery, D., Alvaro, S.: Face recognition with decision tree-based local binary patterns. In: European Conference on Computer Vision, pp. 469–481 (2004)
Yan, Y., Wang, H., Li, C., Yang, C., Zhong, B.: A novel unconstrained correlation filter and its application in face recognition. In: International Conference on Intelligent Science and Intelligent Data Engineering, pp. 32–39. Springer, Berlin, Heidelberg (2013)
Hu, J., Lu, J., Zhou, X., Tan, Y.P.: Discriminative transfer learning for single-sample face recognition. In: Proceedings of 2015 International Conference on Biometrics, ICB 2015, pp. 272–277 (2015)
Huang, G.B., Learned-Miller, E.: Labeled faces in the wild: updates and new reporting procedures. Tech. Rep. UM-CS-2014-003, University of Massachusetts, Amherst, pp. 1–14 (2014)
Hu, J., Lu, J., Tan, Y.P.: Discriminative deep metric learning for face verification in the wild. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1875–1882 (2014)
Zhu, X., Lei, Z., Yan, J., Yi, D., Li, S.Z.: High-fidelity pose and expression normalization for face recognition in the wild. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 787–796 (2015)
Ouamane, A., Bengherabi, M., Hadid, A., Cheriet, M.: Side-information based exponential discriminant analysis for face verification in the wild. In: 11th IEEE International Conference and Workshops on, vol. 2, pp. 1–6 (2015)
Taigman, Y., Yang, M., Ranzato, M., Wolf, L.: DeepFace: closing the gap to human-level performance in face verification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1701–1708 (2014)
Sun, Y., Wang, X., Tang, X.: Hybrid deep learning for face verification. IEEE Trans. Pattern Anal. Mach. Intell. 38(10), 1997–2009 (2016)
Barkan, O., Weill, J., Wolf, L., Aronowitz, H.: Fast high dimensional vector multiplication face recognition. In: Proceedings of IEEE International Conference on Computer Vision, pp. 1960–1967 (2013)
Hassner, T., Harel, S., Paz, E., Enbar, R.: Effective face frontalization in unconstrained images. In: 2015 IEEE Conference on Computer Vision and Pattern Recognition, pp. 4295–4304 (2015)
School of Computing, Informatics and Media, University of Bradford, Bradford, UK
Alaa S. Al-Waisy, Rami Qahwaji, Stanley Ipson & Shumoos Al-Fahdawi
Correspondence to Alaa S. Al-Waisy.
Al-Waisy, A.S., Qahwaji, R., Ipson, S. et al. A multimodal deep learning framework using local feature representations for face recognition. Machine Vision and Applications 29, 35–54 (2018). https://doi.org/10.1007/s00138-017-0870-2
Revised: 19 July 2017
Issue Date: January 2018
Keywords: Fractional Brownian motion, Deep belief network, SDUMLA-HMT database, FERET database, LFW database
Atiyah conjecture on configurations
In mathematics, the Atiyah conjecture on configurations is a conjecture introduced by Atiyah (2000, 2001) stating that a certain n × n matrix depending on n points in R^3 is always non-singular.
See also
• Berry–Robbins problem
References
• Atiyah, Michael (2000), "The geometry of classical particles", Surveys in differential geometry, Surv. Differ. Geom., VII, Int. Press, Somerville, MA, pp. 1–15, MR 1919420
• Atiyah, Michael (2001), "Configurations of points", Philosophical Transactions of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 359 (1784): 1375–1387, Bibcode:2001RSPTA.359.1375A, doi:10.1098/rsta.2001.0840, ISSN 1364-503X, MR 1853626, S2CID 55833332
Significance Tests for Event Studies
The abnormal and cumulative abnormal returns from event studies are typically used in two ways. Either they are used as dependent variables in subsequent regression analyses, or they are interpreted directly. This latter direct interpretation seeks to answer the question whether the distribution of the abnormal returns is systematically different from predicted. In the relevant literature, the focus is almost always on the mean of the distribution of abnormal returns and, specifically, one seeks to answer the question whether this mean is different from zero (with statistical significance).
The answer about statistical significance is given by means of hypothesis testing, where the null hypothesis ($H_0$) states that the mean of the abnormal returns within the event window is zero and the alternative hypothesis ($H_1$) states the opposite. Formally, the testing framework reads as follows:
\begin{equation}H_0: μ = 0 \end{equation}
\begin{equation}H_1: μ \neq 0 \end{equation}
Note that μ may not only represent the mean of simple abnormal returns (ARs). Event studies are oftentimes multi-level calculations, where ARs are compounded to obtain cumulative abnormal returns (CARs), and CARs are 'averaged' to obtain cumulative average abnormal returns (CAARs) in cross-sectional studies (sometimes also called 'sample studies'). In long-run event studies, the buy-and-hold abnormal return (BHAR) is often used to replace CAR. Furthermore, BHARs can then again be 'averaged' to obtain ABHAR for cross-sectional studies. Significance testing can be applied to the mean of any of these returns, meaning that μ in the above testing framework can represent the mean of ARs, CARs, BHARs, AARs, CAARs, and ABHARs. Let us briefly revisit these six different forms of abnormal return calculations, as presented in the introduction:
\begin{equation}AR_{i,t}=R_{i,t}-E[R_{i,t}|\Omega_{i,t-1}], \end{equation}
where the term $E[R_{i,t}|\Omega_{i,t-1}]$ denotes the expected value of $R_{i,t}$ conditional on the information set at time $t-1$, which thus serves as the predicted return. Hence, if the event does not have any effect, the abnormal return will not systematically differ from zero, that is, it has mean zero. The information set at time $t-1$ can be of different nature and represent, for example, the constant-expected-return model, the market model, or a Fama-French factor model.
\begin{equation}AAR_{t}= \frac{1}{N} \sum\limits_{i=1}^{N}AR_{i,t} \end{equation}
\begin{equation}CAR_{i}=\sum\limits_{t=T_1 + 1}^{T_2} AR_{i,t} \end{equation}
\begin{equation}BHAR_{i}=\prod\limits_{t=T_1 + 1}^{T_2} (1 + R_{i,t}) -\prod\limits_{t=T_1 + 1}^{T_2} (1 + E[R_{i,t}|\Omega_{i,t}])\end{equation}
\begin{equation}CAAR=\frac{1}{N}\sum\limits_{i=1}^{N}CAR_i\end{equation}
\begin{equation}ABHAR=\frac{1}{N}\sum\limits_{i=1}^{N}BHAR_{i}\end{equation}
For grouped observations, be it along the firm or the event dimension, we provide a precision-weighted CAAR, which offers a standardization similar to that of the Patell test:
\begin{equation}PWCAAR=\sum\limits_{i=1}^{N}\sum\limits_{t=T_1 + 1}^{T_2}\omega_i AR_{i, t}\end{equation}
where $$\omega_i = \frac{\left(\sum\limits_{t=T_1 + 1}^{T_2} S^2_{AR_{i,t}}\right)^{-0.5}}{\sum\limits_{i=1}^{N}\left(\sum\limits_{t=T_1 + 1}^{T_2}S^2_{AR_{i,t}}\right)^{-0.5}}$$
and $S^2_{AR_{i,t}}$ denotes the forecast-error-corrected variance.
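To make these definitions concrete, the following Python/numpy sketch (ours, not the code behind our research apps) computes AR, AAR, CAR, and CAAR under the market model for a balanced panel with no missing returns; BHAR, ABHAR, and PWCAAR are omitted for brevity. The same arrays reappear in the test-statistic sketches further below.

```python
import numpy as np

def market_model_event_study(R, Rm, est, evt):
    """R: (N, T) firm returns, Rm: (T,) market returns; est/evt are index
    arrays for the estimation and event windows."""
    X = np.column_stack([np.ones(len(est)), Rm[est]])
    coef, *_ = np.linalg.lstsq(X, R[:, est].T, rcond=None)  # rows: alpha, beta
    expected = coef[0][:, None] + coef[1][:, None] * Rm[evt]
    AR = R[:, evt] - expected     # abnormal returns, shape (N, L2)
    AAR = AR.mean(axis=0)         # average abnormal return per event day
    CAR = AR.sum(axis=1)          # cumulative abnormal return per firm
    CAAR = CAR.mean()             # cumulative average abnormal return
    return AR, AAR, CAR, CAAR

# Toy demo: 50 firms, 120-day estimation window, 11-day event window.
rng = np.random.default_rng(0)
Rm = rng.normal(0.0, 0.01, 131)
R = 0.0002 + 1.1 * Rm + rng.normal(0.0, 0.02, (50, 131))
AR, AAR, CAR, CAAR = market_model_event_study(
    R, Rm, est=np.arange(120), evt=np.arange(120, 131))
print(round(float(CAAR), 4))
```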
The literature on event-study hypothesis testing covers a wide range of tests and is thus very comprehensive. Generally, significance tests can be classified into parametric and nonparametric tests. Parametric tests assume that the individual firm's abnormal returns are normally distributed, whereas nonparametric tests do not rely on any such assumptions. Applied researchers typically carry out both parametric and nonparametric tests to verify that the research findings are not driven by outliers, which tend to affect the results of parametric tests but not the results of nonparametric tests; for example, see Schipper and Smith (1983). Table 1 provides an overview together with links to the formulas of the different test statistics.
Table 1: Significance tests

| Null hypothesis | Scope | Parametric tests | Nonparametric tests |
| --- | --- | --- | --- |
| $H_0: E(AR) = 0$ | Individual event | AR Test | (none) |
| $H_0: E(AAR) = 0$ | Sample of events | Cross-Sectional Test, Crude Dependence Adjustment Test, Patell Test, Adjusted Patell Test, Standardized Cross-Sectional Test, Adjusted Standardized Cross-Sectional Test, Skewness Corrected Test | Generalized Sign Test, Generalized Rank T Test, Generalized Rank Z Test |
| $H_0: E(CAR) = 0$ | Individual event | CAR t-test | (none) |
| $H_0: E(CAAR) = 0$ | Sample of events | Cross-Sectional Test, Time-Series Standard Deviation Test, Patell Test, Adjusted Patell Test, Standardized Cross-Sectional Test, Adjusted Standardized Cross-Sectional Test, Skewness Corrected Test | Generalized Sign Test, Generalized Rank T Test, Generalized Rank Z Test |
| $H_0: E(BHAR) = 0$ | Individual event | BHAR Test | (none) |
| $H_0: E(ABHAR) = 0$ | Sample of events | ABHAR Test, Skewness Corrected Test | (none) |
Among the most widely used parametric tests are those developed by Patell (1976) and Boehmer, Musumeci and Poulsen (1991), whereas among the most widely used nonparametric tests are the rank-test of Corrado (1989), and the sign-test of Cowan (1992).
Why different test statistics are needed
An informed choice of test statistic should be based on the research setting and the statistical issues pertaining to the observed data. Specifically, event-date clustering poses a problem leading to (i) cross-sectional correlation of abnormal returns and (ii) distortions from event-induced volatility changes. Cross-sectional correlation arises when sample studies focus on (an) event(s) that happened for multiple firms on the same day(s). Event-induced change of volatility, on the other hand, is a phenomenon common to many event types (e.g., M&A transactions) that becomes problematic when events are clustered. As a consequence, both issues impact the standard error which appears in the denominator of a t-test statistic (that is, of the test statistic of a parametric test). If this impact is ignored, the test statistic becomes inflated (in absolute value), leading to liberal inference: If the null hypothesis is true, it will be rejected with probability greater than the nominal significance level of the test; that is, there is an unduly large chance to `find' something in the data, even though nothing happened in reality.
Comparison of test statistics
There have been several attempts to address these statistical issues. Patell (1976, 1979), for example, tried to overcome the t-test's sensitivity to event-induced volatility by standardizing the event window's ARs. He used the dispersion of the estimation interval's ARs to limit the impact of stocks with large return volatilities. Unfortunately, the test can still be liberal (that is, reject true null hypotheses too often), particularly when samples are characterized by non-normal returns, low prices, or illiquidity. Also, the test has been found to be still affected by event-induced volatility changes (Campbell and Wasley, 1993; Cowan and Sergeant, 1996; Maynes and Rumsey, 1993; Kolari and Pynnönen, 2010). Boehmer, Musumeci and Poulsen (1991) resolved this latter issue and developed a test statistic robust against volatility-changing events. Furthermore, the simulation study of Kolari and Pynnönen (2010) indicates an over-rejection of true null hypotheses for both the Patell and the BMP test if cross-sectional correlation is ignored. Kolari and Pynnönen (2010) developed adjusted versions of both test statistics that account for such cross-sectional correlation.
The nonparametric rank test of Corrado and Zivney (1992) (RANK) is based on re-standardized event-window returns and has proven robust against induced volatility and cross-correlation. Sign tests are another type of nonparametric tests. One advantage over parametric t-tests (claimed by the authors of sign tests) is that they are able to also identify small levels of abnormal returns. Moreover, the use of nonparametric sign and rank tests has long been promoted by statisticians for applications that require robustness against non-normally distributed data. Past research (e.g., Fama, 1976) has argued that daily stock returns have distributions that are more fat-tailed (exhibit larger skewness or kurtosis) than normal distributions, which then suggests the use of nonparametric tests.
Several authors have further advanced the sign and rank tests pioneered by Cowan (1992) and Corrado and Zivney (1992). Campbell and Wasley (1993), for example, improved the RANK test by introducing an adjustment to the standard error for longer CARs, creating the Campbell-Wasley test statistic (CUM-RANK). Another nonparametric test is the generalized rank test (GRANK), which seems to have good properties for both shorter and longer CAR windows.
The Cowan (1992) sign test (SIGN) is also used for testing CARs by comparing the proportion of positive ARs close to an event with the proportion of positive ARs from a `normal' (that is, event-free) period. Because this test only uses the sign of the abnormal returns, but not their magnitude, associated event-induced (excess) volatility does not inflate the null-rejection rates of the test; furthermore, the test is robust against asymmetric return distributions.
Overall, when comparing the different test statistics, the relevant literature has come to the following findings and recommendations (see Table 2 for further details):
Parametric tests based on standardized abnormal returns perform better than those based on non-standardized returns.
Generally, nonparametric tests tend to be more powerful than parametric tests.
The generalized rank test (GRANK) is one of the most powerful tests for both shorter and longer CAR windows.
Table 2: Comparison of the main test statistics (1-9 are parametric, 10-15 nonparametric)

1. T Test (no abbreviation in EST results). Sensitive to cross-sectional correlation and volatility changes.
2. Cross-Sectional Test (Abbr.: CSect T).
3. Crude Dependence Adjustment Test (Abbr.: CDA T).
4. Patell Test, Patell (1976) (Abbr.: Patell Z). Robust against the way in which ARs are distributed across the (cumulated) event window; sensitive to cross-sectional correlation and event-induced volatility.
5. Adjusted Patell Test, Kolari and Pynnönen (2010) (Abbr.: Adjusted Patell Z). Same as the Patell test, but additionally accounts for cross-sectional correlation.
6. Standardized Cross-Sectional Test, Boehmer, Musumeci and Poulsen (1991) (Abbr.: StdCSect T). Accounts for event-induced volatility and for serial correlation; sensitive to cross-sectional correlation.
7. Adjusted Standardized Cross-Sectional Test, Kolari and Pynnönen (2010) (Abbr.: Adjusted StdCSect T). Same as the standardized cross-sectional test, but additionally accounts for cross-correlation.
8. Skewness Corrected Test, Hall (1992) (Abbr.: Skewness Corrected T). Corrects the test statistic for potential skewness in the return distribution.
9. Jackknife Test, Giaccotto and Sfiridis (1996) (Abbr.: Jackknife T).
10. Corrado Rank Test, Corrado and Zivney (1992) (Abbr.: Rank Z). Loses power for wider CAR windows (e.g., [-10, 10]).
11. Generalized Rank T Test, Kolari and Pynnönen (2011) (Abbr.: Generalized Rank T). Accounts for cross-correlation of returns and for serial correlation of returns.
12. Generalized Rank Z Test, Kolari and Pynnönen (2011) (Abbr.: Generalized Rank Z). See the Generalized Rank T Test.
13. Sign Test, Cowan (1992) (not available in EST results). Robust against skewness in the return distribution; inferior performance for longer event windows.
14. Cowan Generalized Sign Test, Cowan (1992) (Abbr.: Generalized Sign Z).
15. Wilcoxon Signed-Rank Test, Wilcoxon (1945). Takes into account both the sign and the magnitude of ARs.

Source: These strengths and weaknesses were compiled from Kolari and Pynnönen (2011).
Formulas, acronyms, and decisions rule applicable to test statistics
Let $L_1 = T_1 - T_0 + 1$ denote the estimation-window length, with $T_0$ denoting the 'earliest' day of the estimation window and $T_1$ denoting the 'latest' day of the estimation window; furthermore, let $L_2 = T_2 - T_1$ denote the event-window length, with $T_2$ denoting the 'latest' day of the event window. This notation implies that the estimation window, given by $\{T_0, \ldots, T_1\}$, ends immediately before the event window, given by $\{T_1+1, \ldots, T_2\}$, begins. We will stick to this convention for simplicity in all the formulas below, but note that our methodology also allows for an arbitrary gap between the windows, as specified by the user. Let $N$ denote the sample size (i.e., the number of observations); finally, $S_{AR_i}$ denotes the (sample) standard deviation over the estimation window based on the formula
$$S^2_{AR_i} = \frac{1}{M_i - 2} \sum\limits_{t=T_0}^{T_1}(AR_{i,t})^2$$
Here, $M_{i}$ denotes the number of non-missing (i.e., matched) returns. This formula is based on the market model (where two parameters need to be estimated to compute abnormal returns). For other models, the denominator needs to be changed from $M_i - 2$ to $M_i - k$, where $k$ denotes the number of parameters that need to be estimated to compute abnormal returns; for example, in the constant-expected-return model, we have $k=1$, whereas in the (standard) Fama-French factor model, we have $k=4$ (one constant and three factors).
[1] T test
Our research app provides test statistics for single firms at each point in time $t$. The null hypothesis is $H_0: E(AR_{i, t}) = 0$, and the test statistic is given by
$$t_{AR_{i,t}}=\frac{AR_{i,t}}{S_{AR_i}}, $$
where $S_{AR_i}$ is the standard deviation of the abnormal returns in the estimation window based on
$$S^2_{AR_i} = \frac{1}{M_i-2} \sum\limits_{t=T_0}^{T_1}(AR_{i,t})^2.$$
Second, we provide t-statistics of the cumulative abnormal returns for each firm. The t-statistic for the null $H_0: E(CAR_{i}) = 0$ is defined as
$$t_{CAR}=\frac{CAR_i}{S_{CAR}},$$
$$S^2_{CAR} = L_2 S^2_{AR_i}.$$
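A minimal numpy sketch of these single-firm statistics, assuming the market model (so the variance is scaled by $M - 2$) and no missing returns; the function name is ours:

```python
import numpy as np

def ar_and_car_t_stats(AR_est, AR_evt):
    """Single-firm t-statistics of section [1].
    AR_est: (M,) estimation-window ARs; AR_evt: (L2,) event-window ARs."""
    M, L2 = len(AR_est), len(AR_evt)
    S2_AR = (AR_est ** 2).sum() / (M - 2)      # market-model variant (k = 2)
    t_AR = AR_evt / np.sqrt(S2_AR)             # one statistic per event day
    t_CAR = AR_evt.sum() / np.sqrt(L2 * S2_AR)
    return t_AR, t_CAR

# Example with synthetic data:
rng = np.random.default_rng(0)
print(ar_and_car_t_stats(rng.normal(0, 0.02, 120), rng.normal(0, 0.02, 11))[1])
```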
[2] Cross-Sectional Test (Abbr.: CSect T)
A simple test statistic for testing $H_0: E(AAR) = 0$ is given by
$$t_{AAR_t}=\sqrt{N}\frac{AAR_t}{S_{AAR_t}},$$
where $S_{AAR_t}$ denotes the standard deviation across firms at time $t$ based on:
$$S^2_{AAR_t} =\frac{1}{N-1} \sum\limits_{i=1}^{N}(AR_{i, t} - AAR_t)^2.$$
The test statistic for testing $H_0: E(CAAR) = 0$ is given by
$$t_{CAAR}=\sqrt{N}\frac{CAAR}{S_{CAAR}},$$
where $S_{CAAR}$ denotes the standard deviation of the cumulative abnormal returns across the sample based on:
$$S^2_{CAAR} =\frac{1}{N-1} \sum\limits_{i=1}^{N}(CAR_{i} - CAAR)^2.$$
Brown and Warner (1985) showed that the cross-sectional test is sensitive to event-induced volatility, which can result in low power of the test.
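A corresponding numpy sketch for the cross-sectional statistics (synthetic input of shape (N, L2); this is an illustration, not the code behind our research apps):

```python
import numpy as np

def cross_sectional_tests(AR_evt):
    """AR_evt: (N, L2) event-window ARs. Returns t_AAR (per day) and t_CAAR."""
    N = AR_evt.shape[0]
    t_AAR = np.sqrt(N) * AR_evt.mean(axis=0) / AR_evt.std(axis=0, ddof=1)
    CAR = AR_evt.sum(axis=1)
    t_CAAR = np.sqrt(N) * CAR.mean() / CAR.std(ddof=1)
    return t_AAR, t_CAAR
```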
[3] Crude Dependence Adjustment Test (Abbr.: CDA T)
The crude dependence adjustment test by Brown and Warner (1980, 1985) uses the entire sample for variance estimation. By construction, this time-series dependence test does not account for (possibly) unequal variances across observations. The variance is estimated as:
$$S^2_{AAR} =\frac{1}{M-2} \sum\limits_{t=T_0}^{T_1}(AAR_{t} - \overline{AAR})^2,$$
where $[T_0, T_1]$ denotes the estimation window and
$$\overline{AAR} = \frac{1}{M} \sum\limits_{t=T_0}^{T_1}AAR_{t}.$$
The test statistic for testing $H_0: E(AAR_t) = 0$ is given by $$t_{AAR_t}=\sqrt{N}\frac{AAR_t}{S_{AAR}}.$$
The test statistic for testing $H_0: E(CAAR) = 0$ is given by
$$t_{CAAR}=\frac{CAAR}{\sqrt{T_2 - T_1}S_{AAR}}.$$
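A numpy sketch mirroring the two CDA formulas above, assuming a balanced panel with no missing returns (function name is ours):

```python
import numpy as np

def cda_tests(AR_est, AR_evt):
    """Crude dependence adjustment test of section [3].
    AR_est: (N, M) estimation-window ARs; AR_evt: (N, L2) event-window ARs."""
    N, M = AR_est.shape
    L2 = AR_evt.shape[1]
    AAR_est = AR_est.mean(axis=0)                       # portfolio AAR series
    S_AAR = np.sqrt(((AAR_est - AAR_est.mean()) ** 2).sum() / (M - 2))
    AAR_evt = AR_evt.mean(axis=0)
    t_AAR = np.sqrt(N) * AAR_evt / S_AAR                # per event day
    t_CAAR = AAR_evt.sum() / (np.sqrt(L2) * S_AAR)
    return t_AAR, t_CAAR
```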
[4] Patell or Standardized Residual Test (Abbr.: Patell Z)
The Patell test is a widely used test statistic in event studies. In a first step, Patell (1976, 1979) suggested standardizing each $AR_{i,t}$ by the forecast-error-corrected standard deviation before calculating the test statistic.
\begin{equation}SAR_{i,t} = \frac{AR_{i,t}}{S_{AR_{i,t}}} \label{eq:sar}\end{equation}
As the event-window abnormal returns are out-of-sample predictions, Patell adjusts the standard error by the forecast-error:
\begin{equation}S^2_{AR_{i,t}} = S^2_{AR_i} \left(1+\frac{1}{M_i}+\frac{(R_{m,t}-\overline{R}_{m})^2} {\sum\limits_{t=T_0}^{T_1}(R_{m,t}-\overline{R}_{m})^2}\right)\label{EQ:FESD}\end{equation}
with $\overline{R}_{m}$ denoting the average of the market returns in the estimation window.
The test statistic for testing $H_0: E(AAR) = 0$ is given by
$$z_{Patell, t} = \frac{ASAR_t}{S_{ASAR_t}},$$
where $ASAR_t$ denotes the sum of the standardized abnormal returns over the sample
$$ASAR_t = \sum\limits_{i=1}^N SAR_{i,t},$$
with expectation zero and variance
$$S^2_{ASAR_t} = \sum\limits_{i=1}^N \frac{M_i-2}{M_i-4}$$
under the null.
The test statistic for testing $H_0: E(CAAR) = 0$ is given by $$z_{Patell}=\frac{1}{\sqrt{N}}\sum\limits_{i=1}^{N}\frac{CSAR_i}{S_{CSAR_i}},$$
with $CSAR$ denoting the cumulative standardized abnormal returns
$$CSAR_{i} = \sum\limits_{t=T_1+1}^{T_2} SAR_{i,t}$$
and variance $$S^2_{CSAR_i} = L_2\frac{M_i-2}{M_i-4}.$$
Under the assumption of cross-sectional independence and some other conditions (Patell, 1976), $z_{Patell}$ has a limiting standard normal distribution under the null.
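A numpy sketch of the Patell z for $H_0: E(CAAR) = 0$, assuming a common estimation-window length $M$ for all firms and no missing returns (the function name is ours):

```python
import numpy as np

def patell_z(AR_est, AR_evt, Rm_est, Rm_evt):
    """AR_est: (N, M), AR_evt: (N, L2); Rm_est/Rm_evt: market returns in the
    estimation and event windows."""
    N, M = AR_est.shape
    L2 = AR_evt.shape[1]
    S2_AR = (AR_est ** 2).sum(axis=1) / (M - 2)               # (N,)
    Rm_bar = Rm_est.mean()
    fe = 1 + 1 / M + (Rm_evt - Rm_bar) ** 2 / ((Rm_est - Rm_bar) ** 2).sum()
    SAR = AR_evt / np.sqrt(S2_AR[:, None] * fe)               # standardized ARs
    CSAR = SAR.sum(axis=1)                                    # (N,)
    S_CSAR = np.sqrt(L2 * (M - 2) / (M - 4))
    return (CSAR / S_CSAR).sum() / np.sqrt(N)
```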
[5] Adjusted Patell or Standardized Residual Test (Abbr.: Adjusted Patell Z)
Kolari and Pynnönen (2010) propose a modification of the Patell test to account for cross-correlation of the abnormal returns. Using the standardized abnormal returns ($SAR_{i,t}$) defined above in Section [4], and defining $\overline r$ as the average of the sample cross-correlations of the estimation-period abnormal returns, the test statistic for $H_0: E(AAR) = 0$ is given by
$$z_{Patell, t}^{adj}=z_{Patell, t} \sqrt{\frac{1}{1 + (N - 1) \overline r}},$$
where $z_{Patell, t}$ denotes the original (unadjusted) Patell test statistic. It is easily seen that if $\overline r$ is zero, the adjusted test statistic reduces to the original Patell test statistic. Assuming the square-root rule holds for the standard deviation of different return periods, this test can also be used when considering cumulated abnormal returns ($H_0: E(CAAR) = 0$):
$$z_{Patell}^{adj}=z_{Patell} \sqrt{\frac{1}{1 + (N - 1) \overline r}}.$$
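A small sketch of the adjustment, with $\overline r$ estimated as the average pairwise correlation of the estimation-window abnormal returns; the same helper also covers the adjusted BMP factor of section [7] (function names are ours):

```python
import numpy as np

def mean_cross_correlation(AR_est):
    """Average pairwise correlation of estimation-window ARs across firms."""
    C = np.corrcoef(AR_est)                   # (N, N) firm-by-firm correlations
    N = C.shape[0]
    return (C.sum() - N) / (N * (N - 1))

def kolari_pynnonen_adjust(z, N, r_bar, test='patell'):
    """Scale an unadjusted z; 'patell' uses 1/(1+(N-1)r), 'bmp' (section [7])
    uses (1-r)/(1+(N-1)r)."""
    num = (1 - r_bar) if test == 'bmp' else 1.0
    return z * np.sqrt(num / (1 + (N - 1) * r_bar))
```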
[6] Standardized Cross-Sectional or BMP Test (Abbr.: StdCSect T)
Similarly, Boehmer, Musumeci and Poulsen (1991) proposed a standardized cross-sectional method that is robust against any (additional) variance induced by the event. The test statistic for day $t$ in the event window ($H_0: E(AAR_t) = 0$) is given by
$$z_{BMP, t}= \frac{ASAR_t}{\sqrt{N}S_{ASAR_t}},$$
with $ASAR_t$ defined as for the Patell test [4], and with standard deviation based on
$$S^2_{ASAR_t} = \frac{1}{N-1}\sum\limits_{i=1}^{N}\left(SAR_{i, t} - \frac{1}{N} \sum\limits_{l=1}^N SAR_{l, t} \right)^2.$$
Furthermore, the EST API provides the test statistic for testing $H_0: E(CAAR) = 0$ given by
$$z_{BMP}=\sqrt{N}\frac{\overline{SCAR}}{S_{\overline{SCAR}}},$$
where $\overline{SCAR}$ denotes the averaged standardized cumulated abnormal returns across the $N$ firms, with standard deviation based on
$$S^2_{\overline{SCAR}} = \frac{1}{N-1} \sum\limits_{i=1}^{N} \left(SCAR_i - \overline{SCAR}\right)^2,$$
$$\overline{SCAR} = \frac{1}{N}\sum\limits_{i=1}^{N}SCAR_i$$
with $SCAR_i = \frac{CAR_i}{S_{CAR_i}}$ and $S_{CAR_i}$ denoting the forecast-error-corrected standard deviation from Mikkelson and Partch (1988). The Mikkelson-and-Partch correction adjusts each firm's test statistic for serial correlation in the returns. The correction terms are:
Market Model:
$$S^2_{CAR_i} = S_{AR_i}^2\left(L_i + \frac{L^2_i}{M_i} + \frac{\left(\sum\limits_{t=T_1+1}^{T_2}(R_{m,t}-\overline{R}_{m})\right)^2} {\sum\limits_{t=T_0}^{T_1}(R_{m,t}-\overline{R}_{m})^2}\right)$$
Comparison Period Mean Adjusted Model:
$$S^2_{CAR_i} = S_{AR_i}^2\left(L_i + \frac{L^2_i}{M_i}\right)$$
Market Adjusted Model:
$$S^2_{CAR_i} = S_{AR_i}^2L_i,$$
where $L_i$ denotes the number of non-missing returns in the event window and $M_i$ denotes the number of non-missing returns in the estimation window for firm $i$. Finally, $\overline{R}_{m}$ denotes the average of the market returns in the estimation window; for example, see Patell Test.
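A numpy sketch of the BMP statistic for $H_0: E(CAAR) = 0$ with the market-model correction term, again assuming a balanced panel with no missing returns (function name is ours):

```python
import numpy as np

def bmp_z(AR_est, AR_evt, Rm_est, Rm_evt):
    """AR_est: (N, M), AR_evt: (N, L); Rm_est/Rm_evt: market returns in the
    estimation and event windows (market-model correction)."""
    N, M = AR_est.shape
    L = AR_evt.shape[1]
    S2_AR = (AR_est ** 2).sum(axis=1) / (M - 2)
    Rm_bar = Rm_est.mean()
    corr = (L + L ** 2 / M
            + (Rm_evt - Rm_bar).sum() ** 2 / ((Rm_est - Rm_bar) ** 2).sum())
    SCAR = AR_evt.sum(axis=1) / np.sqrt(S2_AR * corr)   # standardized CARs
    return np.sqrt(N) * SCAR.mean() / SCAR.std(ddof=1)
```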
[7] Adjusted Standardized Cross-Sectional or Adjusted BMP Test (Abbr.: Adjusted StdCSect T)
Kolari and Pynnönen (2010) proposed a modification of the BMP test to account for cross-correlation of the abnormal returns. Using the standardized abnormal returns ($SAR_{i,t}$) defined as in the previous section, and defining $\overline r$ as the average of the sample cross-correlations of the estimation-period abnormal returns, the test statistic of the adjusted BMP test for $H_0: E(AAR) = 0$ is
$$z_{BMP, t}^{adj}=z_{BMP, t} \sqrt{\frac{1- \overline r}{1 + (N - 1) \overline r}},$$
where $z_{BMP, t}$ denotes the original (unadjusted) BMP test statistic. It is easily seen that if $\overline r$ is zero, the adjusted test statistic reduces to the original BMP test statistic. Assuming the square-root rule holds for the standard deviation of different return periods, this test can also be used when considering cumulated abnormal returns ($H_0: E(CAAR) = 0$):
$$z_{BMP}=z_{BMP} \sqrt{\frac{1- \overline r}{1 + (N - 1) \overline r}}.$$
[8] Skewness-Corrected Test (Abbr.: Skewness Corrected T)
The skewness-adjusted t-test, introduced by Hall (1992), corrects the cross-sectional t-test for a (possibly) skewed abnormal-return distribution. This test is applicable to the averaged abnormal return ($H_0: E(AAR) = 0$), the cumulative averaged abnormal return ($H_0: E(CAAR) = 0$), and the averaged buy-and-hold abnormal return ($H_0: E(ABHAR) = 0$). In what follows, we will focus on cumulative averaged abnormal returns. First, recall the (unbiased) cross-sectional sample variance:
$$S^2_{CAAR} = \frac{1}{N-1} \sum\limits_{i=1}^{N}(CAR_i - CAAR)^2.$$
Next, the (unbiased) sample skewness is given by:
$$\gamma = \frac{N}{(N-2)(N-1)} \sum\limits_{i=1}^{N}(CAR_i - CAAR)^3S^{-3}_{CAAR} .$$
Finally, let
$$S = \frac{CAAR}{S_{CAAR}}.$$
Then the skewness-adjusted test statistic for CAAR is given by
$$t_{skew} = \sqrt{N}\left(S + \frac{1}{3}\gamma S^2 + \frac{1}{27}\gamma^2S^3 + \frac{1}{6N}\gamma\right),$$
which is asymptotically standard normal distributed under the null. For a further discussion on skewness transformation we refer to Hall (1992) and for further discussion on unbiased estimation of the second and third moments we refer to Cramer (1961) or Rimoldini (2013).
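A compact Python sketch of this statistic (our own function name; `car` is assumed to be a length-$N$ array of firm-level CARs) could read:

```python
import numpy as np

def skewness_adjusted_t(car):
    """Hall (1992) skewness-adjusted t-statistic for H0: E(CAAR) = 0."""
    n = car.size
    caar = car.mean()
    s = car.std(ddof=1)                    # unbiased cross-sectional SD
    gamma = n / ((n - 2) * (n - 1)) * np.sum((car - caar) ** 3) / s ** 3
    S = caar / s
    return np.sqrt(n) * (S + gamma * S ** 2 / 3
                         + gamma ** 2 * S ** 3 / 27
                         + gamma / (6 * n))
```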
[9] Jackknife Test (Abbr.: Jackknife T)
This test will be added in a future version.
[10] Corrado Rank Test (Abbr.: Rank Z)
In a first step, Corrado's (1989) rank test transforms abnormal returns into ranks. Ranking is done over all abnormal returns of both the event and the estimation period. If ranks are tied, the midrank is used. To adjust for missing values, Corrado and Zivney (1992) suggested standardizing the ranks by one plus the total number of non-missing values:
$$K_{i, t}=\frac{rank(AR_{i, t})}{1 + M_i + L_i},$$
where $M_i$ denotes the number of non-missing returns in the estimation window and $L_i$ the number of non-missing (i.e., matched) returns in the event window. The rank statistic for testing on a single day ($H_0: E(AAR) = 0$) is then given by
$$t_{rank, t} = \frac{\overline{K}_t - 0.5}{S_{\overline{K}}},$$
where $\overline{K}_t = \frac{1}{N_t}\sum\limits_{i=1}^{N_t}K_{i, t}$, $N_t$ denotes the number of non-missing returns across firms and
$$S^2_{\overline{K}} = \frac{1}{L_1 + L_2} \sum\limits_{t=T_0}^{T_2} \frac{N_t}{N}\left(\overline{K}_t - 0.5 \right)^2$$.
When analyzing a multi-day event period, Campbell and Wasley (1993) defined the rank test considering the sum of the mean excess rank for the event window as follows ($H_0: E(CAAR) = 0$):
$$t_{rank} =\sqrt{L_2} \left(\frac{\overline{K}_{T_1, T_2} - 0.5}{S_{\overline{K}}}\right),$$
where $\overline{K}_{T_1, T_2} = \frac{1}{L_2} \sum\limits_{t=T_1 + 1}^{T_2}\overline{K}_t$ denotes the mean rank across firms and time in the event window. By adjusting the last day in the event window $T_2$, one can obtain a series of test statistics as defined by Campbell and Wasley (1993).
Note 1: The adjustment for event-induced variance used by Campbell and Wasley (1993) is omitted here and may be implemented in a future version. As an alternative for such applications, we recommend the GRANK-T or GRANK-Z test.
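For intuition, a minimal sketch of the single-day statistic (assuming no missing values, so $1 + M_i + L_i$ equals the combined window length plus one for every firm; the function name is ours) might look like:

```python
import numpy as np
from scipy.stats import rankdata

def corrado_rank_t(ar, event_col):
    """Corrado (1989) rank statistic for one event day (no missing values).

    ar: (N, T) abnormal returns over the combined estimation + event window.
    event_col: column index of the event day to test.
    """
    n, T = ar.shape
    k = rankdata(ar, axis=1) / (T + 1)           # standardized midranks per firm
    k_bar = k.mean(axis=0)                       # mean standardized rank per day
    s_k = np.sqrt(np.mean((k_bar - 0.5) ** 2))   # over all T days
    return (k_bar[event_col] - 0.5) / s_k
```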
[11] Generalized Rank T Test (Abbr.: Generalized Rank T) (this section is under revision)
In the following steps we assume, for the sake of simplicity, that there are no missing values in either the estimation or the event window for any firm. In order to account for possible event-induced volatility, the GRANK test squeezes the whole event window into a single observation, the so-called 'cumulative event day'. First, define the standardized cumulative abnormal returns of firm $i$ in the event window as
$$SCAR_{i}=\frac{CAR_{i}}{S_{CAR_{i}}},$$
where $S_{CAR_{i}}$ denotes the standard deviation of the prediction errors in the cumulative abnormal returns of firm $i$, based on
$$S^2_{CAR_{i}} = S^2_{AR_i} \left(L_2+\frac{L_2^2}{L_1}+\frac{\sum\limits_{t=T_1+1}^{T_2}(R_{m,t}-\overline{R}_{m})^2} {\sum\limits_{t=T_0}^{T_1}(R_{m,t}-\overline{R}_{m})^2}\right),$$
where $L_1$ and $L_2$ denote the lengths of the estimation and the event window, respectively (this is the market-model version; with no missing values, $L_i = L_2$ and $M_i = L_1$ for every firm).
Under the null, the standardized CAR value $SCAR_{i}$ has an expectation of zero and approximately unit variance. To account for event-induced volatility, $SCAR_{i}$ is re-standardized by the cross-sectional standard deviation:
$$SCAR^*_{i}=\frac{SCAR_{i}}{S_{SCAR}},$$
$$S^2_{SCAR}=\frac{1}{N-1} \sum\limits_{i=1}^N \left(SCAR_{i} - \overline{SCAR} \right)^2 \quad \text{ and } \quad \overline{SCAR} = \frac{1}{N} \sum\limits_{i=1}^N SCAR_{i}.$$
By construction, $SCAR^*_{i}$ has again an expectation of zero and approximately unit variance under the null. Now, let us define the generalized standardized abnormal returns ($GSAR$):
$$GSAR_{i, t} = \begin{cases} SCAR^*_i & \text{for } t \text{ in the event window} \\ SAR_{i, t} & \text{for } t \text{ in the estimation window.} \end{cases}$$
The whole event window is thus treated as a single time point (the cumulative event day); at all other time points, $GSAR$ equals the standardized abnormal return. On these $L_1 + 1$ points, define the standardized ranks:
$$K_{i, t}=\frac{rank(GSAR_{i, t})}{L_1 + 2}-0.5$$
Then the generalized rank t-statistic for testing $H_0: E(CAAR) = 0$ is defined as:
$$t_{grank}=Z\left(\frac{L_1 - 1}{L_1 - Z^2}\right)^{1/2}$$
$$Z=\frac{\overline{K_{0}}}{S_{\overline{K}}},$$
where $t=0$ indicates the cumulative event day and
$$S^2_{\overline{K}}=\frac{1}{L_1 + 1}\sum\limits_{t \in CW}\frac{N_t}{N}\overline{K}_t^2$$
with CW representing the combined window consisting of the estimation window and the cumulative event day, and
$$\overline{K}_t=\frac{1}{N_t}\sum\limits_{i=1}^{N_t}K_{i, t}.$$
$t_{grank}$ is t-distributed with $L_1 - 1$ degrees of freedom under the null.
Formulas for testing on a single day ($H_0: E(AAR) = 0$) are straightforward modifications of the ones shown above.
[12] Generalized Rank Z Test (Abbr.: Generalized Rank Z)
Using standard results for rank statistics, the standard deviation of $\overline{K_{0}}$ is based on
$$S^2_{\overline{K_{0}}} =\frac{L_1}{12N(L_1 + 2)}.$$
By this calculation, the following test statistic can be defined
$$z_{grank} = \frac{ \overline{K_{0}} }{ S_{\overline{ K_{0} } } } = \sqrt{ \frac{12N(L_1+ 2)}{L_1}} \overline{K_{0}},$$
which, under the null hypothesis, converges quickly to the standard normal distribution as the number of firms $N$ increases.
[13] Sign Test
This sign test, proposed by Cowan (1991), builds on the ratio $\hat{p}$ of positive cumulative abnormal returns present in the event window. Under the null hypothesis, this ratio should not differ significantly from 0.5:
$$t_{sign}= \sqrt{N}\left(\frac{\hat{p}-0.5}{\sqrt{0.5(1-0.5)}}\right)$$
[14] Cowan Generalized Sign Test (Abbr.: Generalized Sign Z)
Under the null hypothesis, the number of stocks with positive cumulative abnormal returns ($CAR$) is expected to be consistent with the fraction $\hat{p}$ of positive $CAR$ from the estimation period. When the number of positive $CAR$ is significantly higher than the number expected from the estimation-period fraction, the null hypothesis is rejected.
The estimation-period fraction is given by
$$\hat{p}=\frac{1}{N}\sum\limits_{i=1}^{N}\frac{1}{L_1}\sum\limits_{t=T_0}^{T_1}\varphi_{i, t},$$
where $\varphi_{i,t}$ equals $1$ if the sign is positive and equals $0$ otherwise. The generalized sign test statistic ($H_0: E(CAAR) = 0$) is given by
$$z_{gsign}=\frac{(w-N\hat{p})}{\sqrt{N\hat{p}(1-\hat{p})}},$$
where $w$ is the number of stocks with positive cumulative abnormal returns during the event period. To compute the p-value, a normal approximation to the binomial distribution with parameters $\hat{p}$ and $N$ is used.
Note 1: This test is based on Cowan, A. R. (1992).
Note 2: the EST API provides GSIGN test statistics also for single days ($H_0: E(AAR) = 0$) in the event time period.
Note 3: The GSIGN test is based on the traditional SIGN test where under the null hypothesis a binomial distribution Bin(0.5, $N$) is used for the distribution of the test statistics.
Note 4: If $N$ is small, the normal approximation is inaccurate for calculating the p-value; in such cases we recommend using the binomial distribution for calculating the p-value.
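A small sketch combining Notes 2–4 (our own function; the exact EST API implementation may differ):

```python
import numpy as np
from scipy.stats import norm, binomtest

def generalized_sign_test(ar_est, car_event):
    """Cowan (1992) generalized sign test for H0: E(CAAR) = 0.

    ar_est:    (N, L1) estimation-window abnormal returns.
    car_event: length-N array of event-window CARs.
    """
    n = car_event.size
    p_hat = (ar_est > 0).mean()          # fraction of positive estimation-window ARs
    w = int((car_event > 0).sum())       # firms with positive event-window CAR
    z = (w - n * p_hat) / np.sqrt(n * p_hat * (1 - p_hat))
    p_normal = 2 * norm.sf(abs(z))       # normal approximation (large N)
    p_exact = binomtest(w, n, p_hat).pvalue  # preferred for small N (Note 4)
    return z, p_normal, p_exact
```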
[15] Wilcoxon Test (Abbr.: Wilcoxon Z)
List of geometers
A geometer is a mathematician whose area of study is geometry.
Some notable geometers and their main fields of work, chronologically listed, are:
1000 BCE to 1 BCE
Further information: History of geometry
• Baudhayana (fl. c. 800 BC) – Euclidean geometry, geometric algebra
• Manava (c. 750 BC–690 BC) – Euclidean geometry
• Thales of Miletus (c. 624 BC – c. 546 BC) – Euclidean geometry
• Pythagoras (c. 570 BC – c. 495 BC) – Euclidean geometry, Pythagorean theorem
• Zeno of Elea (c. 490 BC – c. 430 BC) – Euclidean geometry
• Hippocrates of Chios (born c. 470 – 410 BC) – first systematically organized Stoicheia – Elements (geometry textbook)
• Mozi (c. 468 BC – c. 391 BC)
• Plato (427–347 BC)
• Theaetetus (c. 417 BC – 369 BC)
• Autolycus of Pitane (360–c. 290 BC) – astronomy, spherical geometry
• Euclid (fl. 300 BC) – Elements, Euclidean geometry (sometimes called the "father of geometry")
• Apollonius of Perga (c. 262 BC – c. 190 BC) – Euclidean geometry, conic sections
• Archimedes (c. 287 BC – c. 212 BC) – Euclidean geometry
• Eratosthenes (c. 276 BC – c. 195/194 BC) – Euclidean geometry
• Katyayana (c. 3rd century BC) – Euclidean geometry
1–1300 AD
• Hero of Alexandria (c. AD 10–70) – Euclidean geometry
• Pappus of Alexandria (c. AD 290–c. 350) – Euclidean geometry, projective geometry
• Hypatia of Alexandria (c. AD 370–c. 415) – Euclidean geometry
• Brahmagupta (597–668) – Euclidean geometry, cyclic quadrilaterals
• Vergilius of Salzburg (c.700–784) – Irish bishop of Aghaboe, Ossory and later Salzburg, Austria; antipodes, and astronomy
• Al-Abbās ibn Said al-Jawharī (c. 800–c. 860)
• Thabit ibn Qurra (826–901) – analytic geometry, non-Euclidean geometry, conic sections
• Abu'l-Wáfa (940–998) – spherical geometry, spherical triangles
• Alhazen (965–c. 1040)
• Omar Khayyam (1048–1131) – algebraic geometry, conic sections
• Ibn Maḍāʾ (1116–1196)
1301–1800 AD
• Piero della Francesca (1415–1492)
• Leonardo da Vinci (1452–1519) – Euclidean geometry
• Jyesthadeva (c. 1500 – c. 1610) – Euclidean geometry, cyclic quadrilaterals
• Marin Getaldić (1568–1626)
• Jacques-François Le Poivre (1652–1710), projective geometry
• Johannes Kepler (1571–1630) – (used geometric ideas in astronomical work)
• Edmund Gunter (1581–1626)
• Girard Desargues (1591–1661) – projective geometry; Desargues' theorem
• René Descartes (1596–1650) – invented the methodology of analytic geometry, also called Cartesian geometry after him
• Pierre de Fermat (1607–1665) – analytic geometry
• Blaise Pascal (1623–1662) – projective geometry
• Christiaan Huygens (1629-1695) - evolute
• Giordano Vitale (1633–1711)
• Philippe de La Hire (1640–1718) – projective geometry
• Isaac Newton (1642–1727) – 3rd-degree algebraic curve
• Giovanni Ceva (1647–1734) – Euclidean geometry
• Johann Jacob Heber (1666–1727) – surveyor and geometer
• Giovanni Gerolamo Saccheri (1667–1733) – non-Euclidean geometry
• Leonhard Euler (1707–1783)
• Tobias Mayer (1723–1762)
• Johann Heinrich Lambert (1728–1777) – non-Euclidean geometry
• Gaspard Monge (1746–1818) – descriptive geometry
• John Playfair (1748–1819) – Euclidean geometry
• Lazare Nicolas Marguerite Carnot (1753–1823) – projective geometry
• Joseph Diaz Gergonne (1771–1859) – projective geometry; Gergonne point
• Carl Friedrich Gauss (1777–1855) – Theorema Egregium
• Louis Poinsot (1777–1859)
• Siméon Denis Poisson (1781–1840)
• Jean-Victor Poncelet (1788–1867) – projective geometry
• Augustin-Louis Cauchy (1789 – 1857)
• August Ferdinand Möbius (1790–1868) – Euclidean geometry
• Nikolai Ivanovich Lobachevsky (1792–1856) – hyperbolic geometry, a non-Euclidean geometry
• Germinal Dandelin (1794–1847) – Dandelin spheres in conic sections
• Jakob Steiner (1796–1863) – champion of synthetic geometry methodology, projective geometry, Euclidean geometry
1801–1900 AD
• Karl Wilhelm Feuerbach (1800–1834) – Euclidean geometry
• Julius Plücker (1801–1868)
• János Bolyai (1802–1860) – hyperbolic geometry, a non-Euclidean geometry
• Christian Heinrich von Nagel (1803–1882) – Euclidean geometry
• Johann Benedict Listing (1808–1882) – topology
• Hermann Günther Grassmann (1809–1877) – exterior algebra
• Ludwig Otto Hesse (1811–1874) – algebraic invariants and geometry
• Ludwig Schlafli (1814–1895) – Regular 4-polytope
• Pierre Ossian Bonnet (1819–1892) – differential geometry
• Arthur Cayley (1821–1895)
• Joseph Bertrand (1822–1900)
• Delfino Codazzi (1824–1873) – differential geometry
• Bernhard Riemann (1826–1866) – elliptic geometry (a non-Euclidean geometry) and Riemannian geometry
• Julius Wilhelm Richard Dedekind (1831–1916)
• Ludwig Burmester (1840–1927) – theory of linkages
• Edmund Hess (1843–1903)
• Albert Victor Bäcklund (1845–1922)
• Max Noether (1844–1921) – algebraic geometry
• Henri Brocard (1845–1922) – Brocard points
• William Kingdon Clifford (1845–1879) – geometric algebra
• Pieter Hendrik Schoute (1846–1923)
• Felix Klein (1849–1925)
• Sofia Vasilyevna Kovalevskaya (1850–1891)
• Evgraf Fedorov (1853–1919)
• Henri Poincaré (1854–1912)
• Luigi Bianchi (1856–1928) – differential geometry
• Alicia Boole Stott (1860–1940)
• Hermann Minkowski (1864–1909) – non-Euclidean geometry
• Henry Frederick Baker (1866–1956) – algebraic geometry
• Élie Cartan (1869–1951)
• Dmitri Egorov (1869–1931) – differential geometry
• Veniamin Kagan (1869–1953)
• Raoul Bricard (1870–1944) – descriptive geometry
• Ernst Steinitz (1871–1928) – Steinitz's theorem
• Marcel Grossmann (1878–1936)
• Oswald Veblen (1880–1960) – projective geometry, differential geometry
• Emmy Noether (1882–1935) – algebraic topology
• Harry Clinton Gossard (1884–1954)
• Arthur Rosenthal (1887–1959)
• Helmut Hasse (1898–1979) – algebraic geometry
1901–present
• William Vallance Douglas Hodge (1903–1975)
• Patrick du Val (1903–1987)
• Beniamino Segre (1903–1977) – combinatorial geometry
• J. C. P. Miller (1906–1981)
• André Weil (1906–1998) – Algebraic geometry
• H. S. M. Coxeter (1907–2003) – theory of polytopes, non-Euclidean geometry, projective geometry
• J. A. Todd (1908–1994)
• Daniel Pedoe (1910–1998)
• Shiing-Shen Chern (1911–2004) – differential geometry
• Ernst Witt (1911–1991)
• Rafael Artzy (1912–2006)
• Aleksandr Danilovich Aleksandrov (1912–1999)
• László Fejes Tóth (1915–2005)
• Edwin Evariste Moise (1918–1998)
• Aleksei Pogorelov (1919–2002) – differential geometry
• Magnus Wenninger (1919–2017) – polyhedron models
• Jean-Louis Koszul (1921–2018)
• Isaak Yaglom (1921–1988)
• Benoit Mandelbrot (1924–2010) – fractal geometry
• Katsumi Nomizu (1924–2008) – affine differential geometry
• Michael S. Longuet-Higgins (1925–2016)
• John Leech (1926–1992)
• Alexander Grothendieck (1928–2014) – algebraic geometry
• Branko Grünbaum (1929–2018) – discrete geometry
• Michael Atiyah (1929–2019)
• Lev Semenovich Pontryagin (1908–1988)
• Geoffrey Colin Shephard (1927–2016)
• Norman W. Johnson (1930–2017)
• John Milnor (1931–)
• Roger Penrose (1931–)
• Yuri Manin (1937–2023) – algebraic geometry and diophantine geometry
• Vladimir Arnold (1937–2010) – algebraic geometry
• Ernest Vinberg (1937–2020)
• J. H. Conway (1937–2020) – sphere packing, recreational geometry
• Robin Hartshorne (1938–) – geometry, algebraic geometry
• Phillip Griffiths (1938–) – algebraic geometry, differential geometry
• Enrico Bombieri (1940–) – algebraic geometry
• Robert Williams (1942–)
• Peter McMullen (1942–)
• Richard S. Hamilton (1943–) – differential geometry, Ricci flow, Poincaré conjecture
• Mikhail Gromov (1943–)
• Rudy Rucker (1946–)
• William Thurston (1946–2012)
• Shing-Tung Yau (1949–)
• Michael Freedman (1951–)
• Egon Schulte (1955–) – polytopes
• George W. Hart (1955–) – sculptor
• Károly Bezdek (1955–) – discrete geometry, sphere packing, Euclidean geometry, non-Euclidean geometry
• Simon Donaldson (1957–)
• Kenji Fukaya (1959–) – symplectic geometry
• Oh Yong-Geun (1961–)
• Toshiyuki Kobayashi (1962–)
• Hiraku Nakajima (1962–) – representation theory and geometry
• Hwang Jun-Muk (1963–) – algebraic geometry, differential geometry
• Grigori Perelman (1966–) – Poincaré conjecture
• Denis Auroux (1977–)
• Maryam Mirzakhani (1977–2017)
Geometers in art
God as architect of the world, 1220–1230, from Bible moralisée
Kepler's Platonic solid model of planetary spacing in the Solar System from Mysterium Cosmographicum (1596)
The Ancient of Days, 1794, by William Blake, with the compass as a symbol for divine order
Newton (1795), by William Blake; here, Newton is depicted critically as a "divine geometer".[2]
References
1. Bill Casselman. "One of the Oldest Extant Diagrams from Euclid". University of British Columbia. Retrieved 2008-09-26.
2. "Newton, object 1 (Butlin 306) "Newton"". William Blake Archive. September 25, 2013.
# Chapter 1
## Units and Vectors: Tools for Physics
### The Important Stuff
#### The SI System
Physics is based on measurement. Measurements are made by comparisons to well-defined standards which define the units for our measurements.
The SI system (popularly known as the metric system) is the one used in physics. Its unit of length is the meter, its unit of time is the second and its unit of mass is the kilogram. Other quantities in physics are derived from these. For example the unit of energy is the joule, defined by $1 \mathrm{~J}=1 \frac{\mathrm{kg} \cdot \mathrm{m}^{2}}{\mathrm{~s}^{2}}$.
As a convenience in using the SI system we can associate prefixes with the basic units to represent powers of 10 . The most commonly used prefixes are given here:
| Factor | Prefix | Symbol |
| :---: | :---: | :---: |
| $10^{-12}$ | pico- | $\mathrm{p}$ |
| $10^{-9}$ | nano- | $\mathrm{n}$ |
| $10^{-6}$ | micro- | $\mu$ |
| $10^{-3}$ | milli- | $\mathrm{m}$ |
| $10^{-2}$ | centi- | $\mathrm{c}$ |
| $10^{3}$ | kilo- | $\mathrm{k}$ |
| $10^{6}$ | mega- | $\mathrm{M}$ |
| $10^{9}$ | giga- | $\mathrm{G}$ |
Other basic units commonly used in physics are:
Time : $\quad 1$ minute $=60 \mathrm{~s} \quad 1$ hour $=60 \mathrm{~min} \quad$ etc.
Mass : $\quad 1$ atomic mass unit $=1 \mathrm{u}=1.6605 \times 10^{-27} \mathrm{~kg}$
#### Changing Units
In all of our mathematical operations we must always write down the units and we always treat the unit symbols as multiplicative factors. For example, if we multiply $3.0 \mathrm{~kg}$ by $2.0 \frac{\mathrm{m}}{\mathrm{s}}$ we get
$$
(3.0 \mathrm{~kg}) \cdot\left(2.0 \frac{\mathrm{m}}{\mathrm{s}}\right)=6.0 \frac{\mathrm{kg} \cdot \mathrm{m}}{\mathrm{s}}
$$
We use the same idea in changing the units in which some physical quantity is expressed. We can multiply the original quantity by a conversion factor, i.e. a ratio of values for which the numerator is the same thing as the denominator. The conversion factor is then equal to 1, and so we do not change the original quantity when we multiply by the conversion factor.
Examples of conversion factors are:
$$
\left(\frac{1 \mathrm{~min}}{60 \mathrm{~s}}\right) \quad\left(\frac{100 \mathrm{~cm}}{1 \mathrm{~m}}\right) \quad\left(\frac{1 \mathrm{yr}}{365.25 \text { day }}\right) \quad\left(\frac{1 \mathrm{~m}}{3.28 \mathrm{ft}}\right)
$$
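The same bookkeeping is easy to mimic in a short Python sketch (purely illustrative, not part of the text), where each conversion factor is just a ratio equal to 1:

```python
# convert 1472 ft to meters: multiply by (1 m / 3.281 ft), which cancels "ft"
feet = 1472.0
meters = feet * (1.0 / 3.281)
print(round(meters, 1))      # 448.6

# convert 42 min to seconds: multiply by (60 s / 1 min), which cancels "min"
minutes = 42.0
seconds = minutes * (60.0 / 1.0)
print(seconds)               # 2520.0
```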
#### Density
A quantity which will be encountered in your study of liquids and solids is the density of a sample. It is usually denoted by $\rho$ and is defined as the ratio of mass to volume:
$$
\rho=\frac{m}{V}
$$
The SI units of density are $\frac{\mathrm{kg}}{\mathrm{m}^{3}}$ but you often see it expressed in $\frac{\mathrm{g}}{\mathrm{cm}^{3}}$.
#### Dimensional Analysis
Every equation that we use in physics must have the same type of units on both sides of the equals sign. Our basic unit types (dimensions) are length $(L)$, time $(T)$ and mass $(M)$. When we do dimensional analysis we focus on the units of a physics equation without worrying about the numerical values.
#### Vectors; Vector Addition
Many of the quantities we encounter in physics have both magnitude ("how much") and direction. These are vector quantities.
We can represent vectors graphically as arrows and then the sum of two vectors is found (graphically) by joining the head of one to the tail of the other and then connecting head to tail for the combination, as shown in Fig. 1.1. The sum of two (or more) vectors is often called the resultant.
We can add vectors in any order we want: $\mathbf{A}+\mathbf{B}=\mathbf{B}+\mathbf{A}$. We say that vector addition is "commutative".
We express vectors in component form using the unit vectors $\mathbf{i}, \mathbf{j}$ and $\mathbf{k}$, which each have magnitude 1 and point along the $x, y$ and $z$ axes of the coordinate system, respectively.
Figure 1.1: Vector addition. (a) shows the vectors $\mathbf{A}$ and $\mathbf{B}$ to be summed. (b) shows how to perform the sum graphically.
Figure 1.2: Addition of vectors by components (in two dimensions).
Any vector can be expressed as a sum of multiples of these basic vectors; for example, for the vector $\mathbf{A}$ we would write:
$$
\mathbf{A}=A_{x} \mathbf{i}+A_{y} \mathbf{j}+A_{z} \mathbf{k}
$$
Here we would say that $A_{x}$ is the $x$ component of the vector $\mathbf{A}$; likewise for $y$ and $z$.
In Fig. 1.2 we illustrate how we get the components for a vector which is the sum of two other vectors. If
$$
\mathbf{A}=A_{x} \mathbf{i}+A_{y} \mathbf{j}+A_{z} \mathbf{k} \quad \text { and } \quad \mathbf{B}=B_{x} \mathbf{i}+B_{y} \mathbf{j}+B_{z} \mathbf{k}
$$
then
$$
\mathbf{A}+\mathbf{B}=\left(A_{x}+B_{x}\right) \mathbf{i}+\left(A_{y}+B_{y}\right) \mathbf{j}+\left(A_{z}+B_{z}\right) \mathbf{k}
$$
Once we have found the (Cartesian) component of two vectors, addition is simple; just add the corresponding components of the two vectors to get the components of the resultant vector.
When we multiply a vector by a scalar, the scalar multiplies each component; if $\mathbf{A}$ is a vector and $c$ is a scalar, then
$$
c \mathbf{A}=c A_{x} \mathbf{i}+c A_{y} \mathbf{j}+c A_{z} \mathbf{k}
$$
In terms of its components, the magnitude ("length") of a vector $\mathbf{A}$ (which we write as $A$) is given by:
$$
A=\sqrt{A_{x}^{2}+A_{y}^{2}+A_{z}^{2}}
$$
Many of our physics problems will be in two dimensions ($x$ and $y$), and then we can also represent a vector in polar form. If $\mathbf{A}$ is a two-dimensional vector and $\theta$ is the angle that $\mathbf{A}$ makes with the $+x$ axis, measured counter-clockwise, then we can express this vector in terms of components $A_{x}$ and $A_{y}$ or in terms of its magnitude $A$ and the angle $\theta$. These descriptions are related by:
$$
\begin{array}{ll}
A_{x}=A \cos \theta & A_{y}=A \sin \theta \\
A=\sqrt{A_{x}^{2}+A_{y}^{2}} & \tan \theta=\frac{A_{y}}{A_{x}}
\end{array}
$$
When we use Eq. 1.6 to find $\theta$ from $A_{x}$ and $A_{y}$ we need to be careful because the inverse tangent operation (as done on a calculator) might give an angle in the wrong quadrant; one must think about the signs of $A_{x}$ and $A_{y}$.
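Most programming languages provide a two-argument arctangent that handles this quadrant bookkeeping automatically; a short Python sketch (illustrative only) shows the idea:

```python
import math

def polar_angle_deg(ax, ay):
    """Counter-clockwise angle from the +x axis, in degrees.

    math.atan2 inspects the signs of both components, so it returns
    the correct quadrant, unlike atan(ay/ax) alone.
    """
    return math.degrees(math.atan2(ay, ax)) % 360.0

print(polar_angle_deg(-9.0, 10.0))   # ~132.0, second quadrant (not -48.0)
```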
#### Multiplying Vectors
There are two ways to "multiply" two vectors together.
The scalar product (or dot product) of the vectors $\mathbf{a}$ and $\mathbf{b}$ is given by
$$
\mathbf{a} \cdot \mathbf{b}=a b \cos \phi
$$
where $a$ is the magnitude of $\mathbf{a}, b$ is the magnitude of $\mathbf{b}$ and $\phi$ is the angle between $\mathbf{a}$ and $\mathbf{b}$.
The scalar product is commutative: $\mathbf{a} \cdot \mathbf{b}=\mathbf{b} \cdot \mathbf{a}$. One can show that $\mathbf{a} \cdot \mathbf{b}$ is related to the components of $\mathbf{a}$ and $\mathbf{b}$ by:
$$
\mathbf{a} \cdot \mathbf{b}=a_{x} b_{x}+a_{y} b_{y}+a_{z} b_{z}
$$
If two vectors are perpendicular then their scalar product is zero.
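The component formula for the scalar product translates directly into code; a tiny Python sketch (illustrative only):

```python
def dot(a, b):
    """Scalar product from Cartesian components."""
    return sum(ai * bi for ai, bi in zip(a, b))

# perpendicular vectors give a zero scalar product:
print(dot((1.0, 0.0, 0.0), (0.0, 2.0, 0.0)))   # 0.0
```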
The vector product (or cross product) of vectors $\mathbf{a}$ and $\mathbf{b}$ is a vector $\mathbf{c}$ whose magnitude is given by
$$
c=a b \sin \phi
$$
where $\phi$ is the smallest angle between $\mathbf{a}$ and $\mathbf{b}$. The direction of $\mathbf{c}$ is perpendicular to the plane containing $\mathbf{a}$ and $\mathbf{b}$ with its orientation given by the right-hand rule. One way of using the right-hand rule is to let the fingers of the right hand bend (in their natural direction!) from $\mathbf{a}$ to $\mathbf{b}$; the direction of the thumb is the direction of $\mathbf{c}=\mathbf{a} \times \mathbf{b}$. This is illustrated in Fig. 1.3.
The vector product is anti-commutative: $\mathbf{a} \times \mathbf{b}=-\mathbf{b} \times \mathbf{a}$.
Relations among the unit vectors for vector products are:
$$
\mathbf{i} \times \mathbf{j}=\mathbf{k} \quad \mathbf{j} \times \mathbf{k}=\mathbf{i} \quad \mathbf{k} \times \mathbf{i}=\mathbf{j}
$$
Figure 1.3: (a) Finding the direction of $\mathbf{A} \times \mathbf{B}$. Fingers of the right hand sweep from $\mathbf{A}$ to $\mathbf{B}$ in the shortest and least painful way. The extended thumb points in the direction of $\mathbf{C}$. (b) Vectors $\mathbf{A}, \mathbf{B}$ and $\mathbf{C}$. The magnitude of $\mathbf{C}$ is $C=A B \sin \phi$.
The vector product of $\mathbf{a}$ and $\mathbf{b}$ can be computed from the components of these vectors by:
$$
\mathbf{a} \times \mathbf{b}=\left(a_{y} b_{z}-a_{z} b_{y}\right) \mathbf{i}+\left(a_{z} b_{x}-a_{x} b_{z}\right) \mathbf{j}+\left(a_{x} b_{y}-a_{y} b_{x}\right) \mathbf{k}
$$
which can be abbreviated by the notation of the determinant:
$$
\mathbf{a} \times \mathbf{b}=\left|\begin{array}{ccc}
\mathbf{i} & \mathbf{j} & \mathbf{k} \\
a_{x} & a_{y} & a_{z} \\
b_{x} & b_{y} & b_{z}
\end{array}\right|
$$
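The component formula for the vector product is equally mechanical; a short Python sketch (illustrative only):

```python
def cross(a, b):
    """Vector product from Cartesian components (right-hand rule)."""
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by,
            az * bx - ax * bz,
            ax * by - ay * bx)

print(cross((1, 0, 0), (0, 1, 0)))   # (0, 0, 1), i.e. i x j = k
```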
### Worked Examples
#### Changing Units
1. The Empire State Building is $1472 \mathrm{ft}$ high. Express this height in both meters and centimeters. [FGT 1-4]
To do the first unit conversion (feet to meters), we can use the relation (see the Conversion Factors in the back of this book):
$$
1 \mathrm{~m}=3.281 \mathrm{ft}
$$
We set up the conversion factor so that "ft" cancels and leaves meters:
$$
1472 \mathrm{ft}=(1472 \mathrm{ft})\left(\frac{1 \mathrm{~m}}{3.281 \mathrm{ft}}\right)=448.6 \mathrm{~m} .
$$
So the height can be expressed as $448.6 \mathrm{~m}$. To convert this to centimeters, use:
$$
1 \mathrm{~m}=100 \mathrm{~cm}
$$
and get:
$$
448.6 \mathrm{~m}=(448.6 \mathrm{~m})\left(\frac{100 \mathrm{~cm}}{1 \mathrm{~m}}\right)=4.486 \times 10^{4} \mathrm{~cm}
$$
The Empire State Building is $4.486 \times 10^{4} \mathrm{~cm}$ high!
2. A rectangular building lot is $100.0 \mathrm{ft}$ by $150.0 \mathrm{ft}$. Determine the area of this lot in $\mathrm{m}^{2}$. [Ser4 1-19]
The area of a rectangle is just the product of its length and width so the area of the lot is
$$
A=(100.0 \mathrm{ft})(150.0 \mathrm{ft})=1.500 \times 10^{4} \mathrm{ft}^{2}
$$
To convert this to units of $\mathrm{m}^{2}$ we can use the relation
$$
1 \mathrm{~m}=3.281 \mathrm{ft}
$$
but the conversion factor needs to be applied twice so as to cancel " $\mathrm{ft}^{2}$ " and get " $\mathrm{m}^{2}$ ". We write:
$$
1.500 \times 10^{4} \mathrm{ft}^{2}=\left(1.500 \times 10^{4} \mathrm{ft}^{2}\right) \cdot\left(\frac{1 \mathrm{~m}}{3.281 \mathrm{ft}}\right)^{2}=1.393 \times 10^{3} \mathrm{~m}^{2}
$$
The area of the lot is $1.393 \times 10^{3} \mathrm{~m}^{2}$.
3. The Earth is approximately a sphere of radius $6.37 \times 10^{6} \mathrm{~m}$. (a) What is its circumference in kilometers? (b) What is its surface area in square kilometers? (c) What is its volume in cubic kilometers? [HRW5 1-6]
(a) The circumference of the sphere of radius $R$, i.e. the distance around any "great circle" is $C=2 \pi R$. Using the given value of $R$ we find:
$$
C=2 \pi R=2 \pi\left(6.37 \times 10^{6} \mathrm{~m}\right)=4.00 \times 10^{7} \mathrm{~m} .
$$
To convert this to kilometers, use the relation $1 \mathrm{~km}=10^{3} \mathrm{~m}$ in a conversion factor:
$$
C=4.00 \times 10^{7} \mathrm{~m}=\left(4.00 \times 10^{7} \mathrm{~m}\right) \cdot\left(\frac{1 \mathrm{~km}}{10^{3} \mathrm{~m}}\right)=4.00 \times 10^{4} \mathrm{~km}
$$
The circumference of the Earth is $4.00 \times 10^{4} \mathrm{~km}$.
(b) The surface area of a sphere of radius $R$ is $A=4 \pi R^{2}$. So we get
$$
A=4 \pi R^{2}=4 \pi\left(6.37 \times 10^{6} \mathrm{~m}\right)^{2}=5.10 \times 10^{14} \mathrm{~m}^{2}
$$
Again, use $1 \mathrm{~km}=10^{3} \mathrm{~m}$ but to cancel out the units " $\mathrm{m}^{2}$ " and replace them with " $\mathrm{km}^{2}$ " it must be applied twice:
$$
A=5.10 \times 10^{14} \mathrm{~m}^{2}=\left(5.10 \times 10^{14} \mathrm{~m}^{2}\right) \cdot\left(\frac{1 \mathrm{~km}}{10^{3} \mathrm{~m}}\right)^{2}=5.10 \times 10^{8} \mathrm{~km}^{2}
$$
The surface area of the Earth is $5.10 \times 10^{8} \mathrm{~km}^{2}$.
(c) The volume of a sphere of radius $R$ is $V=\frac{4}{3} \pi R^{3}$. So we get
$$
V=\frac{4}{3} \pi R^{3}=\frac{4}{3} \pi\left(6.37 \times 10^{6} \mathrm{~m}\right)^{3}=1.08 \times 10^{21} \mathrm{~m}^{3}
$$
Again, use $1 \mathrm{~km}=10^{3} \mathrm{~m}$ but to cancel out the units " $\mathrm{m}^{3}$ " and replace them with " $\mathrm{km}^{3}$ " it must be applied three times:
$$
V=1.08 \times 10^{21} \mathrm{~m}^{3}=\left(1.08 \times 10^{21} \mathrm{~m}^{3}\right) \cdot\left(\frac{1 \mathrm{~km}}{10^{3} \mathrm{~m}}\right)^{3}=1.08 \times 10^{12} \mathrm{~km}^{3}
$$
The volume of the Earth is $1.08 \times 10^{12} \mathrm{~km}^{3}$.
4. Calculate the number of kilometers in $20.0 \mathrm{mi}$ using only the following conversion factors: $1 \mathrm{mi}=5280 \mathrm{ft}, 1 \mathrm{ft}=12 \mathrm{in}, 1 \mathrm{in}=2.54 \mathrm{~cm}, 1 \mathrm{~m}=100 \mathrm{~cm}, 1 \mathrm{~km}=1000 \mathrm{~m}$. [HRW5 1-7]
Set up the "factors of 1 " as follows:
$$
\begin{aligned}
20.0 \mathrm{mi} & =(20.0 \mathrm{mi}) \cdot\left(\frac{5280 \mathrm{ft}}{1 \mathrm{mi}}\right) \cdot\left(\frac{12 \mathrm{in}}{1 \mathrm{ft}}\right) \cdot\left(\frac{2.54 \mathrm{~cm}}{1 \mathrm{in}}\right) \cdot\left(\frac{1 \mathrm{~m}}{100 \mathrm{~cm}}\right) \cdot\left(\frac{1 \mathrm{~km}}{1000 \mathrm{~m}}\right) \\
& =32.2 \mathrm{~km}
\end{aligned}
$$
Setting up the "factors of 1" in this way, all of the unit symbols cancel except for km (kilometers) which we keep as the units of the answer.
5. One gallon of paint (volume $=3.78 \times 10^{-3} \mathrm{~m}^{3}$ ) covers an area of $25.0 \mathrm{~m}^{2}$. What is the thickness of the paint on the wall? [Ser4 1-31]
We will assume that the volume which the paint occupies while it's covering the wall is the same as it has when it is in the can. (There are reasons why this may not be true, but let's just do this and proceed.)
The paint on the wall covers an area $A$ and has a thickness $\tau$; the volume occupied is the area times the thickness:
$$
V=A \tau .
$$
We have $V$ and $A$; we just need to solve for $\tau$ :
$$
\tau=\frac{V}{A}=\frac{3.78 \times 10^{-3} \mathrm{~m}^{3}}{25.0 \mathrm{~m}^{2}}=1.51 \times 10^{-4} \mathrm{~m} .
$$
The thickness is $1.51 \times 10^{-4} \mathrm{~m}$. This quantity can also be expressed as $0.151 \mathrm{~mm}$.
6. A certain brand of house paint claims a coverage of $460 \frac{\mathrm{ft}^{2}}{\mathrm{gal}}$. (a) Express this quantity in square meters per liter. (b) Express this quantity in SI base units. (c) What is the inverse of the original quantity, and what is its physical significance? [HRW5 1-15]
(a) Use the following relations in forming the conversion factors: $1 \mathrm{~m}=3.28 \mathrm{ft}$ and 1000 liter $=$ 264 gal. To get proper cancellation of the units we set it up as:
$$
460 \frac{\mathrm{ft}^{2}}{\text { gal }}=\left(460 \frac{\mathrm{ft}^{2}}{\text { gal }}\right) \cdot\left(\frac{1 \mathrm{~m}}{3.28 \mathrm{ft}}\right)^{2} \cdot\left(\frac{264 \mathrm{gal}}{1000 \mathrm{~L}}\right)=11.3 \frac{\mathrm{m}^{2}}{\mathrm{~L}}
$$
(b) Even though the units of the answer to part (a) are based on the metric system, they are not made from the base units of the SI system, which are m, s, and kg. To make the complete conversion to SI units we need to use the relation $1 \mathrm{~m}^{3}=1000 \mathrm{~L}$. Then we get:
$$
11.3 \frac{\mathrm{m}^{2}}{\mathrm{~L}}=\left(11.3 \frac{\mathrm{m}^{2}}{\mathrm{~L}}\right) \cdot\left(\frac{1000 \mathrm{~L}}{1 \mathrm{~m}^{3}}\right)=1.13 \times 10^{4} \mathrm{~m}^{-1}
$$
So the coverage can also be expressed (not so meaningfully, perhaps) as $1.13 \times 10^{4} \mathrm{~m}^{-1}$.
(c) The inverse (reciprocal) of the quantity as it was originally expressed is
$$
\left(460 \frac{\mathrm{ft}^{2}}{\mathrm{gal}}\right)^{-1}=2.17 \times 10^{-3} \frac{\mathrm{gal}}{\mathrm{ft}^{2}} .
$$
Of course when we take the reciprocal the units in the numerator and denominator also switch places!
Now, the first expression of the quantity tells us that $460 \mathrm{ft}^{2}$ are associated with every gallon, that is, each gallon will provide $460 \mathrm{ft}^{2}$ of coverage. The new expression tells us that $2.17 \times 10^{-3}$ gal are associated with every $\mathrm{ft}^{2}$, that is, to cover one square foot of surface with paint, one needs $2.17 \times 10^{-3}$ gallons of it.
7. Express the speed of light, $3.0 \times 10^{8} \frac{\mathrm{m}}{\mathrm{s}}$ in (a) feet per nanosecond and (b) millimeters per picosecond. [HRW5 1-19]
(a) For this conversion we can use the following facts:
$$
1 \mathrm{~m}=3.28 \mathrm{ft} \quad \text { and } \quad 1 \mathrm{~ns}=10^{-9} \mathrm{~s}
$$
to get:
$$
\begin{aligned}
3.0 \times 10^{8} \frac{\mathrm{m}}{\mathrm{s}} & =\left(3.0 \times 10^{8} \frac{\mathrm{m}}{\mathrm{s}}\right) \cdot\left(\frac{3.28 \mathrm{ft}}{1 \mathrm{~m}}\right) \cdot\left(\frac{10^{-9} \mathrm{~s}}{1 \mathrm{~ns}}\right) \\
& =0.98 \frac{\mathrm{ft}}{\mathrm{ns}}
\end{aligned}
$$
In these new units, the speed of light is $0.98 \frac{\mathrm{ft}}{\mathrm{ns}}$.
(b) For this conversion we can use:
$$
1 \mathrm{~mm}=10^{-3} \mathrm{~m} \quad \text { and } \quad 1 \mathrm{ps}=10^{-12} \mathrm{~s}
$$
and set up the factors as follows:
$$
\begin{aligned}
3.0 \times 10^{8} \frac{\mathrm{m}}{\mathrm{s}} & =\left(3.0 \times 10^{8} \frac{\mathrm{m}}{\mathrm{s}}\right) \cdot\left(\frac{1 \mathrm{~mm}}{10^{-3} \mathrm{~m}}\right) \cdot\left(\frac{10^{-12} \mathrm{~s}}{1 \mathrm{ps}}\right) \\
& =3.0 \times 10^{-1} \frac{\mathrm{mm}}{\mathrm{ps}}
\end{aligned}
$$
In these new units, the speed of light is $3.0 \times 10^{-1} \frac{\mathrm{mm}}{\mathrm{ps}}$.
8. One molecule of water $\left(\mathrm{H}_{2} \mathrm{O}\right)$ contains two atoms of hydrogen and one atom of oxygen. A hydrogen atom has a mass of $1.0 \mathrm{u}$ and an atom of oxygen has a mass of $16 \mathrm{u}$, approximately. (a) What is the mass in kilograms of one molecule of water? (b) How many molecules of water are in the world's oceans, which have an estimated total mass of $1.4 \times 10^{21} \mathrm{~kg}$ ? [HRW5 1-33]
(a) We are given the masses of the atoms of $\mathrm{H}$ and $\mathrm{O}$ in atomic mass units; using these values, one molecule of $\mathrm{H}_{2} \mathrm{O}$ has a mass of
$$
m_{\mathrm{H}_{2} \mathrm{O}}=2(1.0 \mathrm{u})+16 \mathrm{u}=18 \mathrm{u}
$$
Use the relation between $\mathrm{u}$ (atomic mass units) and kilograms to convert this to kg:
$$
m_{\mathrm{H}_{2} \mathrm{O}}=(18 \mathrm{u})\left(\frac{1.6605 \times 10^{-27} \mathrm{~kg}}{1 \mathrm{u}}\right)=3.0 \times 10^{-26} \mathrm{~kg}
$$
One water molecule has a mass of $3.0 \times 10^{-26} \mathrm{~kg}$.
(b) To get the number of molecules in all the oceans, divide the mass of all the oceans' water by the mass of one molecule:
$$
N=\frac{1.4 \times 10^{21} \mathrm{~kg}}{3.0 \times 10^{-26} \mathrm{~kg}}=4.7 \times 10^{46} .
$$
... a large number of molecules!
#### Density
9. Calculate the density of a solid cube that measures $5.00 \mathrm{~cm}$ on each side and has a mass of $350 \mathrm{~g}$. [Ser4 1-1]
The volume of this cube is
$$
V=(5.00 \mathrm{~cm}) \cdot(5.00 \mathrm{~cm}) \cdot(5.00 \mathrm{~cm})=125 \mathrm{~cm}^{3}
$$
So from Eq. 1.1 the density of the cube is
$$
\rho=\frac{m}{V}=\frac{350 \mathrm{~g}}{125 \mathrm{~cm}^{3}}=2.80 \frac{\mathrm{g}}{\mathrm{cm}^{3}}
$$
Figure 1.4: Cross-section of copper shell in Example 11.
10. The mass of the planet Saturn is $5.64 \times 10^{26} \mathrm{~kg}$ and its radius is $6.00 \times 10^{7} \mathrm{~m}$. Calculate its density. [Ser4 1-2]
The planet Saturn is roughly a sphere. (But only roughly! Actually its shape is rather distorted.) Using the formula for the volume of a sphere, we find the volume of Saturn:
$$
V=\frac{4}{3} \pi R^{3}=\frac{4}{3} \pi\left(6.00 \times 10^{7} \mathrm{~m}\right)^{3}=9.05 \times 10^{23} \mathrm{~m}^{3}
$$
Now using the definition of density we find:
$$
\rho=\frac{m}{V}=\frac{5.64 \times 10^{26} \mathrm{~kg}}{9.05 \times 10^{23} \mathrm{~m}^{3}}=6.23 \times 10^{2} \frac{\mathrm{kg}}{\mathrm{m}^{3}}
$$
While this answer is correct, it is useful to express the result in units of $\frac{\mathrm{g}}{\mathrm{cm}^{3}}$. Using our conversion factors in the usual way, we get:
$$
6.23 \times 10^{2} \frac{\mathrm{kg}}{\mathrm{m}^{3}}=\left(6.23 \times 10^{2} \frac{\mathrm{kg}}{\mathrm{m}^{3}}\right) \cdot\left(\frac{10^{3} \mathrm{~g}}{1 \mathrm{~kg}}\right) \cdot\left(\frac{1 \mathrm{~m}}{100 \mathrm{~cm}}\right)^{3}=0.623 \frac{\mathrm{g}}{\mathrm{cm}^{3}}
$$
The average density of Saturn is $0.623 \frac{\mathrm{g}}{\mathrm{cm}^{3}}$. Interestingly, this is less than the density of water.
11. How many grams of copper are required to make a hollow spherical shell with an inner radius of $5.70 \mathrm{~cm}$ and an outer radius of $5.75 \mathrm{~cm}$ ? The density of copper is $8.93 \mathrm{~g} / \mathrm{cm}^{3}$. [Ser4 1-3]
A cross-section of the copper sphere is shown in Fig. 1.4. The outer and inner radii are noted as $r_{2}$ and $r_{1}$, respectively. We must find the volume of space occupied by the copper metal; this volume is the difference in the volumes of the two spherical surfaces:
$$
V_{\text {copper }}=V_{2}-V_{1}=\frac{4}{3} \pi r_{2}^{3}-\frac{4}{3} \pi r_{1}^{3}=\frac{4}{3} \pi\left(r_{2}^{3}-r_{1}^{3}\right)
$$
With the given values of the radii, we find:
$$
V_{\text {copper }}=\frac{4}{3} \pi\left((5.75 \mathrm{~cm})^{3}-(5.70 \mathrm{~cm})^{3}\right)=20.6 \mathrm{~cm}^{3}
$$
Now use the definition of density to find the mass of the copper contained in the shell:
$$
\rho=\frac{m_{\text {copper }}}{V_{\text {copper }}} \quad \Longrightarrow \quad m_{\text {copper }}=\rho V_{\text {copper }}=\left(8.93 \frac{\mathrm{g}}{\mathrm{cm}^{3}}\right)\left(20.6 \mathrm{~cm}^{3}\right)=184 \mathrm{~g}
$$
184 grams of copper are required to make the spherical shell of the given dimensions.
12. One cubic meter $\left(1.00 \mathrm{~m}^{3}\right)$ of aluminum has a mass of $2.70 \times 10^{3} \mathrm{~kg}$, and $1.00 \mathrm{~m}^{3}$ of iron has a mass of $7.86 \times 10^{3} \mathrm{~kg}$. Find the radius of a solid aluminum sphere that will balance a solid iron sphere of radius $2.00 \mathrm{~cm}$ on an equal-arm balance. [Ser4 1-39]
In the statement of the problem, we are given the densities of aluminum and iron:
$$
\rho_{\mathrm{Al}}=2.70 \times 10^{3} \frac{\mathrm{kg}}{\mathrm{m}^{3}} \quad \text { and } \quad \rho_{\mathrm{Fe}}=7.86 \times 10^{3} \frac{\mathrm{kg}}{\mathrm{m}^{3}} .
$$
A solid iron sphere of radius $R=2.00 \mathrm{~cm}=2.00 \times 10^{-2} \mathrm{~m}$ has a volume
$$
V_{\mathrm{Fe}}=\frac{4}{3} \pi R^{3}=\frac{4}{3} \pi\left(2.00 \times 10^{-2} \mathrm{~m}\right)^{3}=3.35 \times 10^{-5} \mathrm{~m}^{3}
$$
so that from $M_{\mathrm{Fe}}=\rho_{\mathrm{Fe}} V_{\mathrm{Fe}}$ we find the mass of the iron sphere:
$$
M_{\mathrm{Fe}}=\rho_{\mathrm{Fe}} V_{\mathrm{Fe}}=\left(7.86 \times 10^{3} \frac{\mathrm{kg}}{\mathrm{m}^{3}}\right)\left(3.35 \times 10^{-5} \mathrm{~m}^{3}\right)=2.63 \times 10^{-1} \mathrm{~kg}
$$
If this sphere balances one made from aluminum in an "equal-arm balance", then they have the same mass. So $M_{\mathrm{Al}}=2.63 \times 10^{-1} \mathrm{~kg}$ is the mass of the aluminum sphere. From $M_{\mathrm{Al}}=\rho_{\mathrm{Al}} V_{\mathrm{Al}}$ we can find its volume:
$$
V_{\mathrm{Al}}=\frac{M_{\mathrm{Al}}}{\rho_{\mathrm{Al}}}=\frac{2.63 \times 10^{-1} \mathrm{~kg}}{2.70 \times 10^{3} \frac{\mathrm{kg}}{\mathrm{m}^{3}}}=9.76 \times 10^{-5} \mathrm{~m}^{3}
$$
Having the volume of the sphere, we can find its radius:
$$
V_{\mathrm{Al}}=\frac{4}{3} \pi R^{3} \quad \Longrightarrow \quad R=\left(\frac{3 V_{\mathrm{Al}}}{4 \pi}\right)^{\frac{1}{3}}
$$
This gives:
$$
R=\left(\frac{3\left(9.76 \times 10^{-5} \mathrm{~m}^{3}\right)}{4 \pi}\right)^{\frac{1}{3}}=2.86 \times 10^{-2} \mathrm{~m}=2.86 \mathrm{~cm}
$$
The aluminum sphere must have a radius of $2.86 \mathrm{~cm}$ to balance the iron sphere.
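The whole calculation condenses to a few lines of Python (an illustrative check of the arithmetic, not part of the text):

```python
import math

rho_al, rho_fe = 2.70e3, 7.86e3            # densities, kg/m^3
r_fe = 2.00e-2                             # iron sphere radius, m

m = rho_fe * (4 / 3) * math.pi * r_fe**3   # mass of the iron sphere, kg
r_al = (3 * m / (4 * math.pi * rho_al)) ** (1 / 3)
print(f"{r_al * 100:.2f} cm")              # 2.86 cm
```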
#### Dimensional Analysis
13. The period $T$ of a simple pendulum is measured in time units and is
$$
T=2 \pi \sqrt{\frac{\ell}{g}} .
$$
where $\ell$ is the length of the pendulum and $g$ is the free-fall acceleration in units of length divided by the square of time. Show that this equation is dimensionally correct. [Ser4 1-14]
The period $(T)$ of a pendulum is the amount of time it takes to make one full swing back and forth. It is measured in units of time so its dimensions are represented by $T$.
On the right side of the equation we have the length $\ell$, whose dimensions are represented by $L$. We are told that $g$ is a length divided by the square of a time so its dimensions must be $L / T^{2}$. There is a factor of $2 \pi$ on the right side, but this is a pure number and has no units. So the dimensions of the right side are:
$$
\sqrt{\frac{L}{\left(\frac{L}{T^{2}}\right)}}=\sqrt{T^{2}}=T
$$
so that the right hand side must also have units of time. Both sides of the equation agree in their units, which must be true for it to be a valid equation!
14. The volume of an object as a function of time is calculated by $V=A t^{3}+B / t$, where $t$ is time measured in seconds and $V$ is in cubic meters. Determine the dimension of the constants $A$ and $B$. [Ser4 1-15]
Both sides of the equation for volume must have the same dimensions, and those must be the dimensions of volume, which are $L^{3}$ (SI units of $\mathrm{m}^{3}$ ). Since we can only add terms with the same dimensions, each of the terms on the right side of the equation $\left(A t^{3}\right.$ and $\left.B / t\right)$ must have the same dimensions, namely $L^{3}$.
Suppose we denote the units of $A$ by $[A]$. Then our comment about the dimensions of the first term gives us:
$$
[A] T^{3}=L^{3} \quad \Longrightarrow \quad[A]=\frac{L^{3}}{T^{3}}
$$
so $A$ has dimensions $L^{3} / T^{3}$. In the SI system, it would have units of $\mathrm{m}^{3} / \mathrm{s}^{3}$.
Suppose we denote the units of $B$ by $[B]$. Then our comment about the dimensions of the second term gives us:
$$
\frac{[B]}{T}=L^{3} \quad \Longrightarrow \quad[B]=L^{3} T
$$
so $B$ has dimensions $L^{3} T$. In the SI system, it would have units of $\mathrm{m}^{3} \mathrm{~s}$.
15. Newton's law of universal gravitation is
$$
F=G \frac{M m}{r^{2}}
$$
Here $F$ is the force of gravity, $M$ and $m$ are masses, and $r$ is a length. Force has the SI units of $\mathrm{kg} \cdot \mathrm{m} / \mathrm{s}^{2}$. What are the SI units of the constant $G$ ? [Ser4 1-17]
If we denote the dimensions of $F$ by $[F]$ (and the same for the other quantities) then the dimensions of the quantities in Newton's Law are:
$$
[M]=M \text{ (mass) } \quad[m]=M \quad[r]=L \quad[F]=\frac{M L}{T^{2}}
$$
What we don't know (yet) is $[G]$, the dimensions of $G$. Putting the known dimensions into Newton's Law, we must have:
$$
\frac{M L}{T^{2}}=[G] \frac{M \cdot M}{L^{2}}
$$
since the dimensions must be the same on both sides. Doing some algebra with the dimensions, this gives:
$$
[G]=\left(\frac{M L}{T^{2}}\right) \frac{L^{2}}{M^{2}}=\frac{L^{3}}{M T^{2}}
$$
so the dimensions of $G$ are $L^{3} /\left(M T^{2}\right)$. In the SI system, $G$ has units of
$$
\frac{\mathrm{m}^{3}}{\mathrm{~kg} \cdot \mathrm{s}^{2}}
$$
16. In quantum mechanics, the fundamental constant called Planck's constant, $h$, has dimensions of $\left[M L^{2} T^{-1}\right]$. Construct a quantity with the dimensions of length from $h$, a mass $m$, and $c$, the speed of light. [FGT 1-54]
The problem suggests that there is some product of powers of $h, m$ and $c$ which has dimensions of length. If these powers are $r, s$ and $t$, respectively, then we are looking for values of $r, s$ and $t$ such that
$$
h^{r} m^{s} c^{t}
$$
has dimensions of length.
What are the dimensions of this product, as written? We were given the dimensions of $h$, namely $\left[M L^{2} T^{-1}\right]$; the dimensions of $m$ are $M$, and the dimensions of $c$ are $\frac{L}{T}=L T^{-1}$ (it is a speed). So the dimensions of $h^{r} m^{s} c^{t}$ are:
$$
\left[M L^{2} T^{-1}\right]^{r}[M]^{s}\left[L T^{-1}\right]^{t}=M^{r+s} L^{2 r+t} T^{-r-t}
$$
where we have used the laws of combining exponents which we all remember from algebra. Now, since this is supposed to have dimensions of length, the power of $L$ must be 1 but the other powers are zero. This gives the equations:
$$
\begin{aligned}
r+s & =0 \\
2 r+t & =1 \\
-r-t & =0
\end{aligned}
$$
which is a set of three equations for three unknowns. Easy to solve!
The last of them gives $r=-t$. Substituting this into the second equation gives
$$
2 r+t=2(-t)+t=-t=1 \quad \Longrightarrow \quad t=-1
$$
Then $r=+1$ and the first equation gives us $s=-1$. With these values, we can confidently say that
$$
h^{r} m^{s} c^{t}=h^{1} m^{-1} c^{-1}=\frac{h}{m c}
$$
has units of length.
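The three exponent equations form a small linear system that can also be solved numerically; a Python sketch (illustrative only):

```python
import numpy as np

# unknowns (r, s, t); rows are the M, L and T balance equations
A = np.array([[ 1, 1,  0],    # M:  r + s      = 0
              [ 2, 0,  1],    # L:  2r     + t = 1
              [-1, 0, -1]])   # T: -r      - t = 0
b = np.array([0, 1, 0])
print(np.linalg.solve(A, b))  # [ 1. -1. -1.]  ->  h / (m c)
```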
#### Vectors; Vector Addition
17. (a) What is the sum in unit-vector notation of the two vectors $\mathbf{a}=4.0 \mathbf{i}+3.0 \mathbf{j}$ and $\mathbf{b}=-13.0 \mathbf{i}+7.0 \mathbf{j}$ ? (b) What are the magnitude and direction of $\mathbf{a}+\mathbf{b}$ ? [HRW5 3-20]
(a) Summing the corresponding components of vectors $\mathbf{a}$ and $\mathbf{b}$ we find:
$$
\begin{aligned}
\mathbf{a}+\mathbf{b} & =(4.0-13.0) \mathbf{i}+(3.0+7.0) \mathbf{j} \\
& =-9.0 \mathbf{i}+10.0 \mathbf{j}
\end{aligned}
$$
This is the sum of the two vectors is unit-vector form.
(b) Using our results from (a), the magnitude of $\mathbf{a}+\mathbf{b}$ is
$$
|\mathbf{a}+\mathbf{b}|=\sqrt{(-9.0)^{2}+(10.0)^{2}}=13.4
$$
and if $\mathbf{c}=\mathbf{a}+\mathbf{b}$ points in a direction $\theta$ as measured from the positive $x$ axis, then the tangent of $\theta$ is found from
$$
\tan \theta=\left(\frac{c_{y}}{c_{x}}\right)=-1.11
$$
If we naively take the arctangent using a calculator, we are told:
$$
\theta=\tan ^{-1}(-1.11)=-48.0^{\circ}
$$
Figure 1.5: Vector c, found in Example 17. With $c_{x}=-9.0$ and $c_{y}=+10.0$, the direction of $\mathbf{c}$ is in the second quadrant.
Figure 1.6: Vectors $\mathbf{a}$ and $\mathbf{b}$ as given in Example 18.
which is not correct because (as shown in Fig. 1.5), with $c_{x}$ negative and $c_{y}$ positive, the correct angle must be in the second quadrant. The calculator was fooled because angles which differ by multiples of $180^{\circ}$ have the same tangent. The direction we really want is
$$
\theta=-48.0^{\circ}+180.0^{\circ}=132.0^{\circ}
$$
18. Vector a has magnitude $5.0 \mathrm{~m}$ and is directed east. Vector $\mathrm{b}$ has magnitude $4.0 \mathrm{~m}$ and is directed $35^{\circ}$ west of north. What are (a) the magnitude and (b) the direction of $a+b$ ? What are (c) the magnitude and (d) the direction of $b-a$ ? Draw a vector diagram for each combination. [HRW6 3-15]
(a) The vectors are shown in Fig. 1.6. (On the axes are shown the common directions N, S, $\mathrm{E}, \mathrm{W}$ and also the $x$ and $y$ axes; "North" is the positive $y$ direction, "East" is the positive $x$ direction, etc.) Expressing the vectors in $\mathbf{i}, \mathbf{j}$ notation, we have:
$$
\mathbf{a}=(5.00 \mathrm{~m}) \mathbf{i}
$$
and
$$
\begin{aligned}
\mathbf{b} & =-(4.00 \mathrm{~m}) \sin 35^{\circ} \mathbf{i}+(4.00 \mathrm{~m}) \cos 35^{\circ} \mathbf{j} \\
& =(-2.29 \mathrm{~m}) \mathbf{i}+(3.28 \mathrm{~m}) \mathbf{j}
\end{aligned}
$$
So if vector $\mathbf{c}$ is the sum of vectors $\mathbf{a}$ and $\mathbf{b}$ then:
$$
\begin{gathered}
c_{x}=a_{x}+b_{x}=(5.00 \mathrm{~m})+(-2.29 \mathrm{~m})=2.71 \mathrm{~m} \\
c_{y}=a_{y}+b_{y}=(0.00 \mathrm{~m})+(3.28 \mathrm{~m})=3.28 \mathrm{~m}
\end{gathered}
$$
(a)
(b)
Figure 1.7: (a) Vector diagram showing the addition $\mathbf{a}+\mathbf{b}$. (b) Vector diagram showing $\mathbf{b}-\mathbf{a}$.
The magnitude of $\mathbf{c}$ is
$$
c=\sqrt{c_{x}^{2}+c_{y}^{2}}=\sqrt{(2.71 \mathrm{~m})^{2}+(3.28 \mathrm{~m})^{2}}=4.25 \mathrm{~m}
$$
(b) If the direction of $\mathbf{c}$, as measured counterclockwise from the $+x$ axis is $\theta$ then
$$
\tan \theta=\frac{c_{y}}{c_{x}}=\frac{3.28 \mathrm{~m}}{2.71 \mathrm{~m}}=1.211
$$
then the $\tan ^{-1}$ operation on a calculator gives
$$
\theta=\tan ^{-1}(1.211)=50.4^{\circ}
$$
and since vector c must lie in the first quadrant this angle is correct. We note that this angle is
$$
90.0^{\circ}-50.4^{\circ}=39.6^{\circ}
$$
just shy of the $+y$ axis (the "North" direction). So we can also express the direction by saying it is " $39.6^{\circ}$ East of North".
A vector diagram showing $\mathbf{a}, \mathbf{b}$ and $\mathbf{c}$ is given in Fig. 1.7(a).
(c) If the vector $\mathbf{d}$ is given by $\mathbf{d}=\mathbf{b}-\mathbf{a}$ then the components of $\mathbf{d}$ are given by
$$
\begin{array}{r}
d_{x}=b_{x}-a_{x}=(-2.29 \mathrm{~m})-(5.00 \mathrm{~m})=-7.29 \mathrm{~m} \\
d_{y}=b_{y}-a_{y}=(3.28 \mathrm{~m})-(0.00 \mathrm{~m})=3.28 \mathrm{~m}
\end{array}
$$
The magnitude of $\mathbf{d}$ is
$$
d=\sqrt{d_{x}^{2}+d_{y}^{2}}=\sqrt{(-7.29 \mathrm{~m})^{2}+(3.28 \mathrm{~m})^{2}}=8.00 \mathrm{~m}
$$
(d) If the direction of $\mathbf{d}$, as measured counterclockwise from the $+x$ axis is $\theta$ then
$$
\tan \theta=\frac{d_{y}}{d_{x}}=\frac{3.28 \mathrm{~m}}{-7.29 \mathrm{~m}}=-0.450
$$
Figure 1.8: Vectors for Example 19.
Naively pushing buttons on the calculator gives
$$
\theta=\tan ^{-1}(-0.450)=-24.2^{\circ}
$$
which can't be right because from the signs of its components we know that $\mathbf{d}$ must lie in the second quadrant. We need to add $180^{\circ}$ to get the correct answer for the $\tan ^{-1}$ operation:
$$
\theta=-24.2^{\circ}+180.0^{\circ}=156^{\circ}
$$
But we note that this angle is
$$
180^{\circ}-156^{\circ}=24^{\circ}
$$
shy of the $-y$ axis, so the direction can also be expressed as " $24^{\circ}$ North of West".
A vector diagram showing $\mathbf{a}, \mathbf{b}$ and $\mathbf{d}$ is given in Fig. 1.7(b).
19. The two vectors $\mathrm{a}$ and $\mathrm{b}$ in Fig. 1.8 have equal magnitudes of $10.0 \mathrm{~m}$. Find (a) the $x$ component and (b) the $y$ component of their vector sum $\mathbf{r}$, (c) the magnitude of $\mathrm{r}$ and (d) the angle $\mathrm{r}$ makes with the positive direction of the $x$ axis. [HRW6 3-21]
(a) First, find the $x$ and $y$ components of the vectors a and b. The vector a makes an angle of $30^{\circ}$ with the $+x$ axis, so its components are
$$
\begin{aligned}
& a_{x}=a \cos 30^{\circ}=(10.0 \mathrm{~m}) \cos 30^{\circ}=8.66 \mathrm{~m} \\
& a_{y}=a \sin 30^{\circ}=(10.0 \mathrm{~m}) \sin 30^{\circ}=5.00 \mathrm{~m}
\end{aligned}
$$
The vector $\mathbf{b}$ makes an angle of $135^{\circ}$ with the $+x$ axis $\left(30^{\circ}\right.$ plus $105^{\circ}$ more) so its components are
$$
\begin{aligned}
& b_{x}=b \cos 135^{\circ}=(10.0 \mathrm{~m}) \cos 135^{\circ}=-7.07 \mathrm{~m} \\
& b_{y}=b \sin 135^{\circ}=(10.0 \mathrm{~m}) \sin 135^{\circ}=7.07 \mathrm{~m}
\end{aligned}
$$
Then if $\mathbf{r}=\mathbf{a}+\mathbf{b}$, the $x$ and $y$ components of the vector $\mathbf{r}$ are:
$$
\begin{aligned}
& r_{x}=a_{x}+b_{x}=8.66 \mathrm{~m}-7.07 \mathrm{~m}=1.59 \mathrm{~m} \\
& r_{y}=a_{y}+b_{y}=5.00 \mathrm{~m}+7.07 \mathrm{~m}=12.07 \mathrm{~m}
\end{aligned}
$$
Figure 1.9: Vectors $\mathbf{A}$ and $\mathbf{C}$ as described in Example 20.
So the $x$ component of the sum is $r_{x}=1.59 \mathrm{~m}$, and. .
(b) ... the $y$ component of the sum is $r_{y}=12.07 \mathrm{~m}$.
(c) The magnitude of the vector $\mathbf{r}$ is
$$
r=\sqrt{r_{x}^{2}+r_{y}^{2}}=\sqrt{(1.59 \mathrm{~m})^{2}+(12.07 \mathrm{~m})^{2}}=12.18 \mathrm{~m}
$$
(d) To get the direction of the vector $\mathbf{r}$ expressed as an angle $\theta$ measured from the $+x$ axis, we note:
$$
\tan \theta=\frac{r_{y}}{r_{x}}=7.59
$$
and then take the inverse tangent of 7.59:
$$
\theta=\tan ^{-1}(7.59)=82.5^{\circ}
$$
Since the components of $\mathbf{r}$ are both positive, the vector does lie in the first quadrant so that the inverse tangent operation has (this time) given the correct answer. So the direction of $\mathbf{r}$ is given by $\theta=82.5^{\circ}$.
20. In the sum $\mathbf{A}+\mathbf{B}=\mathbf{C}$, vector $\mathbf{A}$ has a magnitude of $12.0 \mathrm{~m}$ and is angled $40.0^{\circ}$ counterclockwise from the $+x$ direction, and vector $C$ has magnitude of $15.0 \mathrm{~m}$ and is angled $20.0^{\circ}$ counterclockwise from the $-x$ direction. What are (a) the magnitude and (b) the angle (relative to $+x$ ) of B? [HRW6 3-22]
(a) Vectors $\mathbf{A}$ and $\mathbf{C}$ are diagrammed in Fig. 1.9. From these we can get the components of $\mathbf{A}$ and $\mathbf{C}$ (watch the signs on vector $\mathbf{C}$ from the odd way that its angle is given!):
$$
\begin{array}{cl}
A_{x}=(12.0 \mathrm{~m}) \cos \left(40.0^{\circ}\right)=9.19 \mathrm{~m} & A_{y}=(12.0 \mathrm{~m}) \sin \left(40.0^{\circ}\right)=7.71 \mathrm{~m} \\
C_{x}=-(15.0 \mathrm{~m}) \cos \left(20.0^{\circ}\right)=-14.1 \mathrm{~m} & C_{y}=-(15.0 \mathrm{~m}) \sin \left(20.0^{\circ}\right)=-5.13 \mathrm{~m}
\end{array}
$$
(Note, the vectors in this problem have units to go along with their magnitudes, namely $\mathrm{m}$ (meters).) Then from the relation $\mathbf{A}+\mathbf{B}=\mathbf{C}$ it follows that $\mathbf{B}=\mathbf{C}-\mathbf{A}$, and from this we find the components of $\mathbf{B}$ :
$$
B_{x}=C_{x}-A_{x}=-14.1 \mathrm{~m}-9.19 \mathrm{~m}=-23.3 \mathrm{~m}
$$
$$
B_{y}=C_{y}-A_{y}=-5.13 \mathrm{~m}-7.71 \mathrm{~m}=-12.8 \mathrm{~m}
$$
Then we find the magnitude of vector $\mathbf{B}$ :
$$
B=\sqrt{B_{x}^{2}+B_{y}^{2}}=\sqrt{(-23.3)^{2}+(-12.8)^{2}} \mathrm{~m}=26.6 \mathrm{~m}
$$
(b) We find the direction of $\mathbf{B}$ from:
$$
\tan \theta=\left(\frac{B_{y}}{B_{x}}\right)=0.551
$$
If we naively press the "atan" button on our calculators to get $\theta$, we are told:
$$
\theta=\tan ^{-1}(0.551)=28.9^{\circ}
$$
which cannot be correct because from the components of $\mathbf{B}$ (both negative) we know that vector $\mathbf{B}$ lies in the third quadrant. So we need to add $180^{\circ}$ to the naive result to get the correct answer:
$$
\theta=28.9^{\circ}+180.0^{\circ}=208.9^{\circ} .
$$
This is the angle of $\mathbf{B}$, measured counterclockwise from the $+x$ axis.
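The quadrant bookkeeping done by hand above is exactly what the two-argument arctangent does for us. A minimal Python sketch of the same calculation (illustrative, not part of the original solution):

```python
import math

# Components of A and C as read from the problem.
Ax, Ay = 12.0 * math.cos(math.radians(40.0)), 12.0 * math.sin(math.radians(40.0))
Cx, Cy = -15.0 * math.cos(math.radians(20.0)), -15.0 * math.sin(math.radians(20.0))

Bx, By = Cx - Ax, Cy - Ay   # B = C - A, component by component

B = math.hypot(Bx, By)
theta = math.degrees(math.atan2(By, Bx)) % 360.0  # map (-180, 180] onto [0, 360)

print(f"B = {B:.1f} m, theta = {theta:.1f} deg")  # B = 26.6 m, theta = 208.9 deg
```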
21. If $\mathbf{a}-\mathbf{b}=2 \mathbf{c}, \mathbf{a}+\mathbf{b}=4 \mathbf{c}$ and $\mathbf{c}=3 \mathbf{i}+4 \mathbf{j}$, then what are $\mathbf{a}$ and $\mathbf{b}$ ? [HRW5 3-24]
We notice that if we add the first two relations together, the vector $\mathbf{b}$ will cancel:
$$
(\mathbf{a}-\mathbf{b})+(\mathbf{a}+\mathbf{b})=(2 \mathbf{c})+(4 \mathbf{c})
$$
which gives:
$$
2 \mathbf{a}=6 \mathbf{c} \quad \Longrightarrow \quad \mathbf{a}=3 \mathbf{c}
$$
and we can use the last of the given equations to substitute for $\mathbf{c}$; we get
$$
\mathbf{a}=3 \mathbf{c}=3(3 \mathbf{i}+4 \mathbf{j})=9 \mathbf{i}+12 \mathbf{j}
$$
Then we can rearrange the first of the equations to solve for $\mathbf{b}$ :
$$
\begin{aligned}
\mathbf{b} & =\mathbf{a}-2 \mathbf{c}=(9 \mathbf{i}+12 \mathbf{j})-2(3 \mathbf{i}+4 \mathbf{j}) \\
& =(9-6) \mathbf{i}+(12-8) \mathbf{j} \\
& =3 \mathbf{i}+4 \mathbf{j}
\end{aligned}
$$
So we have found:
$$
\mathbf{a}=9 \mathbf{i}+12 \mathbf{j} \quad \text { and } \quad \mathbf{b}=3 \mathbf{i}+4 \mathbf{j}
$$
22. If $\mathbf{A}=(6.0 \mathbf{i}-8.0 \mathbf{j})$ units, $\mathbf{B}=(-8.0 \mathbf{i}+3.0 \mathbf{j})$ units, and $\mathbf{C}=(26.0 \mathbf{i}+19.0 \mathbf{j})$ units, determine $a$ and $b$ so that $a \mathbf{A}+b \mathbf{B}+\mathbf{C}=0$. [Ser4 3-46]
Figure 1.10: Vectors for Example 23
The condition on the vectors given in the problem:
$$
a \mathbf{A}+b \mathbf{B}+\mathbf{C}=0
$$
is a condition on the individual components of the vectors. It implies:
$$
a A_{x}+b B_{x}+C_{x}=0 \quad \text { and } \quad a A_{y}+b B_{y}+C_{y}=0
$$
So that we have the equations:
$$
\begin{aligned}
6.0 a-8.0 b+26.0 & =0 \\
-8.0 a+3.0 b+19.0 & =0
\end{aligned}
$$
We have two equations for two unknowns so we can find $a$ and $b$. There are lots of ways to do this; one could multiply the first equation by 4 and the second equation by 3 to get:
$$
\begin{aligned}
24.0 a-32.0 b+104.0 & =0 \\
-24.0 a+9.0 b+57.0 & =0
\end{aligned}
$$
Adding these gives
$$
-23.0 b+161=0 \quad \Longrightarrow \quad b=\frac{-161.0}{-23.0}=7.0
$$
and then the first of the original equations gives us $a$ :
$$
6.0 a=8.0 b-26.0=8.0(7.0)-26.0=30.0 \quad \Longrightarrow \quad a=\frac{30.0}{6.0}=5.0
$$
and our solution is
$$
a=5.0 \quad b=7.0
$$
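Since this is just a pair of simultaneous linear equations, it can also be handed to a linear-algebra routine. A hedged sketch assuming `numpy` is available (not part of the original solution):

```python
import numpy as np

# The two component equations, rewritten with constants on the right:
#    6.0 a - 8.0 b = -26.0
#   -8.0 a + 3.0 b = -19.0
M = np.array([[ 6.0, -8.0],
              [-8.0,  3.0]])
rhs = np.array([-26.0, -19.0])

a, b = np.linalg.solve(M, rhs)
print(a, b)  # 5.0 7.0
```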
23. Three vectors are oriented as shown in Fig. 1.10, where $|\mathbf{A}|=20.0$ units, $|\mathbf{B}|=40.0$ units, and $|\mathbf{C}|=30.0$ units. Find (a) the $x$ and $y$ components of the resultant vector and (b) the magnitude and direction of the resultant vector. [Ser4 3-47]
(a) Let's first put these vectors into "unit-vector notation":
$$
\begin{aligned}
& \mathbf{A}=20.0 \mathbf{j} \\
& \mathbf{B}=\left(40.0 \cos 45^{\circ}\right) \mathbf{i}+\left(40.0 \sin 45^{\circ}\right) \mathbf{j}=28.3 \mathbf{i}+28.3 \mathbf{j} \\
& \mathbf{C}=\left(30.0 \cos \left(-45^{\circ}\right)\right) \mathbf{i}+\left(30.0 \sin \left(-45^{\circ}\right)\right) \mathbf{j}=21.2 \mathbf{i}-21.2 \mathbf{j}
\end{aligned}
$$
Adding the components together, the resultant (total) vector is:
$$
\begin{aligned}
\text { Resultant } & =\mathbf{A}+\mathbf{B}+\mathbf{C} \\
& =(28.3+21.2) \mathbf{i}+(20.0+28.3-21.2) \mathbf{j} \\
& =49.5 \mathbf{i}+27.1 \mathbf{j}
\end{aligned}
$$
So the $x$ component of the resultant vector is 49.5 and the $y$ component of the resultant is 27.1.
(b) If we call the resultant vector $\mathbf{R}$, then the magnitude of $\mathbf{R}$ is given by
$$
R=\sqrt{R_{x}^{2}+R_{y}^{2}}=\sqrt{(49.5)^{2}+(27.1)^{2}}=56.4
$$
To find its direction (given by $\theta$, measured counterclockwise from the $x$ axis), we find:
$$
\tan \theta=\frac{R_{y}}{R_{x}}=\frac{27.1}{49.5}=0.547
$$
and then taking the inverse tangent gives a possible answer for $\theta$ :
$$
\theta=\tan ^{-1}(0.547)=28.7^{\circ} .
$$
Is this the right answer for $\theta$ ? Since both components of $\mathbf{R}$ are positive, it must lie in the first quadrant and so $\theta$ must be between $0^{\circ}$ and $90^{\circ}$. So the direction of $\mathbf{R}$ is given by $28.7^{\circ}$.
24. A vector $\mathbf{B}$, when added to the vector $\mathbf{C}=3.0 \mathbf{i}+4.0 \mathbf{j}$, yields a resultant vector that is in the positive $y$ direction and has a magnitude equal to that of $\mathbf{C}$. What is the magnitude of $\mathbf{B}$? [HRW5 3-26]
If the vector $\mathbf{B}$ is denoted by $\mathbf{B}=B_{x} \mathbf{i}+B_{y} \mathbf{j}$ then the resultant of $\mathbf{B}$ and $\mathbf{C}$ is
$$
\mathbf{B}+\mathbf{C}=\left(B_{x}+3.0\right) \mathbf{i}+\left(B_{y}+4.0\right) \mathbf{j} .
$$
We are told that the resultant points in the positive $y$ direction, so its $x$ component must be zero. Then:
$$
B_{x}+3.0=0 \quad \Longrightarrow \quad B_{x}=-3.0
$$
Now, the magnitude of $\mathbf{C}$ is
$$
C=\sqrt{C_{x}^{2}+C_{y}^{2}}=\sqrt{(3.0)^{2}+(4.0)^{2}}=5.0
$$
so that if the magnitude of $\mathbf{B}+\mathbf{C}$ is also 5.0 then we get
$$
|\mathbf{B}+\mathbf{C}|=\sqrt{(0)^{2}+\left(B_{y}+4.0\right)^{2}}=5.0 \quad \Longrightarrow \quad\left(B_{y}+4.0\right)^{2}=25.0 .
$$
The last equation gives $\left(B_{y}+4.0\right)= \pm 5.0$ and apparently there are two possible answers
$$
B_{y}=+1.0 \quad \text { and } \quad B_{y}=-9.0
$$
but the second case gives a resultant vector $\mathbf{B}+\mathbf{C}$ which points in the negative $y$ direction so we omit it. Then with $B_{y}=1.0$ we find the magnitude of $\mathbf{B}$ :
$$
B=\sqrt{\left(B_{x}\right)^{2}+\left(B_{y}\right)^{2}}=\sqrt{(-3.0)^{2}+(1.0)^{2}}=3.2
$$
The magnitude of vector $\mathbf{B}$ is 3.2 .
#### Multiplying Vectors
25. Vector $\mathbf{A}$ extends from the origin to a point having polar coordinates $\left(7,70^{\circ}\right)$ and vector $\mathbf{B}$ extends from the origin to a point having polar coordinates $\left(4,130^{\circ}\right)$. Find $\mathbf{A} \cdot \mathbf{B}$. [Ser4 7-13]
We can use Eq. 1.7 to find $\mathbf{A} \cdot \mathbf{B}$. We have the magnitudes of the two vectors (namely $A=7$ and $B=4$ ) and the angle $\phi$ between the two is
$$
\phi=130^{\circ}-70^{\circ}=60^{\circ} .
$$
Then we get:
$$
\mathbf{A} \cdot \mathbf{B}=A B \cos \phi=(7)(4) \cos 60^{\circ}=14
$$
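Both forms of the scalar product, the magnitude-angle form of Eq. 1.7 and the component form, can be checked against each other numerically. A minimal Python sketch (illustrative only):

```python
import math

A_mag, A_ang = 7.0, 70.0   # polar coordinates of A
B_mag, B_ang = 4.0, 130.0  # polar coordinates of B

# Magnitude-angle form: A.B = A B cos(phi).
via_angle = A_mag * B_mag * math.cos(math.radians(B_ang - A_ang))

# Component form: A.B = Ax Bx + Ay By.
Ax, Ay = A_mag * math.cos(math.radians(A_ang)), A_mag * math.sin(math.radians(A_ang))
Bx, By = B_mag * math.cos(math.radians(B_ang)), B_mag * math.sin(math.radians(B_ang))
via_components = Ax * Bx + Ay * By

print(via_angle, via_components)  # 14.0 and 14.0, up to rounding
```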
26. Find the angle between $\mathbf{A}=-5 \mathbf{i}-3 \mathbf{j}+2 \mathbf{k}$ and $\mathbf{B}=-2 \mathbf{j}-2 \mathbf{k}$. [Ser4 7-20]
Eq. 1.7 allows us to find the cosine of the angle between two vectors as long as we know their magnitudes and their dot product. The magnitudes of the vectors $\mathbf{A}$ and $\mathbf{B}$ are:
$$
\begin{aligned}
& A=\sqrt{A_{x}^{2}+A_{y}^{2}+A_{z}^{2}}=\sqrt{(-5)^{2}+(-3)^{2}+(2)^{2}}=6.164 \\
& B=\sqrt{B_{x}^{2}+B_{y}^{2}+B_{z}^{2}}=\sqrt{(0)^{2}+(-2)^{2}+(-2)^{2}}=2.828
\end{aligned}
$$
and their dot product is:
$$
\mathbf{A} \cdot \mathbf{B}=A_{x} B_{x}+A_{y} B_{y}+A_{z} B_{z}=(-5)(0)+(-3)(-2)+(2)(-2)=2
$$
Then from Eq. 1.7, if $\phi$ is the angle between $\mathbf{A}$ and $\mathbf{B}$, we have
$$
\cos \phi=\frac{\mathbf{A} \cdot \mathbf{B}}{A B}=\frac{2}{(6.164)(2.828)}=0.114
$$
which then gives
$$
\phi=83.4^{\circ} .
$$
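The same recipe, dot product divided by the product of magnitudes, scripts directly. A minimal Python sketch (not part of the original solution):

```python
import math

A = (-5.0, -3.0, 2.0)
B = ( 0.0, -2.0, -2.0)

dot = sum(ai * bi for ai, bi in zip(A, B))   # A.B = 2
magA = math.sqrt(sum(ai * ai for ai in A))   # |A| = 6.164
magB = math.sqrt(sum(bi * bi for bi in B))   # |B| = 2.828

phi = math.degrees(math.acos(dot / (magA * magB)))
print(f"{phi:.1f} deg")  # 83.4 deg
```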
27. Two vectors $\mathbf{a}$ and $\mathbf{b}$ have the components, in arbitrary units, $a_{x}=3.2$, $a_{y}=1.6, b_{x}=0.50, b_{y}=4.5$. (a) Find the angle between the directions of $\mathbf{a}$ and $\mathbf{b}$. (b) Find the components of a vector $\mathbf{c}$ that is perpendicular to $\mathbf{a}$, is in the $x y$ plane and has a magnitude of 5.0 units. [HRW5 3-51]
(a) The scalar product has something to do with the angle between two vectors... if the angle between $\mathbf{a}$ and $\mathbf{b}$ is $\phi$ then from Eq. 1.7 we have:
$$
\cos \phi=\frac{\mathbf{a} \cdot \mathbf{b}}{a b} .
$$
We can compute the right-hand-side of this equation since we know the components of a and $\mathbf{b}$. First, find $\mathbf{a} \cdot \mathbf{b}$. Using Eq. 1.8 we find:
$$
\begin{aligned}
\mathbf{a} \cdot \mathbf{b} & =a_{x} b_{x}+a_{y} b_{y} \\
& =(3.2)(0.50)+(1.6)(4.5) \\
& =8.8
\end{aligned}
$$
Now find the magnitudes of $\mathbf{a}$ and $\mathbf{b}$ :
$$
\begin{aligned}
& a=\sqrt{a_{x}^{2}+a_{y}^{2}}=\sqrt{(3.2)^{2}+(1.6)^{2}}=3.6 \\
& b=\sqrt{b_{x}^{2}+b_{y}^{2}}=\sqrt{(0.50)^{2}+(4.5)^{2}}=4.5
\end{aligned}
$$
This gives us:
$$
\cos \phi=\frac{\mathbf{a} \cdot \mathbf{b}}{a b}=\frac{8.8}{(3.6)(4.5)}=0.54
$$
From which we get $\phi$ by:
$$
\phi=\cos ^{-1}(0.54)=57^{\circ}
$$
(b) Let the components of the vector $\mathbf{c}$ be $c_{x}$ and $c_{y}$ (we are told that it lies in the $x y$ plane). If $\mathbf{c}$ is perpendicular to $\mathbf{a}$ then the dot product of the two vectors must give zero. This tells us:
$$
\mathbf{a} \cdot \mathbf{c}=a_{x} c_{x}+a_{y} c_{y}=(3.2) c_{x}+(1.6) c_{y}=0
$$
This equation doesn't allow us to solve for the components of $\mathbf{c}$ but it does give us:
$$
c_{x}=-\frac{1.6}{3.2} c_{y}=-0.50 c_{y}
$$
Since the vector c has magnitude 5.0, we know that
$$
c=\sqrt{c_{x}^{2}+c_{y}^{2}}=5.0
$$
Using the previous equation to substitute for $c_{x}$ gives:
$$
\begin{aligned}
c & =\sqrt{c_{x}^{2}+c_{y}^{2}} \\
& =\sqrt{\left(-0.50 c_{y}\right)^{2}+c_{y}^{2}} \\
& =\sqrt{1.25 c_{y}^{2}}=5.0
\end{aligned}
$$
Squaring the last line gives
$$
1.25 c_{y}^{2}=25 \quad \Longrightarrow \quad c_{y}^{2}=20 \quad \Longrightarrow \quad c_{y}= \pm 4.5
$$
One must be careful... there are two possible solutions for $c_{y}$ here. If $c_{y}=4.5$ then we have
$$
c_{x}=-0.50 c_{y}=(-0.50)(4.5)=-2.2
$$
But if $c_{y}=-4.5$ then we have
$$
c_{x}=-0.50 c_{y}=(-0.50)(-4.5)=2.2
$$
So the two possibilities for the vector $\mathbf{c}$ are
$$
c_{x}=-2.2 \quad c_{y}=4.5
$$
and
$$
c_{x}=2.2 \quad c_{y}=-4.5
$$
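The two-solution structure found above is easy to reproduce numerically; the sign loop below makes the $\pm$ ambiguity explicit. A minimal Python sketch (illustrative, not from the original solution):

```python
import math

ax, ay = 3.2, 1.6   # components of a
c_mag = 5.0         # required magnitude of c

# Perpendicularity a.c = 0 forces cx = -(ay/ax) * cy; the magnitude
# condition then fixes |cy|.  Both signs of cy give valid answers.
ratio = -ay / ax
cy = c_mag / math.sqrt(ratio**2 + 1.0)
for s in (+1.0, -1.0):
    print(f"cx = {s * ratio * cy:.1f}, cy = {s * cy:.1f}")
# cx = -2.2, cy = 4.5   and   cx = 2.2, cy = -4.5
```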
28. Two vectors are given by $\mathbf{A}=-3 \mathbf{i}+4 \mathbf{j}$ and $\mathbf{B}=2 \mathbf{i}+3 \mathbf{j}$. Find (a) $\mathbf{A} \times \mathbf{B}$ and (b) the angle between $\mathbf{A}$ and $\mathbf{B}$. [Ser4 11-7]
(a) Setting up the determinant in Eq. 1.12 (or just using Eq. 1.11 for the cross product) we find:
$$
\mathbf{A} \times \mathbf{B}=\left|\begin{array}{ccc}
\mathbf{i} & \mathbf{j} & \mathbf{k} \\
-3 & 4 & 0 \\
2 & 3 & 0
\end{array}\right|=(0-0) \mathbf{i}+(0-0) \mathbf{j}+((-9)-(8)) \mathbf{k}=-17 \mathbf{k}
$$
(b) To get the angle between $\mathbf{A}$ and $\mathbf{B}$ it is easiest to use the dot product and Eq. 1.7. The magnitudes of $\mathbf{A}$ and $\mathbf{B}$ are:
$$
A=\sqrt{A_{x}^{2}+A_{y}^{2}}=\sqrt{(-3)^{2}+(4)^{2}}=5 \quad B=\sqrt{B_{x}^{2}+B_{y}^{2}}=\sqrt{(2)^{2}+(3)^{2}}=3.61
$$
and the dot product of the two vectors is
$$
\mathbf{A} \cdot \mathbf{B}=A_{x} B_{x}+A_{y} B_{y}+A_{z} B_{z}=(-3)(2)+(4)(3)=6
$$
so then if $\phi$ is the angle between $\mathbf{A}$ and $\mathbf{B}$ we get:
$$
\cos \phi=\frac{\mathbf{A} \cdot \mathbf{B}}{A B}=\frac{6}{(5)(3.61)}=0.333
$$
which gives
$$
\phi=70.6^{\circ}
$$
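For a cross-check, `numpy` supplies both the cross product and the pieces needed for the angle. A hedged sketch (numpy is an assumption here, not part of the original solution):

```python
import numpy as np

A = np.array([-3.0, 4.0, 0.0])
B = np.array([ 2.0, 3.0, 0.0])

print(np.cross(A, B))  # [  0.   0. -17.]  i.e. -17 k, as found above

cos_phi = np.dot(A, B) / (np.linalg.norm(A) * np.linalg.norm(B))
print(np.degrees(np.arccos(cos_phi)))  # ~70.6 degrees
```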
29. Prove that two vectors must have equal magnitudes if their sum is perpendicular to their difference. [HRW6 3-23]
Suppose the condition stated in this problem holds for the two vectors $\mathbf{a}$ and $\mathbf{b}$. If the sum $\mathbf{a}+\mathbf{b}$ is perpendicular to the difference $\mathbf{a}-\mathbf{b}$ then the dot product of these two vectors is zero:
$$
(\mathbf{a}+\mathbf{b}) \cdot(\mathbf{a}-\mathbf{b})=0
$$
Use the distributive property of the dot product to expand the left side of this equation. We get:
$$
\mathbf{a} \cdot \mathbf{a}-\mathbf{a} \cdot \mathbf{b}+\mathbf{b} \cdot \mathbf{a}-\mathbf{b} \cdot \mathbf{b}=0
$$
But the dot product of a vector with itself gives the magnitude squared:
$$
\mathbf{a} \cdot \mathbf{a}=a_{x}^{2}+a_{y}^{2}+a_{z}^{2}=a^{2}
$$
(likewise $\mathbf{b} \cdot \mathbf{b}=b^{2}$ ) and the dot product is commutative: $\mathbf{a} \cdot \mathbf{b}=\mathbf{b} \cdot \mathbf{a}$. Using these facts, we then have
$$
a^{2}-\mathbf{a} \cdot \mathbf{b}+\mathbf{a} \cdot \mathbf{b}-b^{2}=0
$$
which gives:
$$
a^{2}-b^{2}=0 \quad \Longrightarrow \quad a^{2}=b^{2}
$$
Since the magnitude of a vector must be a positive number, this implies $a=b$ and so vectors $\mathbf{a}$ and $\mathbf{b}$ have the same magnitude.
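The algebra in this proof can also be verified symbolically. A short sketch using `sympy` (an assumption; any computer-algebra system would do):

```python
import sympy as sp

ax, ay, az, bx, by, bz = sp.symbols('a_x a_y a_z b_x b_y b_z', real=True)
a = sp.Matrix([ax, ay, az])
b = sp.Matrix([bx, by, bz])

# (a + b) . (a - b) expands to |a|^2 - |b|^2, so it vanishes iff |a| = |b|.
print(sp.expand((a + b).dot(a - b)))
# a_x**2 + a_y**2 + a_z**2 - b_x**2 - b_y**2 - b_z**2
```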
30. For the following three vectors, what is $3 \mathbf{C} \cdot(2 \mathbf{A} \times \mathbf{B})$?
$$
\begin{gathered}
\mathbf{A}=2.00 \mathbf{i}+3.00 \mathbf{j}-4.00 \mathbf{k} \\
\mathbf{B}=-3.00 \mathbf{i}+4.00 \mathbf{j}+2.00 \mathbf{k} \quad \mathbf{C}=7.00 \mathbf{i}-8.00 \mathbf{j}
\end{gathered}
$$
Actually, from the properties of scalar multiplication we can combine the factors in the desired vector product to give:
$$
3 \mathbf{C} \cdot(2 \mathbf{A} \times \mathbf{B})=6 \mathbf{C} \cdot(\mathbf{A} \times \mathbf{B}) .
$$
Evaluate $\mathbf{A} \times \mathbf{B}$ first:
$$
\begin{gathered}
\mathbf{A} \times \mathbf{B}=\left|\begin{array}{ccc}
\mathbf{i} & \mathbf{j} & \mathbf{k} \\
2.0 & 3.0 & -4.0 \\
-3.0 & 4.0 & 2.0
\end{array}\right|=(6.0+16.0) \mathbf{i}+(12.0-4.0) \mathbf{j}+(8.0+9.0) \mathbf{k} \\
=22.0 \mathbf{i}+8.0 \mathbf{j}+17.0 \mathbf{k}
\end{gathered}
$$
Then:
$$
\mathbf{C} \cdot(\mathbf{A} \times \mathbf{B})=(7.0)(22.0)-(8.0)(8.0)+(0.0)(17.0)=90
$$
So the answer we want is:
$$
6 \mathbf{C} \cdot(\mathbf{A} \times \mathbf{B})=(6)(90.0)=540
$$
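The scalar triple product also equals a $3 \times 3$ determinant with the three vectors as rows, which gives an independent numerical check. A minimal Python sketch (numpy assumed):

```python
import numpy as np

A = np.array([ 2.0,  3.0, -4.0])
B = np.array([-3.0,  4.0,  2.0])
C = np.array([ 7.0, -8.0,  0.0])

# 6 C . (A x B), computed two equivalent ways.
print(6.0 * np.dot(C, np.cross(A, B)))           # 540.0
print(6.0 * np.linalg.det(np.array([C, A, B])))  # 540.0 (up to rounding)
```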
31. A student claims to have found a vector A such that
$$
(2 \mathbf{i}-3 \mathbf{j}+4 \mathbf{k}) \times \mathbf{A}=(4 \mathbf{i}+3 \mathbf{j}-\mathbf{k}) .
$$
Do you believe this claim? Explain. [Ser4 11-8]
Frankly, I've been in this teaching business so long and I've grown so cynical that I don't believe anything any student claims anymore, and this case is no exception. But enough about me; let's see if we can provide a mathematical answer.
We might try to work out a solution for A, but let's think about some of the basic properties of the cross product. We know that the cross product of two vectors must be perpendicular to each of the "multiplied" vectors. So if the student is telling the truth, it must be true that $(4 \mathbf{i}+3 \mathbf{j}-\mathbf{k})$ is perpendicular to $(2 \mathbf{i}-3 \mathbf{j}+4 \mathbf{k})$. Is it?
We can test this by taking the dot product of the two vectors:
$$
(4 \mathbf{i}+3 \mathbf{j}-\mathbf{k}) \cdot(2 \mathbf{i}-3 \mathbf{j}+4 \mathbf{k})=(4)(2)+(3)(-3)+(-1)(4)=-5 .
$$
The dot product does not give zero as it must if the two vectors are perpendicular. So we have a contradiction. There can't be any vector $\mathbf{A}$ for which the relation is true.
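The perpendicularity test used above takes one line to automate. A minimal Python sketch (numpy assumed; the vectors are from the problem):

```python
import numpy as np

factor  = np.array([2.0, -3.0, 4.0])   # the vector being crossed with A
claimed = np.array([4.0, 3.0, -1.0])   # the student's claimed cross product

# A genuine cross product is perpendicular to both factors, so this dot
# product would have to vanish; -5 != 0 exposes the bogus claim.
print(np.dot(factor, claimed))  # -5.0
```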
| Textbooks |
Generation of high energy laser-driven electron and proton sources with the 200 TW system VEGA 2 at the Centro de Laseres Pulsados
L. Volpe, R. Fedosejevs, G. Gatti, J. A. Pérez-Hernández, C. Méndez, J. Apiñaniz, X. Vaisseau, C. Salgado, M. Huault, S. Malko, G. Zeraouli, V. Ospina, A. Longman, D. De Luis, K. Li, O. Varela, E. García, I. Hernández, J. D. Pisonero, J. García Ajates, J. M. Alvarez, C. García, M. Rico, D. Arana, J. Hernández-Toro, L. Roso
Journal: High Power Laser Science and Engineering / Volume 7 / 2019
Published online by Cambridge University Press: 26 April 2019, e25
Print publication: 2019
The Centro de Laseres Pulsados in Salamanca, Spain has recently entered its operation phase, and the first user access period on the 6 J 30 fs 200 TW system (VEGA 2) started at the beginning of 2018. In this paper we report on two commissioning experiments recently performed on the VEGA 2 system in preparation for the user campaign. The VEGA 2 system has been tested in different configurations depending on the focusing optics and targets used. One configuration (long focal length $F=130$ cm) is for underdense laser–matter interaction, where VEGA 2 is focused onto a low-density gas jet, generating electron beams (via the laser wakefield acceleration mechanism) with maximum energy up to 500 MeV and an X-ray betatron source with a 10 keV critical energy. A second configuration (short focal length $F=40$ cm) is for overdense laser–matter interaction, where VEGA 2 is focused onto a $5~\mu\text{m}$ thick Al target, generating a proton beam with a maximum energy of 10 MeV and a temperature of 2.5 MeV. In this paper we present preliminary experimental results.
Time evolution of stimulated Raman scattering and two-plasmon decay at laser intensities relevant for shock ignition in a hot plasma
G. Cristoforetti, L. Antonelli, D. Mancelli, S. Atzeni, F. Baffigi, F. Barbato, D. Batani, G. Boutoux, F. D'Amato, J. Dostal, R. Dudzak, E. Filippov, Y. J. Gu, L. Juha, O. Klimo, M. Krus, S. Malko, A. S. Martynenko, Ph. Nicolai, V. Ospina, S. Pikuz, O. Renner, J. Santos, V. T. Tikhonchuk, J. Trela, S. Viciani, L. Volpe, S. Weber, L. A. Gizzi
Published online by Cambridge University Press: 15 August 2019, e51
Laser–plasma interaction (LPI) at intensities of $10^{15}{-}10^{16}~\text{W}\cdot \text{cm}^{-2}$ is dominated by parametric instabilities, which can be responsible for a significant amount of non-collisional absorption and generate large fluxes of high-energy nonthermal electrons. Such a regime is of paramount importance for inertial confinement fusion (ICF) and in particular for the shock ignition scheme. In this paper we report on an experiment carried out at the Prague Asterix Laser System (PALS) facility to investigate the extent and time history of stimulated Raman scattering (SRS) and two-plasmon decay (TPD) instabilities, driven by the interaction of an infrared laser pulse at an intensity of $\sim 1.2\times 10^{16}~\text{W}\cdot \text{cm}^{-2}$ with a $\sim 100~\mu\text{m}$ scale-length plasma produced by irradiation of a flat plastic target. The laser pulse duration (300 ps) and the high value of the plasma temperature ($\sim 4~\text{keV}$) expected from hydrodynamic simulations make these results interesting for a deeper understanding of LPI in shock ignition conditions. Experimental results show that absolute TPD/SRS, driven at a quarter of the critical density, and convective SRS, driven at lower plasma densities, are well separated in time, with absolute instabilities driven at early times of the interaction and convective backward SRS emerging at the laser peak and persisting over the tail of the pulse. Side-scattering SRS, driven at low plasma densities, is also clearly observed. Experimental results are compared to fully kinetic, large-scale, two-dimensional simulations. Particle-in-cell results, beyond reproducing the framework delineated by the experimental measurements, reveal the importance of the filamentation instability in ruling the onset of SRS and stimulated Brillouin scattering instabilities and confirm the crucial role of collisionless absorption in the LPI energy balance.
Femtosecond to Nanosecond Characterization of Optical Limiting Mechanisms in Power Limiting Liquids and Solids
A. Malko, S. Xu, H-L. Wang, R. Kohlman, L. Smilowitz, V. Klimov, D. W. McBranch, J.-L. Nogues, W. Moreshead, D. Hagan, S. Yang, E. Van Stryland
Journal: MRS Online Proceedings Library Archive / Volume 597 / 1999
Published online by Cambridge University Press: 10 February 2011, 437
We present our recent advances toward the development of high-performance solid-state optical limiting devices using reverse saturable absorption (RSA) dyes doped into optical host materials. Femtosecond transient absorption spectroscopy was employed to determine both the spectral regions of strong RSA, and the singlet-triplet excited-state dynamics. The optical limiting in the visible spectrum in both metallo-phthalocyanines and metallo-porphyrins is due to a combination of singlet and triplet RSA. Optical limiting performance was studied for RSA dyes in dual tandem limiters (both in solution and solid-state). Our best results in the solid-state yielded an attenuation of 400x, and a damage threshold of up to several mJ at f/5 focusing. The optical limiting at f/5 is further enhanced, particularly in the solid-state, by self-defocusing thermal nonlinearities. | CommonCrawl |
\begin{document}
\begin{frontmatter}
\title{Functional central limit theorem for heavy tailed stationary infinitely divisible processes generated by conservative flows\thanksref{T1}} \runtitle{Functional central limit theorem}
\begin{aug} \author[A]{\fnms{Takashi} \snm{Owada}\ead[label=e1]{[email protected]}} \and \author[B]{\fnms{Gennady} \snm{Samorodnitsky}\corref{}\ead[label=e2]{[email protected]}} \runauthor{T. Owada and G. Samorodnitsky} \affiliation{Cornell University} \address[A]{School of Operations Research\\ \quad and Information Engineering\\ Cornell University\\ Ithaca, New York 14853\\ USA\\ \printead{e1}} \address[B]{School of Operations Research\\ \quad and Information Engineering\\ Department of Statistical Science\\ Cornell University \\ Ithaca, New York 14853\\ USA\\ \printead{e2}} \end{aug}
\thankstext{T1}{Supported in part by ARO Grants W911NF-07-1-0078 and W911NF-12-10385, NSF Grant DMS-10-05903 and NSA Grant H98230-11-1-0154 at Cornell University.}
\received{\smonth{9} \syear{2012}} \revised{\smonth{10} \syear{2013}}
\begin{abstract} We establish a new class of functional central limit theorems for partial sum of certain symmetric stationary infinitely divisible processes with regularly varying L\'evy measures. The limit process is a new class of symmetric stable self-similar processes with stationary increments that coincides on a part of its parameter space with a previously described process. The normalizing sequence and the limiting process are determined by the ergodic-theoretical properties of the flow underlying the integral representation of the process. These properties can be interpreted as determining how long the memory of the stationary infinitely divisible process is. We also establish functional convergence, in a strong distributional sense, for conservative pointwise dual ergodic maps preserving an infinite measure. \end{abstract}
\begin{keyword}[class=AMS] \kwd[Primary ]{60F17} \kwd{60G18} \kwd[; secondary ]{37A40} \kwd{60G52} \end{keyword}
\begin{keyword} \kwd{Infinitely divisible process} \kwd{conservative flow} \kwd{central limit theorem} \kwd{self-similar process} \kwd{pointwise dual ergodicity} \kwd{Darling--Kac theorem} \end{keyword}
\end{frontmatter}
\section{Introduction} \label{secintro} Let $\mathbf{X}=(X_1,X_2,\ldots)$ be a discrete time stationary\break stochastic process. A (functional) central limit theorem for such a process is a statement of the type
\begin{equation} \label{eFCLT} \Biggl( \frac{1}{c_n}\sum_{k=1}^{\lceil nt\rceil} X_k -h_nt, 0\leq t\leq1 \Biggr) \Rightarrow \bigl( Y(t), 0\leq t\leq1 \bigr). \end{equation}
Here, $(c_n)$ is a positive sequence growing to infinity, $(h_n)$ a real sequence, and $ ( Y(t), 0\leq t\leq1 )$ is a nondegenerate (i.e., nondeterministic) process. Convergence in (\ref{eFCLT}) is at least in finite-dimensional distributions, but preferably it is a weak convergence in the space $D[0,1]$ equipped with an appropriate topology. Not every stochastic process satisfies a central limit theorem, and for those that do, it is well known that both the rate of growth of the scaling constant $c_n$ and the nature of the limiting process $\mathbf{Y}= ( Y(t), 0\leq t\leq 1 )$ are determined both by the marginal tails of the stationary process $\mathbf{X}$ and its dependence structure. The limiting process (under very minor assumptions) is necessarily self-similar with stationary increments; this is known as the Lamperti theorem; see \citet{lamperti1962}.
If, say, $X_1$ has a finite second moment, and $\mathbf{X}$ is an i.i.d. sequence then, clearly, one can choose $c_n=n^{1/2}$, and then $\mathbf{Y}$ is a Brownian motion. With equally light marginal tails, if the memory is sufficiently short, then one expects the situation to remain, basically, the same, and this turns out to be the case. When the variance is finite, the basic tool to measure dependence is, obviously, the correlations, which have to decay fast enough. It is well known, however, that a fast decay of correlations is alone not sufficient for this purpose, and, in general, certain strong mixing conditions have to be assumed. See, for example, \citet{rosenblatt1956} and, more recently, \citet{merlevedepeligradutev2006}. If the memory is not sufficiently short, then both the rate of growth of $c_n$ can be different from $n^{1/2}$, and the limiting process can be different from the Brownian motion. In fact, the limiting process may fail to be Gaussian at all; see, for example, \citet{dobrushinmajor1979} and \citet{taqqu1979}.
If the marginal tails of the process are heavy, which in this case means that $X_1$ is in the domain of attraction of an $\alpha$-stable law, $0<\alpha<2$, and $\mathbf{X}$ is an i.i.d. sequence then clearly one can choose $c_n$ to be the inverse of the marginal tail (this makes $c_n$ vary regularly with exponent $1/\alpha$), and then $\mathbf{Y}$ is an $\alpha$-stable L\'evy motion. Again, one expects the situation to remain similar if the memory is sufficiently short. Since correlations do not exist under heavy tails, statements of this type have been established for special models, often for moving average models; see, for example, \citet{davisresnick1985}, \citet{avramtaqqu1992} and \citet{paulauskassurgailis2008}. Once again, as the memory gets longer, then both the rate of growth of $c_n$ can be different from that obtained by inverting the marginal tail, and the limiting process will no longer have independent increments (i.e., be an $\alpha$-stable L\'evy motion). It is here, however, that the picture gets more interesting than in the case of light tails. First of all, in absence of correlations there is no canonical way of measuring how much longer the memory gets. Even more importantly, certain types of memory turn out to result in the limiting process $\mathbf{Y}$ being a self-similar $\alpha$-stable process with stationary increments of a canonical form, the so-called linear fractional stable motion; see, for example, \citet{maejima1983B} for an example of such a situation, and \citet{samorodnitskytaqqu1994} for information on self-similar processes. However, when the memory gets even longer, linear fractional stable motions disappear as well, and even more ``unusual'' limiting processes $\mathbf{Y}$ may appear. This phenomenon may qualify as change from short to long memory; see \citet{samorodnitsky2006LRD}.
In this paper, we consider a functional central limit theorem for a class of heavy tailed stationary processes exhibiting long memory in this sense. It is particularly interesting both because of the manner in which memory in the process is measured, and because the limiting process $\mathbf{Y}$ happens to be an extension of a very recently discovered self-similar stable process with stationary increments. Specifically, we will assume that $\mathbf{X}$ is a stationary infinitely divisible process (satisfying certain assumptions, described in detail in Section~\ref{secprelim}). That is, all finite-dimensional distributions of $\mathbf{X}$ are infinitely divisible; we refer the reader to \citet{rajputrosinski1989} for more information on infinitely divisible processes and on the integral representations we will work with in the sequel.
The class of central limit theorems we consider involves a significant interaction of probabilistic and ergodic-theoretical ideas and tools. To make the discussion more transparent, we will only consider symmetric infinitely divisible processes without a Gaussian component (but there is no doubt that results of this type will hold in a greater generality as well). The law of such a process is determined by its (function level) L\'evy measure. This is a (uniquely determined) symmetric measure $\kappa$ on $\mathbb{R}^\mathbb{N}$ satisfying
\[ \kappa \bigl( {\mathbf x}=(x_1,x_2,\ldots)\in \mathbb{R}^\mathbb{N}\dvtx x_j=0\mbox{ for all }j\in\mathbb{N} \bigr)=0 \]
and
\[ \int_{\mathbb{R}^\mathbb{N}}\min\bigl(1,x_j^2\bigr) \kappa(d{\mathbf x})<\infty\qquad \mbox{for each }j\in\mathbb{N}, \]
such that for each finite subset $\{ j_1,\ldots, j_k\}$ of $\mathbb {N}$, the $k$-dimensional L\'evy measure of the infinitely divisible random vector $ ( X_{j_1}, \ldots, X_{j_k} )$ is given by the projection of $\kappa $ on the appropriate coordinates of ${\mathbf x}$; see \citet{maruyama1970}.
Because of the stationarity of the process $\mathbf{X}$, its L\'evy measure $\mu$ is invariant under the left shift $\theta$ on $\mathbb {R}^\mathbb{N}$,
\[ \theta(x_1,x_2,x_3,\ldots) = (x_2,x_3,\ldots). \]
It has been noticed in the last several years that the ergodic-theoretical properties of the shift operator with respect to the L\'evy measure have a profound effect on the memory of the stationary process $\mathbf{X}$. The L\'evy measure of the process is often described via an integral representation of the process, and in some cases the shift operator with respect to the L\'evy measure can be related to an operator acting on the space on which the integrals are taken. Thus, \citet{rosinskisamorodnitsky1996} and \citet{samorodnitsky2005a} dealt with the ergodicity and mixing of stationary stable processes, while \citet{roy2008} dealt with general stationary infinitely divisible processes. The effect of the ergodic-theoretical properties of the shift operator with respect to the L\'evy measure on the partial maxima of stationary stable processes was discussed in \citet{samorodnitsky2004a}.
In the present paper, we consider stationary symmetric infinitely divisible processes without a Gaussian component given via an integral representation described in Section~\ref{secprelim}. This representation naturally includes a measure-preserving operator on a measurable space, and we relate its ergodic-theoretical properties to the kind of central limit theorem the process satisfies. We consider the so-called conservative operators, which turn out to lead to nonstandard limit theorems of a type that, to the best of our knowledge, has not been observed before.
We describe our setup in Section~\ref{secprelim}. In Section~\ref{seclimprocess}, we introduce the limiting symmetric $\alpha$-stable (henceforth, $\mathrm{S}\alpha\mathrm{S}$) self-similar process with stationary increments and discuss its properties. In Section~\ref{secergodic}, we present the ergodic-theoretical notions that we use in the paper. The exact assumptions in the central limit theorem are stated in Section~\ref{secCLT}. In this section, we also present the statement of the theorem and several examples. The proof of the theorem uses several distributional ergodic-theoretical results we present and prove in Section~\ref{secdistrresults}. These results may be of independent interest in ergodic theory. Finally, the proof of the central limit theorem is completed in Section~\ref{secproof}.
\section{The setup} \label{secprelim} We consider infinitely divisible processes of the form
\begin{equation} \label{etheprocess} X_n = \int_E f_n(x) \,dM(x), \qquad n=1,2,\ldots, \end{equation}
where $M$ is an infinitely divisible random measure on a measurable space $(E,\mathcal{E})$, and the functions $f_n, n=1,2,\ldots$ are deterministic functions of the form
\begin{equation} \label{ethekernel} f_n(x) = f\circ T^n(x) = f \bigl( T^nx \bigr),\qquad x\in E, n=1,2,\ldots, \end{equation}
where $f\dvtx E \to\mathbb{R}$ is a measurable function, and $T\dvtx E \to E$ a measurable map. The (independently scattered) infinitely divisible random measure $M$ is assumed to be a homogeneous symmetric infinitely divisible random measure without a Gaussian component, with control measure $\mu$ and local L\'evy measure $\rho$. That is, $\mu$ is a \mbox{$\sigma$-}finite measure on $E$, which we will assume to be infinite. Further, $\rho$ is a symmetric L\'evy measure on $\mathbb{R}$, and for every $A \in \mathcal{E}$ with $\mu(A) < \infty$, $M(A)$ is a (symmetric) infinitely divisible random variable such that
\begin{equation} \label{eqchfIDRM} E e^{iu M(A)} = \exp \biggl\{ -\mu(A) \int _{\mathbb{R}} \bigl(1-\cos (ux) \bigr) \rho(dx) \biggr\}, \qquad u \in \mathbb{R}. \end{equation}
It is clear that, in order for the process $\mathbf{X}$ to be well defined, the functions $f_n, n=1,2,\ldots$ have to satisfy certain integrability assumptions; the assumptions we will impose below will be sufficient for that. Once the process $\mathbf{X}$ is well defined, it is, automatically, symmetric and infinitely divisible, without a Gaussian component, with the function level L\'evy measure given by
\begin{equation} \label{emeasureprocess} \kappa= (\rho\times\mu) \circ K^{-1} \end{equation}
with $K\dvtx \mathbb{R}\times E \to\mathbb{R}^\mathbb{N}$ given by $K(x,s) = x ( f_1(s),f_2(s),\ldots )$, $s\in E, x\in\mathbb{R}$. For details, see \citet{rajputrosinski1989}.
We will assume that the measurable map $T$ preserves the control measure $\mu$. It follows immediately from (\ref{emeasureprocess}) and the form of the functions $(f_n)$ given in (\ref{ethekernel}) that the L\'evy measure $\kappa$ is invariant under the left shift $\theta$, and hence, the process $\mathbf{X}$ is stationary. We intend to relate the ergodic-theoretical properties of the map $T$ to the dependence properties of the process $\mathbf{X}$, and subsequently, to the kind of central limit theorem the process satisfies. We refer the reader to \citet{aaronson1997} for more details on the ergodic-theoretical notions used in the sequel. A~short review of what we need will be given in Section~\ref{secergodic} below.
Our basic assumption is that the map $T$ is conservative. This property has already been observed to be related to long memory in the process $\mathbf{X}$; see, for example, \citet{samorodnitsky2004a} and \citet{roy2008}. We will quantify the resulting length of memory by assuming further that the map $T$ is ergodic and pointwise dual ergodic, with a regularly varying normalizing sequence. We will see that the exponent of regular variation plays a major role in the central limit theorem.
The second major ``player'' in the central limit theorem is the heaviness of the marginal tail of the process $\mathbf{X}$. We will assume that the local L\'evy measure $\rho$ has a regularly varying tail with index $-\alpha$, $0 < \alpha< 2$, that is,
\begin{equation} \label{eregvarlevy} \rho(\cdot,\infty) \in RV_{-\alpha}\qquad\mbox{at infinity.} \end{equation}
With a proper integrability assumption on the function $f$ in (\ref{ethekernel}), the process $\mathbf{X}$ has regularly varying marginal (and even finite-dimensional) distributions, with the same tail exponent $-\alpha$; see \citet{rosinskisamorodnitsky1993}. That is, all the finite-dimensional distributions of the process are in the domain of attraction of a $\mathrm{S}\alpha\mathrm{S}$ law.
This leads to a rather satisfying picture, in which the kind of the central limit theorem that holds for the process $\mathbf{X}$ depends both on the marginal tails of the process and on the length of memory in it, and both are clearly parameterized.
In fact, in order to obtain the central limit theorem for the process $\mathbf{X}$, we will need to impose more specific assumptions on the map $T$. We will also, clearly, need specific integrability assumptions on the kernel in the integral representation of the process. These assumptions are presented in Section~\ref{secCLT}.
We proceed, first, with a description of the limiting process we will eventually obtain.
\section{The limiting process} \label{seclimprocess}
In this section, we will introduce a class of self-similar $\mathrm{S}\alpha\mathrm{S}$ processes with stationary increments. These processes will later appear as weak limits in the central limit theorem. We will see this process is an extension (to a wider range of parameters) of a class recently introduced by \citet{dombryguillotin-plantard2009}. Before introducing this process, we need to do some preliminary work.
For $0<\beta<1$, let $ ( S_{\beta}(t), t\geq0 )$ be a \mbox{$\beta$-}stable subordinator, that is, a L\'evy process with increasing sample paths, satisfying $Ee^{-\theta S_{\beta}(t)} = \exp\{ -t \theta^{\beta} \}$ for \mbox{$\theta\geq0$ and $t\geq0$}; see, for example, Chapter III of \citet{bertoin1996}. Define its inverse process by
\begin{equation} \label{eMLprocess} M_{\beta}(t) = S_{\beta}^{\leftarrow}(t) = \inf \bigl\{u\geq0\dvtx S_{\beta}(u) \geq t \bigr\},\qquad t\geq0. \end{equation}
Recall that the marginal distributions of the process $ ( M_{\beta}(t), t\geq0 )$ are the Mittag--Leffler distributions, with the Laplace transform
\begin{equation} \label{MTtransform} E \exp\bigl\{ \theta M_{\beta}(t) \bigr\} = \sum _{n=0}^{\infty} \frac{ (\theta t^{\beta})^n}{\Gamma(1+n \beta)}, \qquad\theta\in \mathbb{R}; \end{equation}
see Proposition 1(a) in \citet{bingham1971}. We will call this process \textit{the Mittag--Leffler process}. This process has a continuous and nondecreasing version; we will always assume that we are working with such a version. It follows from (\ref{MTtransform}) (or simply from the definition) that the Mittag--Leffler process is self-similar with exponent $\beta$. Further, all of its moments are finite. Recall, however, that this process has neither stationary nor independent increments; see, for example, \citet{meerschaertscheffler2004}.
We are now ready to introduce the new class of self-similar $\mathrm{S}\alpha\mathrm{S}$ processes with stationary increments announced at the beginning of this section. Let $0<\alpha<2$ and $0<\beta<1$, and let $(\Omega^{\prime},\mathcal{F}^{\prime},P^{\prime})$ be a probability space. We define
\begin{equation} \label{eqMLSM} Y_{\alpha,\beta}(t) = \int_{\Omega^{\prime} \times[0,\infty)} M_{\beta} \bigl((t-x)_+,\omega^{\prime} \bigr) \,d Z_{\alpha,\beta} \bigl(\omega^{\prime},x\bigr), \qquad t \geq0, \end{equation}
where $Z_{\alpha,\beta}$ is a $\mathrm{S}\alpha\mathrm{S}$ random measure on $\Omega ^{\prime} \times[0,\infty)$ with control measure $P^{\prime} \times\nu$, with $\nu$ a measure on $[0,\infty)$ given by $\nu(dx) = (1-\beta) x^{-\beta} \,dx, x>0$. Here, $M_\beta$~is a Mittag--Leffler process defined on $(\Omega^{\prime},\mathcal{F}^{\prime},P^{\prime})$. The random measure $Z_{\alpha,\beta}$ itself, and hence, also the process $Y_{\alpha,\beta}$, are defined on some generic probability space $(\Omega,\mathcal{F},P )$. We refer the reader to \citet{samorodnitskytaqqu1994} for more information on integrals with respect to stable random measures.
In Theorem~\ref{tMTFSNbasic} below, we prove that the process $ ( Y_{\alpha,\beta}(t), t\geq0 )$ is a well-defined self-similar $\mathrm{S}\alpha\mathrm{S}$ processes with stationary increments. We call it \textit{the \mbox{\mbox{$\beta$-}}Mittag--Leffler} (or \mbox{$\beta$-}ML) \textit{fractional $\mathrm{S}\alpha\mathrm{S}$ motion}.
\begin{theorem} \label{tMTFSNbasic} The \mbox{$\beta$-}ML fractional $\mathrm{S}\alpha\mathrm{S}$ motion is a well-defined self-similar $\mathrm{S}\alpha\mathrm{S}$ processes with stationary increments. It is also self-similar with exponent of self-similarity $H=\beta+ (1-\beta)/\alpha$. \end{theorem}
\begin{pf} By the monotonicity of the process $M_{\beta}$ we have, for any $t\geq 0$,
\[ \int_{[0,\infty)} \int_{\Omega^{\prime}} M_{\beta} \bigl((t-x)_+,\omega^{\prime}\bigr)^{\alpha} P^{\prime}\bigl(d\omega^{\prime}\bigr) \nu(dx)\leq t^{1-\beta} E^{\prime} M_{\beta}(t)^{\alpha} < \infty, \]
which proves that the process $ ( Y_{\alpha,\beta}(t), t\geq 0 )$ is well defined. Further, by the \mbox{$\beta$-}self-similarity of the process $M_{\beta}$, we have for any $k\geq1$, $t_1, \ldots, t_k \geq0$, and $c>0$, for all real $\theta_1, \ldots, \theta_k $,
\begin{eqnarray*} && E \exp \Biggl\{ i \sum_{j=1}^k \theta_j Y_{\alpha,\beta}(ct_j) \Biggr\} \\ &&\qquad = \exp \Biggl
\{ - \int_0^{\infty} E^{\prime} \Biggl|\sum _{j=1}^k \theta_j M_{\beta}
\bigl((ct_j-x)_+\bigr) \Biggr|^{\alpha} (1-\beta)x^{-\beta} \,dx \Biggr\} \\ &&\qquad = \exp \Biggl\{ - \int_0^{\infty} E^{\prime}
\Biggl|\sum_{j=1}^k \theta_j c^H M_{\beta}\bigl((t_j-y)_+\bigr)
\Biggr|^{\alpha} (1-\beta)y^{-\beta} \,dy \Biggr\} \\ &&\qquad = E \exp \Biggl\{ i \sum _{j=1}^k \theta_j c^H Y_{\alpha,\beta}(t_j) \Biggr\}, \end{eqnarray*}
which shows the $H$-self-similarity of the \mbox{$\beta$-}ML fractional $\mathrm{S}\alpha\mathrm{S}$ motion.
For the proof of stationary increment property, it suffices to check that
\[ E \exp \Biggl\{ i \sum_{j=1}^k \theta_j \bigl(Y_{\alpha,\beta }(t_j+s) - Y_{\alpha,\beta}(s) \bigr) \Biggr\} = E \exp \Biggl\{ i \sum _{j=1}^k \theta_j Y_{\alpha,\beta}(t_j) \Biggr\} \]
for all $k\geq1$, $t_1, \ldots, t_k \geq0$, $s\geq0$, and $\theta_1, \ldots,\theta_k \in\mathbb{R}$. This is equivalent to verifying the equality in
\begin{eqnarray*}
&& \int_0^{\infty} E^{\prime} \Biggl|\sum _{j=1}^k \theta_j \bigl\{ M_{\beta}\bigl((t_j+s-x)_+\bigr) - M_{\beta}
\bigl((s-x)_+\bigr)\bigr\} \Biggr|^{\alpha }x^{-\beta} \,dx \\
&&\qquad =\int_0^{\infty} E^{\prime} \Biggl|\sum _{j=1}^k \theta_j M_{\beta}
\bigl((t_j-x)_+\bigr) \Biggr|^{\alpha} x^{-\beta} \,dx. \end{eqnarray*}
Changing variable by $r=s-x$ in the left-hand side and rearranging the terms shows that we need to check the equality in
\begin{eqnarray}\label{esicheck1}
&& \int_{0}^s E^{\prime} \Biggl| \sum_{j=1}^k \theta_j \bigl(M_{\beta }(t_j+r) - M_{\beta}(r)\bigr)
\Biggr|^{\alpha} (s-r)^{-\beta} \,dr \nonumber\\[-8pt]\\[-8pt]
&&\qquad = \int_0^{\infty} E^{\prime} \Biggl|\sum _{j=1}^k \theta_j M_{\beta}
\bigl((t_j-x)_+\bigr) \Biggr|^{\alpha} \bigl(x^{-\beta} - (s+x)^{-\beta}\bigr) \,dx.\nonumber \end{eqnarray}
Let $\delta_r = S_\beta ( M_\beta(r) )-r$ be the overshoot of the level $r>0$ by the \mbox{$\beta$-}stable subordinator $ ( S_{\beta}(t), t\geq0 )$ related to $ ( M_{\beta}(t), t\geq0 )$ by (\ref{eMLprocess}). The law of $\delta_r$ is known to be given by
\begin{equation} \label{eqDynkinLamperti} P(\delta_r \in dx) = \frac{\sin\beta\pi}{\pi} r^{\beta} (r+x)^{-1} x^{-\beta} \,dx,\qquad x>0; \end{equation}
see, for example, Exercise 5.6 in \citet{kyprianou2006}. Further, by the strong Markov property of the stable subordinator we have
\[ \bigl( S_\beta \bigl( M_{\beta}(r)+t \bigr), t\geq0 \bigr) \stackrel{d} {=} \bigl( r+\delta_r + S_\beta(t), t\geq0 \bigr), \]
where $S_\beta$ and $\delta_r$ in the right-hand side are independent. Therefore,
\begin{eqnarray*} && \bigl( M_{\beta}(t+r) - M_{\beta}(r), t\geq0 \bigr) \\ &&\qquad = \bigl( \inf \bigl\{u\geq0\dvtx S_{\beta} \bigl( M_{\beta }(r)+u \bigr)\geq t +r \bigr\}, t\geq0 \bigr) \\ &&\qquad \stackrel{d} {=} \bigl( \inf \bigl\{u\geq0\dvtx S_{\beta}(u)\geq t-\delta _r \bigr\}, t\geq0 \bigr) \\ &&\qquad = \bigl( M_{\beta}\bigl((t- \delta_r)_+\bigr), t\geq0 \bigr); \end{eqnarray*}
once again, $M_\beta$ and $\delta_r$ in the right-hand side are independent. We conclude that
\begin{eqnarray}\label{esicheck2}
&& \int_{0}^s E^{\prime} \Biggl| \sum_{j=1}^k \theta_j \bigl(M_{\beta}(t_j+r) - M_{\beta}(r)
\bigr)\Biggr|^{\alpha} (s-r)^{-\beta} \,dr\nonumber \\
&&\qquad = \frac{\sin\beta\pi}{\pi} \int_0^{\infty}\!\! \int _0^s E^{\prime} \Biggl|\sum _{j=1}^k \theta_j M_{\beta}
\bigl((t_j-x)_+\bigr)\Biggr|^{\alpha} \\ &&\hspace*{96pt}{}\times r^{\beta} (r+x)^{-1} x^{-\beta} (s-r)^{-\beta} \,dr \,dx.\nonumber \end{eqnarray}
Using the integration formula,
\[ \int_0^1 \biggl(\frac{t}{1-t} \biggr)^\beta\frac{1}{t+y} \,dt = \frac{\pi}{\sin\beta\pi} \biggl[ 1- \biggl( \frac{y}{1+y} \biggr)^\beta \biggr],\qquad y>0, \]
given on page 338 of \citet{gradshteynryzhik1994}, shows that (\ref{esicheck2}) is equivalent to (\ref{esicheck1}). This completes the proof. \end{pf}
Recall that, when $0<\beta\leq1/2$, the Mittag--Leffler process of (\ref{eMLprocess}) is distributionally equivalent to the local time at zero of a symmetric stable L\'evy process with index of stability $\hat\beta=(1-\beta)^{-1}$. Specifically,\vspace*{-1pt} let $ ( W_{\hat\beta}(t), t\geq0 )$ be a symmetric $\hat\beta$-stable L\'evy process, such that $Ee^{ir W_{\hat\beta}(t)} = \exp\{ -t
|r|^{\hat\beta} \}$ for $r\in\mathbb{R}$ and $t\geq0$. This process has a jointly continuous local time process, $L_t(x), t\geq0, x\in\mathbb{R}$; see, for example, \citet{getoorkesten1972}. Then
\begin{equation} \label{eMLLT} \bigl( M_{\beta}(t), t\geq0 \bigr) \stackrel{d} {=} \bigl( c_\beta L_t(0), t\geq0 \bigr) \end{equation}
for some $c_{\beta} > 0$; see Section~11.1.1 in \citet{marcusrosen2006}. Therefore, in the range $0<\beta\leq1/2$, the \mbox{$\beta$-}ML fractional $\mathrm{S}\alpha\mathrm{S}$ motion (\ref{eqMLSM}) can be represented in law as
\begin{equation} \label{eqMLSM1} Y_{\alpha,\beta}(t) = c_\beta\int_{\Omega^{\prime} \times [0,\infty)} L_{(t-x)_+} \bigl( 0,\omega^{\prime} \bigr) \,d Z_{\alpha,\beta}\bigl( \omega^{\prime},x\bigr), \qquad t \geq0, \end{equation}
where $ ( L_t(x) )$ is the local time of a symmetric $\hat \beta$-stable L\'evy process defined on $(\Omega^{\prime},\mathcal{F}^{\prime },P^{\prime})$. Recall also the $\hat\beta$-stable local time fractional $\mathrm{S}\alpha\mathrm{S}$ motion introduced in \citet{dombryguillotin-plantard2009} [see also \citet{cohensamorodnitsky2006}]. That process can be defined by
\begin{equation} \label{eqLTSM} \widehat Y_{\alpha,\beta}(t) = \int_{\Omega^{\prime} \times\mathbb{R}} L_{t} \bigl( x,\omega^{\prime} \bigr) \,d \widehat Z_{\alpha} \bigl(\omega^{\prime},x\bigr), \qquad t \geq0, \end{equation}
where $\widehat Z_{\alpha}$ is a $\mathrm{S}\alpha\mathrm{S}$ random measure on $\Omega^{\prime} \times\mathbb{R}$ with control measure $P^{\prime} \times\mathrm{Leb}$. We claim that, in fact, if $0<\beta\leq1/2$,
\begin{equation} \label{eequalSM} \bigl( Y_{\alpha,\beta}(t), t\geq0 \bigr) \stackrel {d} {=}c_\beta ^{(1)} \bigl( \widehat Y_{\alpha,\beta}(t), t\geq0 \bigr) \end{equation}
for some multiplicative constant $c_\beta^{(1)}$. Therefore,\vspace*{-1pt} one can view the ML fractional~$\mathrm{S}\alpha\mathrm{S}$ motion as an extension of the $\hat\beta$-stable local time fractional $\mathrm{S}\alpha\mathrm{S}$ motion from the range $1<\hat\beta\leq2$ to the range $1<\hat\beta<\infty$. It is interesting to note that the central limit theorem in Section~\ref {secCLT} is of a very different type from the random walk in random scenery situation of \citet{cohensamorodnitsky2006} and \citet{dombryguillotin-plantard2009}.
To check (\ref{eequalSM}), let
\[ H_x=\inf \bigl\{ t\geq0\dvtx W_{\hat\beta}(t)=x \bigr\},\qquad x\in \mathbb{R}. \]
Since $1<\hat\beta\leq2$, $H_x$ is a.s. finite for any $x\in\mathbb{R}$; see, for example, Remark 43.12 in \citet{sato1999}. Further, by the strong Markov property, for every $x\in\mathbb{R}$, the conditional law of $ ( L_{H_x+t}(x), t\geq0 )$ given $\mathcal{F}^{\prime}_{H_x}$, coincides a.s. with the law of $ ( L_{t}(0), t\geq0 ) $. We conclude that for any $k\geq1$, $t_1, \ldots, t_k \geq0$, and real $\theta_1, \ldots, \theta_k $,
\begin{eqnarray*} -\log E\exp \Biggl\{ \sum_{j=1}^k
\theta_j \widehat Y_{\alpha,\beta}(t_j) \Biggr\} &=& \int _\mathbb{R}E^\prime \Biggl| \sum _{j=1}^k \theta_j L_{t_j}(x)
\Biggr|^\alpha \,dx \\
&=& \int_\mathbb{R}\int_0^\infty E^\prime \Biggl| \sum_{j=1}^k
\theta_j L_{(t_j-y)_+}(0) \Biggr|^\alpha F_x(dy) \,dx, \end{eqnarray*}
where $F_x$ is the law of $H_x$. Using the obvious fact that $H_x\stackrel{d}{=}
|x|^{\hat\beta}H_1$, an easy calculation shows that the mixture $\int_\mathbb{R}F_x \,dx$ is, up to a multiplicative constant, equal to the measure $\nu$ in (\ref{eqMLSM}). Therefore, for some constant $c_\beta^{(1)}$,
\[ -\log E\exp \Biggl\{ \sum_{j=1}^k \theta_j c_\beta^{(1)}\widehat Y_{\alpha,\beta}(t_j) \Biggr\} = -\log E\exp \Biggl\{ \sum_{j=1}^k \theta_j Y_{\alpha,\beta}(t_j) \Biggr\} \]
and (\ref{eequalSM}) follows.
\begin{remark}\label{rkrangeH} It is interesting to observe that, for a fixed $0<\alpha<2$, the range of the exponent of self-similarity $H=\beta+ (1-\beta)/\alpha$ of the \mbox{$\beta$-}ML fractional $\mathrm{S}\alpha\mathrm{S}$ motion, as $\beta$ varies between 0 and 1, is a proper subset of the feasible range of the exponent of self-similarity of stationary increment self-similar $\mathrm{S}\alpha\mathrm{S}$ processes, which is $0 < H \leq\max(1, 1/\alpha) $; see \citet{samorodnitskytaqqu1994}. \end{remark}
It was shown in \citet{dombryguillotin-plantard2009} that the stable local time fractional $\mathrm{S}\alpha\mathrm{S}$ motion is H\"older continuous. We extend this statement to the ML fractional $\mathrm{S}\alpha\mathrm{S}$ motion.
\begin{theorem} \label{tholder} The \mbox{$\beta$-}ML fractional $\mathrm{S}\alpha\mathrm{S}$ motion satisfies, with probability 1,
\[
\sup_{0\leq s<t\leq1/2}\frac{ |
Y_{\alpha,\beta}(t)-Y_{\alpha,\beta}(s) |}{(t-s)^\beta
|\log(t-s) |^{1-\beta}} <\infty \]
if $0<\alpha<1$, and
\[
\sup_{0\leq s<t\leq1/2}\frac{ |
Y_{\alpha,\beta}(t)-Y_{\alpha,\beta}(s) |}{(t-s)^\beta
|\log(t-s) |^{3/2-\beta}} <\infty \]
if $1\leq\alpha<2$. \end{theorem}
\begin{pf} The statement of the theorem follows from Lemma~\ref{lMLholder} and the argument in Theorem 5.1 in \citet{cohensamorodnitsky2006}; see also Theorem 1.5 in \citet{dombryguillotin-plantard2009}. \end{pf}
The next lemma establishes H\"older continuity of the Mittag--Leffler process~(\ref{eMLprocess}). The statement might be known, but we could not find a reference, so we present a simple argument. In the case $0<\beta\leq1/2$ (most of) the statement is in Theorem 2.1 in \citet{ehm1981}, through the relation with the local time (\ref{eMLLT}).
\begin{lemma} \label{lMLholder} For $B>0$, let
\[
K = \sup_{0\leq s<t<s+1/2\leq B}\frac{ |
M_{\beta}(t)-M_{\beta}(s) |}{(t-s)^\beta
|\log(t-s) |^{1-\beta}}. \]
Then $K$ is an a.s. finite random variable with all finite moments. \end{lemma}
\begin{pf} Because of the self-similarity of the Mittag--Leffler process, it is enough to consider $B=1/2$. In the course of the proof, we will use the notation $c(\beta)$ for a finite positive constant that may depend on $\beta$, and that may change from one appearance to another. Recall the lower tail estimate of a positive \mbox{$\beta$-}stable random variable:
\begin{equation} \label{elowertail} P \bigl( S_\beta(1)\leq\theta \bigr) \leq\exp \bigl\{ -c(\beta) \theta^{-\beta/(1-\beta)} \bigr\},\qquad 0<\theta\leq1; \end{equation}
see \citet{zolotarev1986}. Let $\lambda\geq1$. We have
\begin{eqnarray*} P(K>\lambda) &\leq& \sum_{n=1}^\infty P \Bigl( \mathop{\sup_{0\leq s<t\leq1/2}}_{2^{-(n+1)}\leq t-s\leq2^{-n}} M_{\beta}(t)-M_{\beta}(s)>c(\beta) \lambda n^{1-\beta}2^{-n\beta} \Bigr) \\ &:=& \sum _{n=1}^\infty q_n(\lambda). \end{eqnarray*}
For $n=1,2,\ldots,$ we use the following decomposition:
\begin{eqnarray*} q_n(\lambda)&\leq& P \bigl( S_\beta(\lambda\log n)\leq1/2 \bigr) \\ &&{} + P \bigl[\mbox{for some } 0<t\leq\lambda\log n, S_\beta \bigl( t+c( \beta)\lambda n^{1-\beta}2^{-n\beta} \bigr) - S_\beta(t) \leq2^{-n} \bigr] \\ &:=& q ^{(1)}_n( \lambda)+q^{(2)}_n(\lambda). \end{eqnarray*}
Using (\ref{elowertail}) and self-similarity of the stable subordinator, we obtain
\[ \sum_{n=1}^\infty q ^{(1)}_n( \lambda) \leq c(\beta)^{-1}\exp \bigl\{ -c(\beta)\lambda^{1/(1-\beta)} \bigr\}. \]
On the other hand,
\begin{eqnarray*} q^{(2)}_n(\lambda) &\leq& P \bigl( S_\beta \bigl(2^{-1} (i+1)c(\beta )\lambda n^{1-\beta}2^{-n\beta} \bigr) \\ &&\hspace*{11pt}{}- S_\beta \bigl(2^{-1} ic(\beta )\lambda n^{1-\beta}2^{-n\beta} \bigr) \leq2^{-n},\mbox{ some } i=0,\ldots, K_n \bigr) \end{eqnarray*}
with $K_n \leq2 c(\beta)^{-1} n^{\beta-1}2^{n\beta}\log n$. Switching to the complements, and using once again (\ref{elowertail}) together with the independence of the increments and self-similarity of the stable subordinator, we conclude, after some straightforward calculus, that for all $\lambda\geq\lambda(\beta)\in(0,\infty)$,
\[ \sum_{n=1}^\infty q ^{(2)}_n( \lambda) \leq c(\beta)^{-1}\exp \bigl\{ -c(\beta)\lambda^{1/(1-\beta)} \bigr\}. \]
The resulting bound on the tail probability $P(K>\lambda)$ is sufficient for the statement of the lemma. \end{pf}
Recall that the only self-similar Gaussian process with stationary increments is the Fractional Brownian motion (FBM), whose law is, apart from the scale, uniquely determined by the self-similarity parameter $H\in (0,1)$; see \citet{samorodnitskytaqqu1994}. This parameter of self-similarity also determines the dependence properties of the increment process of the FBM, the so-called fractional Gaussian noise, with the case $H>1/2$ regarded as the long memory case. In contrast, the self-similarity parameter almost never determines the dependence properties of the increment processes of stable self-similar processes with stationary increments; see \citet{samorodnitsky2006LRD}. Therefore, it is interesting and important to discuss the memory properties of the increment process
\begin{equation} \label{eincrprocess} V_n^{(\alpha,\beta)} = Y_{\alpha,\beta}(n+1) - Y_{\alpha,\beta}(n), \qquad n=0,1,2,\ldots. \end{equation}
We refer the reader to \citet{rosinski1995} and \citet{samorodnitsky2005a} for some of the notions used in the statement of the following theorem.
\begin{theorem} \label{tincrprocess} The stationary process $ ( V_n^{(\alpha,\beta)} )$ is generated by a conservative null flow and is mixing. \end{theorem}
\begin{pf} Note that the increment process has the integral representation
\begin{eqnarray} V_n^{(\alpha,\beta)} = \int_{\Omega^{\prime} \times[0,\infty)} \bigl( M_{\beta} \bigl((n+1-x)_+,\omega^{\prime} \bigr) - M_{\beta} \bigl((n-x)_+,\omega^{\prime} \bigr) \bigr) \,dZ_{\alpha,\beta}\bigl( \omega^{\prime},x\bigr),\nonumber \\\ \eqntext{n=0,1,2,\ldots.} \end{eqnarray}
Since for every $x>0$, on a set of $P^\prime$ probability 1, by the strong Markov property of the stable subordinator we have
\[ \limsup_{n\to\infty} M_{\beta} \bigl((n+1-x)_+ \bigr) - M_{\beta} \bigl((n-x)_+ \bigr)>0, \]
we see that
\[ \sum_{n=1}^\infty \bigl( M_{\beta} \bigl((n+1-x)_+,\omega^{\prime } \bigr) - M_{\beta} \bigl((n-x)_+, \omega^{\prime} \bigr) \bigr)^\alpha= \infty, \qquad P^{\prime} \times\nu\mbox{ a.e.} \]
By Corollary 4.2 in \citet{rosinski1995}, we conclude that the increment process is generated by a conservative flow.
It remains to prove that the increment process is mixing, since mixing implies ergodicity which, in turn, implies that the increment process is generated by a null flow; see \citet{samorodnitsky2005a}. By Theorem 5 of \citet{rosinskizak1996}, it is enough to show that for every $\epsilon> 0$,
\begin{eqnarray*} &&\bigl(P^{\prime} \times\nu\bigr) \bigl\{ \bigl(\omega^{\prime}, x \bigr)\dvtx M_{\beta}\bigl((1-x)_+,\omega^{\prime}\bigr) > \epsilon, \\ &&\hspace*{44pt} M_{\beta}\bigl((n+1-x)_+,\omega^{\prime}\bigr) - M_{\beta} \bigl((n-x)_+,\omega^{\prime}\bigr) > \epsilon\bigr\} \to0 \qquad\mbox{as } n \to\infty. \end{eqnarray*}
However, an obvious upper bound on the expression in the left-hand side is
\begin{eqnarray*} &&\int_0^1 P^{\prime} \bigl( M_{\beta}(n+1-x) - M_{\beta}(n-x) > \epsilon \bigr) (1-\beta) x^{-\beta} \,dx \\ &&\qquad =\int_0^1 P^{\prime} \bigl(M_{\beta}\bigl((1-\delta_{n-x})_+\bigr) > \epsilon \bigr) (1-\beta)x^{-\beta} \,dx, \end{eqnarray*}
where for $r>0$, $\delta_{r}$ is a random variable, independent of the Mittag--Leffler process, with the distribution given by (\ref{eqDynkinLamperti}). Since $\delta_r$ converges weakly to infinity as $r\to\infty$, by the dominated convergence theorem, the above expression converges to zero as $n\to\infty$. \end{pf}
\begin{remark} \label{rkbeta0} Two extreme cases deserve mentioning. A formal substitution of $\beta=0$ into (\ref{MTtransform}) leads to a well-defined process $M_0(0)=0$ and $M_0(t)= E$, the same standard exponential random variable for all $t>0$. This process is no longer the inverse of a stable subordinator. It can, however, be used in (\ref{eqMLSM}). It is elementary to see that the resulting $\mathrm{S}\alpha\mathrm{S}$ process $Y_{\alpha,0}$ is, in fact, a $\mathrm{S}\alpha\mathrm{S}$ L\'evy motion.
On the other hand, a formal substitution of $\beta=1$ into (\ref{MTtransform}) leads to the degenerate process $M_1(t)=t$ for all $t\geq0$ [which can be viewed as the inverse of the degenerate 1-stable subordinator $S_1(t)=t$ for $t\geq0$]. Once again, this process can be used in (\ref{eqMLSM}), if one interprets the measure $\nu$ as the unit point mass at the origin. The resulting $\mathrm{S}\alpha\mathrm{S}$ process $Y_{\alpha,1}$ is now the degenerate process $Y_{\alpha,1}(t)=tY_{\alpha,1}(1)$ for all $t\geq0$, where $Y_{\alpha,1}(1)$ is a $\mathrm{S}\alpha\mathrm{S}$ random variable.
Both limiting cases, $Y_{\alpha,0}$ and $Y_{\alpha,1}$, are processes of a very different nature from the \mbox{$\beta$-}ML fractional $\mathrm{S}\alpha\mathrm{S}$ motion with $0<\beta<1$. \end{remark}
\section{Some ergodic theory} \label{secergodic}
In this section, we present some elements of ergodic theory used in this paper. The main reference for these notions is \citet{aaronson1997}; see also \citet{zweimuller2009}.
Let $ (E,\mathcal{E}, \mu )$ be a $\sigma$-finite measure space. We will often use the notation $A=B$ mod $\mu$ for $A,B\in \mathcal{E}$ when $\mu(A\triangle B)=0$.
Let $T\dvtx E \to E$ be a measurable map that preserves the measure $\mu$. When the entire sequence $T, T^2, T^3, \ldots$ of iterates of $T$ is involved, we will sometimes refer to it as \textit{a~flow}. The map $T$ is called \textit{ergodic} if the only sets $A$ in $\mathcal{E}$ for which $A=T^{-1}A$ mod $\mu$ are those for which $\mu(A)=0$ or $\mu(A^c)=0$. The map $T$ is called \textit{conservative} if
\[ \sum_{n=1}^\infty\mathbf{1}_A \circ T^n=\infty\qquad\mbox{a.e. on }A \]
for every $A\in\mathcal{E}$ with $\mu(A)>0$. If $T$ is ergodic, then the qualification ``on $A$'' above is not needed.
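By Poincar\'e's recurrence theorem, every measure preserving map of a \textit{finite} measure space is conservative, so conservativity is a genuine restriction only when the measure $\mu$ is infinite. A standard example of a nonconservative map is the translation $Tx=x+1$ on $E=\mathbb{Z}$ equipped with the counting measure: taking $A=\{0\}$,
\[ \sum_{n=1}^\infty\mathbf{1}_A \circ T^n(x) = \sum_{n=1}^\infty \mathbf{1} ( x+n=0 ) \leq1 \qquad\mbox{for every } x\in\mathbb{Z}. \]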
\textit{The dual operator} $\widehat T$ is an operator $L^1(\mu)\to L^1(\mu)$ defined by
\[ \widehat{T}f = \frac{d(\nu_f \circ T^{-1})}{d\mu} \]
with $\nu_f$ a signed measure on $ (E,\mathcal{E} )$ given by $\nu_f(A) = \int_A f \,d\mu$, $A\in\mathcal{E}$. The dual operator satisfies the relation
\begin{equation} \label{edualrel} \int_E (\widehat{T} f)\cdot g \,d\mu= \int _E f\cdot(g \circ T) \,d\mu \end{equation}
for $f\in L^1(\mu), g\in L^\infty(\mu)$. For any nonnegative measurable function $f$ on $E$, a similar definition gives a nonnegative measurable function $\widehat{T} f$, and (\ref{edualrel}) holds for any two nonnegative measurable functions $f$ and $g$.
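As a simple illustration, if $T$ is invertible with a measurable inverse, then the dual operator is just composition with the inverse map,
\[ \widehat{T} f = f \circ T^{-1}, \]
since, by the fact that $T$ preserves the measure $\mu$,
\[ \nu_f \circ T^{-1}(A) = \int_{T^{-1}A} f \,d\mu= \int_A f \circ T^{-1} \,d\mu \qquad\mbox{for every } A\in\mathcal{E}. \]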
An ergodic conservative measure preserving map $T$ is called \textit{pointwise dual ergodic} if there is a sequence of positive constants $a_n\to\infty$ such that
\begin{equation} \label{epointwdualerg} \frac{1}{a_n}\sum_{k=1}^n \widehat T^k f\to\int_E f \,d\mu\qquad \mbox{a.e.} \end{equation}
for every $f\in L^1(\mu)$. If the measure $\mu$ is infinite, pointwise dual ergodicity rules out invertibility of the map $T$; in fact, no factor of $T$ can be invertible; see page~129 of \citet{aaronson1997}.
Sometimes the convergence of the type described in the definition (\ref{epointwdualerg}) of pointwise dual ergodicity is uniform on certain sets. Let $A\in{\mathcal E}$ be a set with $0<\mu(A)<\infty$. We say that $A$ is \textit{a Darling--Kac set} for an ergodic conservative measure preserving map $T$ if for some sequence of positive constants $a_n\to\infty$,
\begin{equation} \label{eDKset} \frac{1}{a_n}\sum_{k=1}^n \widehat T^k \mathbf{1}_A\to\mu(A) \qquad \mbox{uniformly, a.e. on }A \end{equation}
[i.e., the convergence in (\ref{eDKset}) is uniform on a measurable subset $B$ of $A$ with $\mu(B)=\mu(A)$]. By Proposition 3.7.5 of \citet{aaronson1997}, existence of a Darling--Kac set implies pointwise dual ergodicity of $T$, so it is legitimate to use the same sequence $(a_n)$ in (\ref{epointwdualerg}) and (\ref{eDKset}).
Given a set $A\in{\mathcal E}$, the map $\varphi\dvtx E \to\mathbb{N} \cup\{ \infty\}$ defined by $\varphi(x) = \inf\{n \geq1\dvtx T^n x \in A \}$, $x\in E$ is called \textit{the first entrance time to $A$}. If $T$ is conservative and ergodic (in addition to being measure preserving), and $\mu(A)>0$, then $\varphi<\infty$ a.e. on $E$. It is natural to measure how often the set $A$ is visited by the flow $(T^n)$ by \textit{the wandering rate} sequence
\[ w_n = \mu \Biggl(\bigcup_{k=0}^{n-1} T^{-k} A \Biggr),\qquad n=1,2,\ldots. \]
There are several alternative expressions for the wandering rate sequence, the last two following from the fact that $T$ is measure preserving:
\begin{eqnarray}\label{ewanderingalt} w_n &=& \sum_{k=0}^{n-1} \mu(A_k) = \sum_{k=0}^{n-1} \mu \bigl( A \cap\{ \varphi> k \} \bigr) \nonumber\\[-8pt]\\[-8pt] & =& \sum_{k=1}^\infty \min(k,n) \mu \bigl( A \cap\{ \varphi= k \} \bigr).\nonumber \end{eqnarray}
Here, $A_0 = A$ and $A_k = A^c \cap\{\varphi= k \}$ for $k \geq 1$. If $\mu$ is an infinite measure, $T$ is conservative and ergodic, and $0<\mu(A)<\infty$, then it follows from (\ref{ewanderingalt}) that
\begin{equation} \label{eqwanderingmu} w_n \sim\mu(\varphi< n) \qquad\mbox{as } n \to\infty. \end{equation}
Let $T$ be a conservative ergodic measure preserving map. If a set $A$ is a Darling--Kac set, then there is a precise connection between the wandering rate sequence $(w_n)$ and the normalizing sequence $(a_n)$ in (\ref{eDKset}) [and hence, also in (\ref{epointwdualerg})], assuming regular variation. Specifically, if either $(w_n) \in RV_{1-\beta}$ or $(a_n) \in RV_{\beta}$ for some $\beta\in[0,1]$, then
\begin{equation} a_n \sim\frac{1}{\Gamma(2-\beta) \Gamma(1+\beta)} \frac{n}{w_n} \qquad\mbox{as } n \to \infty. \label{eqprop387} \end{equation}
Proposition 3.8.7 in \citet{aaronson1997} gives one direction of this statement, but the argument is easily reversed.
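For instance, when $\beta=1/2$ one has $\Gamma(2-\beta)\Gamma(1+\beta)=\Gamma(3/2)^2=\pi/4$, so that (\ref{eqprop387}) reads
\[ a_n \sim\frac{4}{\pi} \frac{n}{w_n} \qquad\mbox{as } n \to\infty. \]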
We will also have an opportunity to use a variation of the notion of a Darling--Kac set. Let $T$ be an ergodic conservative measure preserving map. A set $A\in{\mathcal E}$ with $0<\mu(A)<\infty$ is said to be a uniform set for a nonnegative function \mbox{$g\in L^1(\mu)$} if
\begin{equation} \label{equniformset} \frac{1}{a_n}\sum_{k=1}^n \widehat T^k g \to\int_E g \,d\mu \qquad \mbox{uniformly, a.e. on }A. \end{equation}
If $g=\mathbf{1}_A$, then a uniform set is just a Darling--Kac set.
\section{Central limit theorem associated with conservative null flows} \label{secCLT} In this section, we state and discuss a functional central limit theorem for stationary infinitely divisible processes generated by certain conservative flows. Throughout, $T$ is an ergodic conservative measure preserving map on an infinite $\sigma$-finite measure space $ (E,\mathcal{E}, \mu )$, and $M$ a symmetric homogeneous infinitely divisible random measure on $(E,\mathcal{E})$ with control measure $\mu$ and local L\'evy measure $\rho$, satisfying the regular variation with index $-\alpha$, $0<\alpha<2$ at infinity condition (\ref{eregvarlevy}). We will impose an extra assumption on the lower tail of the local L\'evy measure: for some~$p_0<2$
\begin{equation} \label{eqorginregularity} x^{p_0} \rho(x,\infty) \to0\qquad\mbox{as } x \to0. \end{equation}
Let $f\dvtx E \to\mathbb{R}$ be a measurable function. We will assume that $f$ is supported by a set of finite $\mu$-measure, and has the following integrability properties:
\begin{equation} \label{eqintegrabilitycond} f \in\cases{ L^{1\vee p}(\mu)\qquad\mbox{for some } p>p_0, &\quad if $0<\alpha<1$, \vspace*{2pt}\cr L^\infty(\mu), &\quad if $\alpha=1$, \vspace*{2pt}\cr L^2(\mu), &\quad if $1< \alpha<2$.} \end{equation}
We will, further, assume that
\begin{equation} \label{emeannonzero} \mu(f)=\int_E f(s) \mu(ds)\neq0. \end{equation}
We consider a stochastic process $\mathbf{X}= ( X_1,X_2, \ldots)$ of the form (\ref{etheprocess})--(\ref{ethekernel}). The integral is well defined under the condition
\[ \int_E\int_\mathbb{R}\min \bigl( 1, x^2f_n(s)^2 \bigr) \rho(dx) \mu(ds)<\infty. \]
It is not difficult to verify that this condition holds due to the assumptions on the L\'evy measure $\rho$ and the integrability conditions (\ref{eqintegrabilitycond}) on $f$. Therefore, the process $\mathbf{X}$ is a well-defined infinitely divisible stochastic process. It is automatically stationary. The L\'evy measure of each $X_n$ is given by $\nu_{\mathrm{marg}}= (\rho\times\mu)\circ H^{-1}$, where $H\dvtx \mathbb{R}\times E \to\mathbb{R}$ is given by $H(x,s) = xf(s)$. The assumptions on the L\'evy measure $\rho$ and the integrability conditions (\ref{eqintegrabilitycond}) on $f$ imply that
\[ \nu_{\mathrm{marg}}(\lambda,\infty) \sim \biggl( \int_E
\bigl|f(s)\bigr|^\alpha \mu(ds) \biggr) \rho(\lambda,\infty) \]
as $\lambda\to\infty$. It follows that the marginal tail of the process itself is the same:
\[ P(X_n>\lambda)\sim \biggl( \int_E
\bigl|f(s)\bigr|^\alpha \mu(ds) \biggr) \rho(\lambda,\infty) \]
as $\lambda\to\infty$; see \citet{rosinskisamorodnitsky1993}. In particular, the margi\-nal distributions of the process $\mathbf{X}$ are in the domain of attraction of a $\mathrm{S}\alpha\mathrm{S}$ law; its memory is determined by the operator $T$ through (\ref{ethekernel}).
We will assume that the operator $T$ has a Darling--Kac set $A$ [recall (\ref{eDKset})] whose normalizing sequence $(a_n)$ is regularly varying with exponent $\beta\in(0,1)$, and that the function $f$ is supported by $A$. We will also add an extra assumption on the set $A$. Recall the definition of the set $A_n$ in (\ref{ewanderingalt}) as the collection of those points outside of $A$ that enter $A$ for the first time after $n$ steps, $n=1,2,\ldots.$ We will assume that there exists a measurable function $K\dvtx E \to\mathbb{R}_+$ such that
\begin{equation} \label{eqstrongDK} \frac{\widehat{T}{}^n \mathbf{1}_{A_n}}{\mu(A_n)} \to K\qquad\mbox {uniformly, a.e. on }A. \end{equation}
This condition is an extension of the property shared by certain operators $T$, the so-called Markov shifts [see Chapter~4 in \citet{aaronson1997}], to a more general class of operators. See Examples~\ref{exMC} and~\ref{exAFN} below.
Let $\rho^{\leftarrow}(y) = \inf \{ x\geq0\dvtx \rho(x,\infty) \leq y \}, y>0$ be the left continuous inverse of the tail of the local L\'evy measure. The regular variation of the tail implies that $\rho^{\leftarrow}\in RV_{-1/\alpha}$ at zero. Define
\begin{equation} \label{edefc} c_n = \Gamma(1+\beta) C_\alpha^{-1/\alpha} a_n \rho ^{\leftarrow} (1/w_n),\qquad n=1,2,\ldots, \end{equation}
where $C_\alpha$ is the $\alpha$-stable tail constant [see \citet{samorodnitskytaqqu1994}], $(a_n)$~is the normalizing sequence in the Darling--Kac property (\ref{eDKset}) [or, equivalently, in the pointwise dual ergodicity property (\ref{epointwdualerg})], and $(w_n)$ is the wandering rate sequence for the set $A$ [related to the sequence $(a_n)$ via (\ref{eqprop387})]. It follows immediately that
\begin{equation} \label{ecnregvar} c_n \in RV_{\beta+ (1-\beta) / \alpha}. \end{equation}
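Indeed, $(a_n)\in RV_\beta$ by assumption while, by (\ref{eqprop387}), $(w_n)\in RV_{1-\beta}$; since $\rho^{\leftarrow}\in RV_{-1/\alpha}$ at zero, the composition satisfies
\[ \rho^{\leftarrow}(1/w_n) \in RV_{(1-\beta)/\alpha}, \]
and adding the exponents of the two regularly varying factors in (\ref{edefc}) gives (\ref{ecnregvar}).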
The sequence $(c_n)$ is the normalizing sequence in the functional central limit theorem below. We will see that under the conditions of that theorem we have the asymptotic relation
\begin{eqnarray}\label{eqasycn} && \rho ( c_n / a_n, \infty )\nonumber \\[-3pt] &&\qquad \sim C_\alpha \bigl(
C_{\alpha,\beta}/\Gamma(1+\beta) \bigr)^\alpha\bigl|\mu(f)\bigr|^\alpha a_n^{\alpha} \Biggl( \int_E \Biggl| \sum _{k=1}^n f \circ T^k
(x)\Biggr|^{\alpha} \mu(dx) \Biggr)^{-1} \\ \eqntext{\mbox{as } n \to\infty} \end{eqnarray}
with
\begin{equation} \label{eCalphabeta} C_{\alpha,\beta} = \Gamma(1+\beta) \bigl( (1-\beta) B(1-\beta, 1+\alpha \beta) E \bigl(M_{\beta}(1)\bigr)^{\alpha} \bigr)^{1/\alpha}. \end{equation}
Here, $B$ is the standard beta function, and $M_\beta$ the Mittag--Leffler process defined in (\ref{eMLprocess}). The following is our functional central limit theorem.
\begin{theorem} \label{tFCLT} Let $T$ be an ergodic conservative measure preserving map on an infinite $\sigma$-finite measure space $ (E,\mathcal{E}, \mu )$, possessing a Darling--Kac set $A$ whose normalizing sequence $(a_n)$ is regularly varying with exponent $\beta\in(0,1)$. Assume that (\ref{eqstrongDK}) holds. Let $M$ be a symmetric homogeneous infinitely divisible random measure on $(E,\mathcal{E})$ with control measure $\mu$ and local L\'evy measure $\rho$, satisfying the regular variation with index $-\alpha$, $0<\alpha<2$ at infinity condition~(\ref{eregvarlevy}). Assume further that (\ref{eqorginregularity}) holds for some $p_0<2$.
Let $f$ be a measurable function supported by $A$ and satisfying (\ref{eqintegrabilitycond}) and (\ref{emeannonzero}). If $1 < \alpha< 2$, assume further that either:
\begin{longlist}[(ii)]
\item[(i)] $A$ is a uniform set for $|f|$, or
\item[(ii)] $f$ is bounded.\vadjust{\goodbreak} \end{longlist}
Then the stationary infinitely divisible stochastic process $\mathbf {X}= ( X_1,X_2,\ldots)$ given by (\ref{etheprocess}) and (\ref{ethekernel}) satisfies
\begin{equation} \label{emain} \frac{1}{c_n} \sum_{k=1}^{\lceil n\cdot\rceil} X_k \Rightarrow \mu(f) Y_{\alpha, \beta} \qquad\mbox{in } D[0,\infty), \end{equation}
where $(c_n)$ is defined by (\ref{edefc}), and $\{ Y_{\alpha,\beta} \}$ is the \mbox{$\beta$-}Mittag--Leffler fractional $\mathrm{S}\alpha\mathrm{S}$ motion defined by (\ref{eqMLSM}). \end{theorem}
\begin{remark} The type of the limiting process obtained in Theorem \ref{tFCLT} is an indication of the long memory in the process $\mathbf{X}$. On the other hand, the Darling--Kac assumption (\ref{eDKset}) and the duality relation (\ref{edualrel}) imply that
\begin{eqnarray*} \frac{1}{a_n} \sum_{k=1}^n \mu \bigl(A \cap T^{-k}A\bigr) &=& \frac{1}{a_n} \sum _{k=1}^n \int_E \mathbf{1}_A\cdot\mathbf{1}_A \circ T^k d \mu \\ &=& \int_A \frac{1}{a_n} \sum _{k=1}^n \widehat T^k \mathbf{1}_A \,d\mu \to\mu(A)^2 \in(0,\infty) \end{eqnarray*}
as $n\to\infty$. Since $a_n=o(n)$, and $f$ is supported by $A$, we see that for every $\epsilon> 0$,
\[ \frac{1}{n} \sum_{k=1}^n \mu
\bigl\{x \in E\dvtx \bigl|f(x)\bigr| > \epsilon, \bigl|f \circ T^k(x)\bigr| > \epsilon \bigr\} \leq\frac{1}{n} \sum_{k=1}^n \mu \bigl(A \cap T^{-k}A\bigr) \to0 \]
and it follows immediately, for example, from Theorem 2 in \citet{rosinskizak1997} that the process $\mathbf{X}$ is ergodic.
Under certain additional assumptions on the map $T$, one can check that the process $\mathbf{X}$ is, in fact, mixing. We skip the details. See, however, Examples~\ref{exMC} and~\ref{exAFN} below. \end{remark}
\begin{remark} \label{rkresultbetazero} The statement of Theorem~\ref{tFCLT} makes sense in the limiting cases $\beta=0$ and $\beta=1$ of Remark~\ref{rkbeta0} (in the case $\beta=1$ the constant $C_{\alpha,1}$ needs to be interpreted as $C_\alpha^{1/\alpha}$). Most of the argument in the proof of Theorem~\ref{tFCLT} automatically works in these cases. The limiting processes would then turn out to be, correspondingly, a $\mathrm{S}\alpha\mathrm{S}$ L\'evy motion and the straight line process; see Remark~\ref{rkbeta0}. The case $\beta=0$ corresponds to short memory in the process $\mathbf{X}$, while the case $\beta=1$ corresponds to extremely long memory. \end{remark}
\begin{remark} \label{rkpositive} When $0<\alpha<1$, the argument we will use in the proof of Theorem \ref{tFCLT} can be used to establish a ``positive'' version of the theorem. Specifically, assume now that the local L\'evy measure $\rho$ is concentrated on $(0,\infty)$, and that the function $f$ is nonnegative. Then
\begin{equation} \label{emainpos} \frac{1}{c_n} \sum_{k=1}^{\lceil n\cdot\rceil} X_k \Rightarrow \mu(f) Y_{\alpha, \beta}^+ \qquad\mbox{in } D[0, \infty), \end{equation}
where $\{ Y_{\alpha,\beta}^+\}$ is a positive \mbox{$\beta$-}Mittag--Leffler fractional $\alpha$-stable motion defined as in~(\ref{eqMLSM}), but with $\mathrm{S}\alpha\mathrm{S}$ random measure $Z_{\alpha,\beta}$ replaced by a positive $\alpha$-stable random measure with the same control measure. \end{remark}
We finish this section with two examples of different situations where Theorem~\ref{tFCLT} applies. The first example is close to the heart of a probabilist.
\begin{example} \label{exMC} Consider an irreducible null recurrent Markov chain with state space $\mathbb{Z}$ and transition matrix $P=(p_{ij})$. Let $\{ \pi_j, j \in \mathbb{Z} \}$ be the unique invariant measure of the Markov chain that satisfies $\pi_0=1$. We define a $\sigma$-finite measure on $(E,\mathcal{E}) = (\mathbb{Z}^{\mathbb{N}}, \mathcal{B}(\mathbb{Z}^{\mathbb{N}}) )$ by
\[ \mu(\cdot) = \sum_{i \in\mathbb{Z}} \pi_i P_i(\cdot) \]
with the usual notation of $P_i(\cdot)$ being the probability law of the Markov chain starting in state $i \in \mathbb{Z}$. Since $\sum_{j} \pi_j = \infty$, $\mu$ is an infinite measure.
Let $T\dvtx \mathbb{Z}^\mathbb{N}\to\mathbb{Z}^\mathbb{N}$ be the left shift map $T(x_0,x_1,\ldots) = (x_1,x_2, \ldots)$ for\break $\{x_k, k=0,1,\ldots\}\in \mathbb{Z}^\mathbb{N}$. Obviously, $T$ preserves the measure $\mu$. Since the Markov chain is irreducible and null recurrent, the flow $\{T^n \}$ is conservative and ergodic; see \citet{harrisrobbins1953}.
Consider the set $A = \{ x \in\mathbb{Z}^{\mathbb{N}}\dvtx x_0 = 0 \}$ and the corresponding first entrance time $\varphi(x) = \min\{ n \geq1\dvtx x_n=0 \}$, $x\in\mathbb{Z}^\mathbb{N}$. Assume that
\begin{equation} \label{esumprobregvar} \sum_{k=1}^n P_0(\varphi\geq k) \in RV_{1-\beta} \end{equation}
for some $\beta\in(0,1)$. Since $\mu(\varphi= k) = P_0(\varphi \geq k)$ for $k\geq1$ [see Lemma 3.3 in \citet{resnicksamorodnitskyxue2000}], we see that $\mu(\varphi\leq n) \in RV_{1-\beta}$, and hence, by (\ref{eqwanderingmu}), the wandering rates $(w_n)$ have the same property,
\begin{equation} w_n \in RV_{1-\beta}. \label{eqMarkovwander} \end{equation}
In this example,
\[ \widehat{T}{}^k \mathbf{1}_A (x) = P_0(x_k=0), \qquad\mbox{a constant for }x \in A; \]
see Section~4.5 in \citet{aaronson1997}. In particular, the set $A$ is a Darling--Kac set, and by (\ref{eqMarkovwander}) and (\ref{eqprop387}), we see that the corresponding normalizing sequence $(a_n)$ is regularly varying with exponent $\beta$. Assumption (\ref{eqstrongDK}) is easily seen to hold in this example. Indeed, applying the explicit expression for the dual operator given on page 156 in \citet{aaronson1997} to the function
\[ g(x_0,x_1,\ldots)= \mathbf{1} ( x_j\neq0, j=0,\ldots, n-1, x_n=0 ), \]
we see that
\[ \widehat{T}{}^n \mathbf{1}_{A_n} ( x_0,x_1,\ldots ) = \mathbf{1} (x_0=0 )\sum_{i_0\neq0} \pi_{i_0}\sum_{i_1\neq0}p_{i_0 i_1}\cdots\sum_{i_{n-1}\neq0}p_{i_{n-2} i_{n-1}}p_{i_{n-1} 0} \]
is constant on $A$ and vanishes outside of $A$. Therefore, the ratio in (\ref{eqstrongDK}) is identically equal to 1 on $A$.
We conclude that Theorem~\ref{tFCLT} applies in this case if we choose any measurable function $f$ supported by $A$ and satisfying the conditions of the theorem.
It is easy to see that the stationary infinitely divisible process $\mathbf{X}$ in this example is mixing. Indeed, by Theorem 5 of \citet{rosinskizak1996} it is enough to check that
\[
\mu \bigl\{x\dvtx \bigl|f(x)\bigr| > \epsilon, \bigl|f \circ T^n(x)\bigr| > \epsilon \bigr\} \to0 \]
for every $\epsilon>0$. However, since $f$ vanishes outside of $A$, null recurrence implies that as $n \to\infty$,
\[
\mu \bigl\{x\dvtx \bigl|f(x)\bigr| > \epsilon, \bigl|f \circ T^n(x)\bigr| > \epsilon \bigr\} \leq\mu\bigl(A \cap T^{-n}A\bigr) = P_0(x_n=0) \to0. \]
\end{example}
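Before moving on, we note that condition (\ref{esumprobregvar}) is easy to probe numerically in the most classical instance of Example~\ref{exMC}. The following minimal simulation sketch is an illustration only: it assumes the simple symmetric random walk on $\mathbb{Z}$ as the null recurrent chain (for which $\beta=1/2$), and the truncation horizon and sample sizes are arbitrary choices. It estimates the tail $P_0(\varphi\geq k)$ of the first return time to the state 0; the fitted log--log slope should be close to $-1/2$, consistent with (\ref{esumprobregvar}) and $\beta=1/2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# First return times to 0 of simple symmetric random walks on Z,
# truncated at `horizon`; walks that have not returned are censored,
# which does not affect the counts below since max(ks) << horizon.
n_walks, horizon = 2000, 10_000
steps = rng.integers(0, 2, size=(n_walks, horizon), dtype=np.int8) * 2 - 1
paths = np.cumsum(steps, axis=1, dtype=np.int32)
hit = paths == 0
phi = np.where(hit.any(axis=1), hit.argmax(axis=1) + 1, horizon)

# Empirical tail of P_0(phi >= k) at a few values of k.
ks = np.array([16, 64, 256, 1024])
surv = np.array([(phi >= k).mean() for k in ks])
slope = np.polyfit(np.log(ks), np.log(surv), 1)[0]
print("estimated tail exponent:", slope)   # should be close to -1/2
\end{verbatim}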
The next example is less familiar to probabilists, but is well known to ergodic theorists.
\begin{example} \label{exAFN} We start with a construction of the so-called \textit{AFN-system}, studied in, for example, \citet{zweimuller2000} and \citet{thalerzweimuller2006}. Let $E$ be the union of a finite family of disjoint bounded open intervals in $\mathbb{R}$ and let $\mathcal{E}$ be the Borel $\sigma$-field on $E$. Let $\lambda$ be the one-dimensional Lebesgue measure.
Let $\xi$ be a (possibly infinite) collection of nonempty disjoint open subintervals (of the intervals in $E$) such that $\lambda ( E \setminus \bigcup_{ Z\in\xi}Z ) = 0$. Let $T\dvtx E \to E$ be a map that is twice differentiable on (each interval of) $E$. We assume that $T$ is strictly monotone on each $Z \in\xi$.
The map $T$ is further assumed to satisfy the following three conditions, (A), (F) and~(N), which give rise to the name AFN-system.
\begin{longlist}[(B)] \item[(A)] \textit{Adler's condition}:
\[ T^{\prime\prime} / \bigl(T^{\prime}\bigr)^2 \mbox{ is bounded on } \bigcup_{ Z\in\xi}Z. \]
\item[(F)] \textit{Finite image condition}:
\[ \mbox{the collection } T \xi= \{ TZ\dvtx Z \in\xi\}\mbox{ is finite}. \]
\item[(N)] \textit{A possibility of nonuniform expansion}: there exists a finite subset $\zeta\subseteq\xi$ such that each $Z \in\zeta$ has \textit{an indifferent fixed point} $x_Z$ as one of its endpoints. That is,
\[ \lim_{x \to x_Z, x \in Z} Tx = x_Z \quad\mbox{and}\quad\lim _{x \to x_Z, x \in Z} T^{\prime} x = 1. \]
Moreover, we suppose, for each $Z \in\zeta$,
\[ \mbox{either } T^{\prime} \mbox{ decreases on } (-\infty, x_Z) \cap Z\quad\mbox{or}\quad T^\prime\mbox{ increases on } (x_Z, \infty) \cap Z, \]
depending on whether $x_Z$ is the left endpoint or the right endpoint of $Z$. Finally, we assume that $T$ is uniformly expanding away from $\{x_Z\dvtx Z \in\zeta\}$, that is, for each $\epsilon> 0$, there is $\rho(\epsilon) > 1$ such that
\[
\bigl|T^{\prime}\bigr| \geq\rho(\epsilon)\qquad\mbox{on } E \bigm\backslash \bigcup _{Z \in\zeta} \bigl( (x_Z - \epsilon, x_Z + \epsilon ) \cap Z \bigr). \] \end{longlist}
If the conditions (A), (F), and (N) are satisfied, the triplet $(E,T,\xi)$ is called an AFN-system, and the map $T$ is called an AFN-map. If $T$ is also conservative and ergodic with respect to $\lambda$, and the collection $\zeta$ is nonempty, then the AFN-map $T$ is said to be \textit{basic}; we will assume this property in the sequel. Finally, we will assume that $T$ admits \textit{nice expansions} at the indifferent fixed points. That is, for every $Z\in\zeta$ there is $0 < \beta_Z < 1$ such that
\begin{equation}
\label{eniceexp} \qquad\quad Tx = x + a_Z |x-x_Z|^{1/\beta_Z + 1} +
o \bigl(|x-x_Z|^{1/\beta_Z + 1} \bigr)\qquad\mbox{as } x \to x_Z\mbox{ in } Z \end{equation}
for some $a_Z \neq0$.
It is shown in \citet{zweimuller2000} that every basic AFN-map has an infinite invariant measure $\mu\ll\lambda$ with the density given by $d\mu/ d \lambda(x) = h_0(x) G(x)$, \mbox{$x\in E$,} where
\[ G(x) =
\cases{\displaystyle (x-x_Z) \bigl(x-(T|_Z)^{-1}(x) \bigr)^{-1}, &\quad if $x \in Z \in\zeta$, \vspace*{3pt}\cr 1, &\quad if $\displaystyle x \in E \bigm\backslash\bigcup_{Z\in\zeta}Z$} \]
and $h_0$ is a function of bounded variation that is bounded away from both 0 and infinity. We view $T$ as a conservative ergodic measure-preserving map on the infinite measure space $(E,\mathcal{E},\mu)$.
An example of a basic AFN-map is Boole's transformation placed on $E=(0,1/2)\cup(1/2,1)$, defined by
\begin{eqnarray*} T(x) &=& \frac{x(1-x)}{1-x-x^2},\qquad x\in(0,1/2), \\ T(x) &=& 1-T(1-x),\qquad x\in(1/2,1). \end{eqnarray*}
It admits nice expansions at the indifferent fixed points $x_Z=0$ and $x_Z=1$ with $\beta_Z=1/2$ in both cases. The invariant measure $\mu$ satisfies
\[ \frac{d\mu}{d\lambda}(x) = \frac{1}{x^2}+ \frac{1}{(1-x)^2},\qquad x\in E. \]
See \citet{thaler2001}.
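The $\beta_Z=1/2$ scaling for Boole's transformation can also be observed empirically. The following minimal sketch is an illustration only (the starting point, the width $\epsilon=0.05$ of the neighborhoods removed around the indifferent fixed points, the checkpoints and the numerical guards are arbitrary choices, and a floating-point orbit tracks the true dynamics only statistically): it counts the visits of an orbit to a set $A$ bounded away from 0 and 1. The occupation counts grow at the rate $n^{1/2}$, with the ratio $S_n(\mathbf{1}_A)/\sqrt{n}$ fluctuating rather than converging, in line with a Mittag--Leffler limit.
\begin{verbatim}
import math
import random

def boole(x):
    # Boole's transformation on (0,1/2) and (1/2,1), as defined above.
    if x < 0.5:
        return x * (1.0 - x) / (1.0 - x - x * x)
    return 1.0 - boole(1.0 - x)

random.seed(2)
x = random.uniform(0.2, 0.3)        # start inside A
eps = 0.05                          # A = E minus eps-neighborhoods of 0 and 1
visits = 0
checkpoints = {10**4, 10**5, 10**6}
for n in range(1, 10**6 + 1):
    x = boole(x)
    # numerical guards: keep the orbit inside (0,1) and off the point 1/2
    x = min(max(x, 1e-12), 1.0 - 1e-12)
    if x == 0.5:
        x = 0.5 + 1e-12
    if eps < x < 1.0 - eps:
        visits += 1
    if n in checkpoints:
        print(n, visits, visits / math.sqrt(n))
\end{verbatim}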
Let $T$ be a basic AFN-map. We put
\[ A=E \bigm\backslash\bigcup_{Z \in\zeta} \bigl( (x_Z - \epsilon, x_Z + \epsilon ) \cap Z \bigr) \]
for some $\epsilon> 0$ small enough so that the set $A$ is nonempty. Since $\lambda(\partial A)=0$ and $A$ is bounded away from the indifferent fixed points $\{ x_Z\dvtx Z \in\zeta\}$, it follows from Corollary 3 of \citet{zweimuller2000} that $A$ is a Darling--Kac set. Moreover, the corresponding normalizing sequence $(a_n)$ is regularly varying with exponent $\beta= \min_{Z \in\zeta} \beta_Z$ in the notation of (\ref{eniceexp}); see Theorems 3 and 4 in \citet{zweimuller2000}. The assumption (\ref{eqstrongDK}) also holds; see (2.6) in \citet{thalerzweimuller2006}.
Once again, Theorem~\ref{tFCLT} applies if we choose any measurable function $f$ supported by $A$ and satisfying the conditions of Theorem~\ref{tFCLT}. Note that, by Theorem~9 in
\citet{zweimuller2000}, Riemann integrability of $|f|$ on $A$ suffices for the uniformity of the set $A$ for $|f|$.
The stationary infinitely divisible process $\mathbf{X}$ in this example is also mixing. Indeed, the basic AFN-map $T$ is \textit{exact}, that is, the $\sigma$-field $\bigcap_{n=1}^\infty T^{-n}{\mathcal{E}}$ is trivial; see, for example, page 1522 in \citet{zweimuller2000}. The exactness of $T$ implies that
\[ \mu\bigl(A \cap T^{-n}A\bigr) = \int_A \widehat{T}{}^n \mathbf{1}_{A} \,d\mu \to0 \]
as $n\to\infty$; see page 12 in \citet{thaler2001}. Now mixing of the process $\mathbf{X}$ follows from the fact that $f$ is supported by $A$, as in Example~\ref{exMC}. \end{example}
\section{Distributional results in ergodic theory} \label{secdistrresults}
In this section, we prove two distributional ergodic-theoretical results that will be used in the proof of Theorem~\ref{tFCLT}. These results may be of interest on their own as well. We call our first result a generalized Darling--Kac theorem, because the first result of this type was proved in \citet{darlingkac1957} as a distributional limit theorem for the occupation times of Markov processes and chains under a certain uniformity assumption on the transition law. The limiting law is the Mittag--Leffler distribution described in (\ref{MTtransform}). Under the same setup and assumptions, \citet{bingham1971} extended the result to weak convergence in the space $D[0,\infty)$ endowed with the Skorohod $J_1$ topology, and the limiting process is the Mittag--Leffler process defined in (\ref{eMLprocess}).
The result of \citet{darlingkac1957} was put into ergodic-theoretic context by \citet{aaronson1981} who established the one-dimensional convergence for abstract conservative infinite measure preserving maps under the assumption of pointwise dual ergodicity, that is, dispensing with a condition of uniformity. Furthermore, Aaronson proves convergence in a \textit{strong distributional sense}, a stronger mode of convergence than weak convergence. The same strong distributional convergence was established later in \citet{thalerzweimuller2006}, with the assumption of pointwise dual ergodicity replaced by an averaged version of (\ref{eqstrongDK}). The latter assumption was further weakened in \citet{zweimuller2007}. Our result, Theorem \ref{tgenDK} below, extends Aaronson's result to the space $D[0,\infty)$, under the assumption of pointwise dual ergodicity.
We start with defining strong distributional convergence. Let $Y$ be a separable metric space, equipped with its Borel $\sigma$-field. Let $ ( \Omega_1, {\mathcal F}_1, m )$ be a measure space and $ ( \Omega_2, {\mathcal F}_2, P_2 )$ a probability space. We say that a sequence of measurable maps $R_n\dvtx \Omega_1\to Y$, $n=1,2,\ldots$ converges strongly in distribution to a measurable map $R\dvtx \Omega_2\to Y$ if $P_1\circ R_n^{-1}\Rightarrow P_2\circ R^{-1}$ in $Y$ for any probability measure $P_1\ll m$ on $ ( \Omega_1, {\mathcal F}_1 )$. That is,
\[ \int_{\Omega_1} g(R_n) \,dP_1 \to\int _{\Omega_2} g(R) \,dP_2 \]
for any such $P_1$ and a bounded continuous function $g$ on $Y$. We will use the notation $R_n \stackrel{\mathcal{L}(m)}{\Rightarrow} R$ when strong distributional convergence takes place.
\begin{theorem}[(Generalized Darling--Kac theorem)]\label{tgenDK} Let $T$ be an ergodic conservative measure preserving map on an infinite $\sigma$-finite measure space $ (E,\mathcal{E}, \mu )$. Assume that $T$ is pointwise dual ergodic with a normalizing sequence $(a_n)$ that is regularly varying with exponent $\beta\in(0,1)$. Let $f\in L^1(\mu)$ be such that $\mu(f)\neq0$, and denote $S_n(f) = \sum_{k=1}^n f \circ T^k$, $n=1,2,\ldots.$ Then
\begin{equation} \frac{1}{a_n} S_{\lceil n \cdot\rceil} (f) \stackrel{\mathcal{L}(\mu)} { \Rightarrow} \mu(f) \Gamma(1+\beta) M_{\beta}(\cdot) \qquad\mbox{in } D[0, \infty), \label{eqgoalLemma1} \end{equation}
where $M_\beta$ is the Mittag--Leffler process, and $D[0,\infty)$ is equipped with the $J_1$ topology. \end{theorem}
\begin{pf} It is shown in Corollary 3 of \citet{zweimuller2007a} that proving weak convergence in (\ref{eqgoalLemma1}) for one fixed probability measure on $ (E,\mathcal{E} )$ that is absolutely continuous with respect to $\mu$ already guarantees the full strong distributional convergence. We choose and fix an arbitrary set $A\in \mathcal{E}$ with $0<\mu(A)<\infty$, and prove weak convergence in (\ref{eqgoalLemma1}) with respect to $\mu_A(\cdot) =\mu(\cdot\cap A) / \mu(A)$.
It turns out that we only need to consider one particular function $f=\mathbf{1}_A$ and to establish the appropriate finite-dimensional convergence, that is, to show that
\begin{equation} \biggl( \frac{1}{a_n} S_{\lceil n t_i \rceil} (\mathbf{1}_A) \biggr)_{i=1}^k \Rightarrow \bigl( \mu(A) \Gamma(1+\beta) M_{\beta}(t_i) \bigr)_{i=1}^k \qquad \mbox{in }\mathbb{R}^k \label{eqlonggoal} \end{equation}
for all $k \geq1$, $0 \leq t_1 < \cdots< t_k$, when the law of the random vector in the left-hand side is computed with respect to $\mu_A$.
Indeed, suppose that (\ref{eqlonggoal}) holds. By Hopf's ergodic theorem [also sometimes called a ratio ergodic theorem; see Theorem 2.2.5 in \citet{aaronson1997}], the finite-dimensional convergence immediately extends to the corresponding finite-dimensional convergence with any function $f\in L^1(\mu)$ such that $\mu(f)\neq 0$. Next, write $f=f_+-f_-$, the difference of the positive and negative parts. Since the process $ ( a_n^{-1}S_{\lceil nt \rceil}(f_+), t\geq0 )$ has, for each $n$, nondecreasing sample paths, Theorem~3 in \citet{bingham1971} tells us that the convergence of the finite-dimensional distributions, and the continuity in probability of the limiting Mittag--Leffler process already imply weak convergence, hence tightness, of this sequence of processes. Similarly, the sequence of the processes $ ( a_n^{-1}S_{\lceil nt \rceil}(f_-), t\geq 0 )$, $n=1,2,\ldots$ is tight as well. Since both converge to a continuous limit, their sum, $ ( a_n^{-1}S_{\lceil nt \rceil}(f), t\geq 0 )$, $n=1,2,\ldots,$ is tight as well, because in this case the uniform modulus of continuity can be used instead of the $J_1$ modulus of continuity; see, for example, \citet{billingsley1999}.
This will give us the required weak convergence, and hence, complete the proof of the theorem.
It remains to show (\ref{eqlonggoal}). We will use a strategy similar to the one used in \citet{bingham1971}. We start with defining a continuous version of the process $ ( S_{\lceil n t \rceil} (\mathbf{1}_A), t\geq0 )$ given by the linear interpolation
\begin{eqnarray}\label{einterpprocess} \widetilde S_n(t) &=& \bigl( (i+1)-nt \bigr)S_i( \mathbf{1}_A) + (nt-i)S_{i+1}(\mathbf{1} _A) \nonumber\\[-8pt]\\[-8pt] \eqntext{\displaystyle\mbox{if } \frac{i}{n}\leq t\leq\frac{i+1}{n}, i=0,1,2,\ldots.} \end{eqnarray}
With the implicit argument $x\in E$ viewed as random (with the law $\mu_A$), each $\widetilde S_n$ defines a random Radon measure on $[0,\infty)$. Therefore, for any $k\geq1$ the $k$-tuple product $\widetilde S_n^k = \widetilde S_n\times\cdots\times\widetilde S_n$ is a random Radon measure on $[0,\infty)^k$. By Fubini's theorem,
\[ \tilde m_n^{(k)}(B) = \int_A \widetilde S_n^k(B) (x) \mu_A(dx),\qquad B\subseteq[0, \infty)^k, \mbox{ Borel,} \]
is a Radon measure on $[0,\infty)^k$. We define, similarly, $S_n$, $S_n^k$ and $m_n^{(k)}$, starting with $S_n(t) = S_{\lceil n t \rceil} (\mathbf{1}_A), t\geq0$. Finally, we perform the same operation on the limiting process and define $M_{\beta,A}$ by $\mu(A)\Gamma(1+\beta)M_\beta$, and then construct $M_{\beta,A}^k$ and $m_{\beta,A}^{(k)}=E M_{\beta,A}^k$.
Note that $\tilde m_n^{(k)}$ is absolutely continuous with respect to the $k$-dimensional Lebesgue measure, and
\begin{eqnarray} \frac{d^k \tilde m_n^{(k)}}{dt_1\cdots\, dt_k} &=& n^k \int_A \prod _{j=1}^k \mathbf{1}_A \circ T^{i_j}(x) \mu_A(dx)\nonumber \\ \eqntext{\displaystyle\mbox{on } \frac{i_j}{n} \leq t_j<\frac{i_j+1}{n}, i_j=0,1,\ldots, j=1,\ldots, k.} \end{eqnarray}
We will prove that for all $k \geq1$, $\theta_1, \ldots, \theta_k \geq0$,
\begin{eqnarray}\label{eqLSTconv} && \frac{1}{a_n^k} \int_0^{\infty} \cdots\int_0^{\infty} e^{-\sum_{j=1}^k \theta_j t_j} \tilde m_n^{(k)}(dt_1\cdots \, dt_k) \nonumber\\[-8pt]\\[-8pt] &&\qquad \to\int _0^{\infty} \cdots\int _0^{\infty} e^{-\sum_{j=1}^k \theta_j t_j} m_{\beta,A}^{(k)}(dt_1\cdots \, dt_k) \nonumber \end{eqnarray}
as $n\to\infty$. We claim that this will suffice for (\ref{eqlonggoal}).
Indeed, suppose that (\ref{eqLSTconv}) holds. Convergence of the joint Laplace transforms implies that
\[ a_n^{-k}\tilde m_n^{(k)} \stackrel{v} {\rightarrow} m_{\beta,A}^{(k)} \]
(vaguely) in $[0,\infty)^k$. Since the rectangles are, clearly, compact continuity sets with respect to the limiting measure $m_{\beta,A}^{(k)}$, we conclude that for every $k=1,2,\ldots$ and $t_j\geq0, j=1,\ldots, k$, we have
\begin{eqnarray*} && \int_A \prod_{j=1}^k a_n^{-1}\widetilde S_n(t_j) (x) \mu_A(dx) \\ &&\qquad = a_n^{-k}\tilde m_n^{(k)} \Biggl( \prod_{j=1}^k[0,t_j] \Biggr)\to m_{\beta,A}^{(k)} \Biggl( \prod_{j=1}^k[0,t_j] \Biggr) \\ &&\qquad = E \Biggl[ \prod_{j=1}^k \mu(A) \Gamma(1+\beta) M_\beta(t_j) \Biggr] \end{eqnarray*}
as $n\to\infty$. Since for every fixed $\varepsilon>0$ and $n>1/\varepsilon$,
\[ \widetilde S_n(t)\leq S_n(t)\leq\widetilde S_n(t+ \varepsilon) \]
for each $t\geq0$, we conclude by monotonicity and continuity of the Mittag--Leffler process that
\begin{equation} \label{eprodconvtilde} \int_A \prod _{j=1}^k a_n^{-1}S_n(t_j) \mu_A(dx) \to E \Biggl[ \prod_{j=1}^k \mu(A) \Gamma(1+\beta) M_\beta (t_j) \Biggr]. \end{equation}
We claim that (\ref{eprodconvtilde}) implies (\ref{eqlonggoal}). By taking linear combinations with nonnegative weights, we see that it is enough to show that the distribution of such a linear combination,
\[ \sum_{j=1}^k \theta_jM_\beta(t_j),\qquad \theta_j>0, j=1,\ldots, k, \]
is determined by its moments, and by the Carleman sufficient condition it is enough to check that
\[ \sum_{m=1}^\infty \biggl(\frac{1}{E ( \sum_{j=1}^k \theta_jM_\beta(t_j) )^m} \biggr)^{1/(2m)} = \infty. \]
A simple monotonicity and scaling argument shows that it is sufficient to verify only that
\begin{equation} \label{ecarleman} \sum_{m=1}^\infty \biggl( \frac{1}{E ( M_\beta(1) )^m} \biggr)^{1/(2m)} = \infty. \end{equation}
However, the moments of $M_\beta(1)$ can be read off (\ref{MTtransform}), and Stirling's formula together with elementary algebra imply (\ref{ecarleman}). Hence, (\ref{eqlonggoal}) follows.
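In more detail, up to the multiplicative normalization fixed in (\ref{MTtransform}), the moments of the Mittag--Leffler law are
\[ E \bigl( M_\beta(1) \bigr)^m = \frac{m!\, c^m}{\Gamma(1+m\beta)},\qquad m=1,2,\ldots, \]
for some constant $c=c(\beta)>0$. By Stirling's formula, $ ( E ( M_\beta(1) )^m )^{1/(2m)}$ grows as a constant multiple of $m^{(1-\beta)/2}$, and the series in (\ref{ecarleman}) diverges since $(1-\beta)/2<1$.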
It remains to prove (\ref{eqLSTconv}). Taking into account the form of the density of $\tilde m_n^{(k)}$ with respect to the $k$-dimensional Lebesgue measure, we can write the left-hand side of (\ref{eqLSTconv}) as
\[ \sum_{\pi} F_{n,A} (\theta_{\pi(1)}\cdots\theta_{\pi(k)}), \]
where
\begin{eqnarray*} && F_{n,A} (\theta_1 \cdots\theta_k) \\ &&\qquad = \biggl( \frac{n}{a_n} \biggr)^k \int\cdots\int_{0<t_1 < \cdots< t_k} e^{-\sum_{j=1}^k \theta_j t_j} \mu_A \Biggl( \bigcap _{j=1}^k T^{- \lceil nt_j \rceil} A \Biggr) \,dt_1\cdots \, dt_k \end{eqnarray*}
and $\pi$ runs through the permutations of the sets $\{1,\ldots,k \}$. To establish (\ref{eqLSTconv}), it is enough to verify that
\begin{eqnarray}\label{eqFnrelation} && F_{n,A}(\theta_1 \cdots\theta_k) \nonumber\\[-8pt]\\[-8pt] &&\qquad \to \bigl( \mu(A) \Gamma(1+\beta ) \bigr)^k \bigl((\theta_1 + \cdots+ \theta_k) (\theta_2 + \cdots+ \theta_k) \cdots \theta_k \bigr)^{-\beta} \nonumber \end{eqnarray}
as $n \to\infty$, because Lemma 3 in \citet{bingham1971} shows that summing up the expression in the right-hand side of (\ref{eqFnrelation}) over all possible permutations $(\theta_{\pi(1)} \cdots\theta_{\pi(k)})$ produces the expression in the right-hand side of (\ref{eqLSTconv}).
Given $0<\varepsilon<1$, we use repeatedly pointwise dual ergodicity and Egorov's theorem to construct a nested sequence of measurable subsets of $E$, with $A_0=A$, and for $i=0,1,\ldots,$ $A_{i+1}\subseteq A_i$, and $\mu(A_{i+1})\geq(1-\varepsilon)\mu(A_i)$, while
\begin{equation} \label{eapproxDK} \frac{1}{a_n}\sum_{k=1}^n \widehat T^k \mathbf{1}_{A_i}\to\mu(A_i) \qquad\mbox{uniformly on }A_{i+1}. \end{equation}
It is elementary to see that with $v_1=\theta_1 + \theta_2+\cdots+ \theta_k$, $v_2=\theta_2 + \cdots+ \theta_k, \ldots, v_k=\theta_k$,
\begin{eqnarray}\label{ediscrF} && F_{n,A} (\theta_1 \cdots \theta_k)\nonumber \\ &&\qquad \sim\frac{1}{a_n^k} \sum_{m_1=0}^\infty \cdots\sum_{m_k=0}^\infty e^{-n^{-1}\sum_{j=1}^k v_j m_j} \mu_A \Biggl( \bigcap_{j=1}^k T^{- (m_1+\cdots+m_j) } A \Biggr)\nonumber \\ &&\qquad = \frac{1}{a_n^k} \int_A \Biggl[ \Biggl( \sum_{m_1=0}^\infty \widehat T^{m_1} \mathbf{1}_{A} e^{- v_1m_1/n} \Biggr) \\ &&\hspace*{63pt}{}\times \prod _{j=2}^k \Biggl( \sum _{m_j=0}^\infty\mathbf{1}_A\circ T^{m_2+\cdots+m_j} e^{- v_jm_j/n} \Biggr) \Biggr] \,d\mu_A\nonumber \\ &&\qquad \geq\frac{1}{a_n^k} \int_{A_1} ( \cdots ),\nonumber \end{eqnarray}
where the equality is due to the duality relation (\ref{edualrel}). Note that by (\ref{eapproxDK}) with $i=0$,
\begin{eqnarray}\label{euseDK} && \sum_{m_1=0}^{\infty} \widehat{T}{}^{m_1} \mathbf{1}_A e^{- v_1m_1/n} \nonumber\\[-8pt]\\[-8pt] &&\qquad = \bigl(1-e^{-v_1/n} \bigr) \sum_{i=0}^{\infty} \Biggl( \sum_{m_1=0}^i \widehat{T}{}^{m_1} \mathbf{1}_{A_0} \Biggr) e^{- v_1 i /n} \sim\frac{\mu(A_0)v_1}{n} \sum_{i=0}^{\infty} a_i e^{- v_1 i/n}\hspace*{-30pt} \nonumber \end{eqnarray}
uniformly on $A_1$ as $n \to\infty$. Therefore,
\begin{eqnarray*} && F_{n,A} (\theta_1 \cdots\theta_k) \\[-1pt] &&\qquad \geq \bigl( 1-o(1) \bigr) \frac{1}{a_n^k} \frac{\mu(A_0)v_1}{n} \\[-1pt] &&\quad\qquad{}\times \sum _{i=0}^{\infty} a_i e^{- v_1i/n} \int_{A_1}\prod_{j=2}^k \Biggl( \sum_{m_j=0}^\infty \mathbf{1}_A\circ T^{m_2+\cdots+m_j} e^{- v_jm_j/n} \Biggr) \,d \mu_A \\[-1pt] &&\qquad = \bigl( 1-o(1) \bigr) \frac{1}{a_n^k} \frac{\mu(A_0)v_1}{n} \\[-1pt] &&\quad\qquad{}\times \sum _{i=0}^{\infty} a_i e^{- v_1i/n} \int_{A} \Biggl[ \Biggl( \sum _{m_2=0}^\infty \widehat T^{m_2} \mathbf{1}_{A_1} e^{- v_2m_2/n} \Biggr) \\[-1pt] &&\hspace*{120pt}{}\times \prod _{j=3}^k \Biggl( \sum_{m_j=0}^\infty \mathbf{1}_A\circ T^{m_3+\cdots+m_j} e^{- v_jm_j/n} \Biggr) \Biggr] \,d\mu_A \\[-1pt] &&\qquad \geq \bigl( 1-o(1) \bigr) \frac{1}{a_n^k} \frac{\mu(A_0)v_1}{n} \sum _{i=0}^{\infty} a_i e^{- v_1i/n} \int _{A_2} ( \cdots ). \end{eqnarray*}
Using now repeatedly (\ref{eapproxDK}) with larger and larger $i$, together with the same argument as in (\ref{euseDK}), we conclude that
\begin{eqnarray*} && F_{n,A} (\theta_1 \cdots\theta_k) \\[-1pt] &&\qquad \geq \bigl( 1-o(1) \bigr) \frac{1}{a_n^k} \frac{\mu(A_0)\mu(A_1)v_1v_2}{n^2} \sum _{i_1=0}^{\infty } a_{i_1} e^{- v_{1}i_1/n}\sum _{i_2=0}^{\infty} a_{i_2} e^{- v_2i_2/n} \\[-1pt] &&\quad\qquad{}\times \int_{A_2}\prod_{j=3}^k \Biggl( \sum_{m_j=0}^\infty \mathbf{1}_A\circ T^{m_3+\cdots+m_j} e^{- v_jm_j/n} \Biggr) \,d \mu_A \\[-1pt] &&\qquad \geq \cdots\geq \bigl( 1-o(1) \bigr) \frac{1}{a_n^k} \frac{\prod_{j=0}^{k-1}\mu(A_j)v_{j+1}}{n^k} \prod _{j=1}^k \Biggl( \sum _{i=0}^{\infty} a_i e^{- v_ji/n} \Biggr) \frac{\mu(A_k)}{\mu(A)} \\[-1pt] &&\qquad \geq \bigl( 1-o(1) \bigr) (1-\varepsilon)^{k(k+1)/2} \biggl( \frac{\mu(A)}{na_n} \biggr)^k (v_1\cdots v_k) \prod_{j=1}^k \Biggl( \sum _{i=0}^{\infty} a_i e^{- v_ji/n} \Biggr). \end{eqnarray*}
Extending the sequence $(a_n)$ to a piecewise constant regularly varying function of a real variable $ ( a(x), x>0 )$ and using Karamata's Tauberian theorem [see, e.g., Section~3.6 in \citet{aaronson1997}], we conclude that for every $j=1,\ldots, k$,
\[ \sum_{i=0}^{\infty} a_i e^{- v_ji/n} \sim\Gamma(1+\beta) \frac{n}{v_j} a(n/v_j), \qquad n\to\infty. \]
It follows that
\begin{eqnarray*} F_{n,A} (\theta_1 \cdots\theta_k) &\geq& \bigl( 1-o(1) \bigr) (1-\varepsilon)^{k(k+1)/2} \bigl( \mu(A) \Gamma(1+\beta) \bigr)^k \prod_{j=1}^k \frac{a(n/v_j)}{a_n} \\ &\to&(1-\varepsilon)^{k(k+1)/2} \bigl( \mu(A) \Gamma(1+\beta) \bigr)^k \prod_{j=1}^k v_j^{-\beta} \end{eqnarray*}
by the regular variation. Since this is true for every $0<\varepsilon <1$, we have obtained the lower bound
\begin{eqnarray}\label{elowerbound} && \liminf_{n\to\infty} F_{n,A}( \theta_1 \cdots\theta_k) \nonumber\\[-8pt]\\[-8pt] &&\qquad \geq \bigl(\mu(A) \Gamma(1+\beta) \bigr)^k \bigl((\theta_1 + \cdots+ \theta_k) ( \theta_2 + \cdots+ \theta_k) \cdots \theta_k \bigr)^{-\beta}.\nonumber \end{eqnarray}
The lower bound (\ref{elowerbound}) is valid for any measurable set $A$ with $0<\mu(A)<\infty$. We will now show that for any $k\geq1$ and $0<\theta<1$ there is a measurable set $A_{k,\theta}\subseteq A$ such that
\begin{equation} \label{ealmostA} \mu ( A_{k,\theta} ) \geq(1-\theta)\mu(A) \end{equation}
and such that
\begin{eqnarray}\label{eupperbound} && \limsup_{n\to\infty} F_{n,A_{k,\theta}}( \theta_1 \cdots\theta_k) \nonumber\\[-8pt]\\[-8pt] &&\qquad \leq \bigl(\mu(A_{k,\theta}) \Gamma(1+\beta) \bigr)^k \bigl((\theta_1 + \cdots+ \theta_k) (\theta_2 + \cdots+ \theta_k) \cdots\theta_k \bigr)^{-\beta}.\nonumber \end{eqnarray}
We know that (\ref{elowerbound}) and (\ref{eupperbound}) together imply (\ref{eqFnrelation}), hence that (\ref{eqlonggoal}) holds for the set $A_{k,\theta}$. We claim that this implies (\ref{eqlonggoal}) for every measurable $A$ with $0<\mu(A)<\infty$.
Indeed, suppose that, to the contrary, (\ref{eqlonggoal}) fails for some measurable $A$ with $0<\mu(A)<\infty$, some $k\geq1$ and some $0<t_1<\cdots<t_k$. By the one-dimensional result of \citet{aaronson1981}, the $k$ components in the left hand side of (\ref{eqlonggoal}), individually, converge weakly. Therefore, the sequence of the laws of the $k$-dimensional vectors in the left-hand side of (\ref{eqlonggoal}) is tight, and so there is a sequence of integers $n_l\uparrow\infty$ and a random vector $(Y_1,\ldots, Y_k)$ with
\begin{equation} \label{ewronglimit} (Y_1,\ldots, Y_k)\stackrel{d} {\neq} \mu(A) \Gamma(1+\beta) \bigl( M_{\beta}(t_1)\cdots M_{\beta}(t_k) \bigr), \end{equation}
such that
\begin{equation} \label{epresumeconv} \frac{1}{a_{n_l}} \bigl( S_{\lceil n_l t_1 \rceil} ( \mathbf{1}_A), \ldots, S_{\lceil n_l t_k \rceil} (\mathbf{1}_A) \bigr)\Rightarrow (Y_1,\ldots, Y_k), \end{equation}
when the law of the random vector in the left-hand side is computed with respect to~$\mu_A$. It follows from (\ref{ewronglimit}) that there is a Borel set $B\subset\mathbb{R}^k$ such that, for each $b>0$, $bB$ is a continuity set for both $(Y_1,\ldots, Y_k)$ and $ \mu(A) \Gamma(1+\beta) ( M_{\beta}(t_1)\cdots M_{\beta}(t_k) )$ and (abusing the notation a bit by using the same letter $P$),
\begin{eqnarray}\label{etoomuchmass} && P \bigl( \mu(A) \Gamma(1+\beta) \bigl( M_{\beta}(t_1) \cdots M_{\beta}(t_k) \bigr)\in B \bigr) \nonumber\\[-8pt]\\[-8pt] &&\qquad > (1+\rho) P \bigl( (Y_1,\ldots, Y_k) \in B \bigr)\nonumber \end{eqnarray}
for some $\rho>0$. In fact, since the law of a Mittag--Leffler random variable is atomless, such a $B$ can be taken to be either a ``SW corner'' of the type $B=\prod_{j=1}^k (-\infty,x_j]$ for some $(x_1,\ldots, x_k)\in\mathbb{R}^k$, or its complement.
Choose now $0<\theta<1$ so small that
\begin{equation} \label{echoicevep} (1-\theta) (1+\rho)>1 \end{equation}
and consider the set $A_{k,\theta}$. It follows from (\ref{epresumeconv}) and Hopf's ergodic theorem that
\[ \frac{1}{a_{n_l}} \bigl( S_{\lceil n_l t_1 \rceil} (\mathbf{1} _{A_{k,\theta }}),\ldots, S_{\lceil n_l t_k \rceil} (\mathbf{1}_{A_{k,\theta}}) \bigr)\Rightarrow \frac{\mu ( A_{k,\theta} )}{\mu(A)} (Y_1,\ldots, Y_k), \]
when the law of the random vector in the left-hand side is still computed with respect to $\mu_A$. However, since (\ref{eqlonggoal}) holds for the set $A_{k,\theta}$, we see that
\begin{eqnarray*} && P \bigl( (Y_1,\ldots, Y_k) \in B \bigr) \\ && \qquad = \lim _{l\to\infty} \mu_A \biggl( \frac{1}{a_{n_l}} \bigl( S_{\lceil n_l t_1 \rceil} (\mathbf{1}_{A_{k,\theta}}), \ldots, S_{\lceil n_l t_k \rceil} ( \mathbf{1}_{A_{k,\theta}}) \bigr) \in \frac{\mu ( A_{k,\theta} )}{\mu(A)} B \biggr) \\ &&\qquad = \frac{\mu ( A_{k,\theta} )}{\mu(A)} \lim_{l\to\infty} \mu_ {A_{k,\theta}} \biggl( \frac {1}{a_{n_l}} \bigl( S_{\lceil n_l t_1 \rceil} (\mathbf{1}_{A_{k,\theta}}), \ldots, S_{\lceil n_l t_k \rceil} (\mathbf{1}_{A_{k,\theta}}) \bigr) \in \frac{\mu ( A_{k,\theta} )}{\mu(A)} B \biggr) \\ &&\qquad \geq(1-\theta) P \bigl( \mu(A) \Gamma(1+\beta) \bigl( M_{\beta }(t_1) \cdots M_{\beta}(t_k) \bigr)\in B \bigr) \\ &&\qquad > P \bigl( (Y_1,\ldots,Y_k) \in B \bigr), \end{eqnarray*}
where the last inequality follows from (\ref{etoomuchmass}) and (\ref{echoicevep}). This contradiction shows that, once (\ref{eupperbound}) is proved, (\ref{eqlonggoal}) will hold for every measurable $A$ with $0<\mu(A)<\infty$.
We call a nested sequence $(A_0,A_1,\ldots)$ of sets in (\ref{eapproxDK}) an $\varepsilon$-sequence starting at~$A_0$. Its finite subsequence $(A_0,A_1,\ldots, A_k)$ will be called an $\varepsilon$-sequence of length $k+1$ starting at $A_0$ and ending at $A_k$. Let $A$ be a measurable set with $0<\mu(A)<\infty$. Fix $0<\theta<1$. Let $0<r<1$ be a small number, to be specified in the sequel. We construct a nested sequence of sets as follows.
Let $B_0=A$. Construct an $r$-sequence of length $k+1$ starting at $B_0$, and ending at some set $B_1\subseteq B_0$. Next, construct an $r^2$-sequence of length $k+1$ starting at $B_1$, and ending at some set $B_2\subseteq B_1$. Proceeding this way we obtain a nested sequence of measurable sets $A=B_0\supseteq B_1\supseteq B_2\supseteq \cdots,$ such that
\[ \mu(B_n) \geq\prod_{i=1}^n \bigl(1-r^i\bigr)^k \mu(A),\qquad n=1,2,\ldots. \]
The sets $(B_n)$ decrease to some set $A_{k,\theta}$ with
\[ \mu(A_{k,\theta}) \geq\prod_{i=1}^\infty \bigl(1-r^i\bigr)^k \mu(A). \]
Notice that, by choosing $0<r<1$ small enough, we can ensure that (\ref{ealmostA}) holds. Note, further, that by construction, for every $d=1,2,\ldots,$
\[ \mu(A_{k,\theta}) \geq f_d \mu(B_d)\qquad \mbox{with } f_d = \prod_{i=d+1}^\infty \bigl(1-r^i\bigr)^k. \]
Clearly, $f_d\uparrow1$ as $d\to\infty$. Starting with the first line in (\ref{ediscrF}), we see that
\begin{eqnarray*} && F_{n,A_{k,\theta}}(\theta_1 \cdots\theta_k) \\ &&\qquad \leq \bigl(1+o(1) \bigr) \frac{1}{a_n^k} \\ &&\quad\qquad{}\times \sum_{m_1=0}^\infty \cdots\sum_{m_k=0}^\infty e^{-n^{-1}\sum_{j=1}^k v_j m_j} \mu_{B_d} \Biggl( \bigcap_{j=1}^k T^{- (m_1+\cdots+m_j) } B_d \Biggr) \frac{\mu(B_d)}{\mu ( A_{k,\theta} )} \\ &&\qquad \leq \bigl(1+o(1) \bigr) \frac{1}{f_d} \frac{1}{a_n^k} \int_{B_d} \Biggl[ \Biggl( \sum_{m_1=0}^\infty \widehat T^{m_1} \mathbf{1}_{B_{d-1}} e^{- v_1m_1/n} \Biggr) \\ &&\hspace*{127pt}{}\times \prod_{j=2}^k \Biggl( \sum _{m_j=0}^\infty\mathbf{1}_{B_d}\circ T^{m_2+\cdots +m_j} e^{- v_jm_j/n} \Biggr) \Biggr] \,d\mu_{B_d}. \end{eqnarray*}
Using repeatedly uniform convergence as in (\ref{euseDK}) above, we conclude, as in the case of the corresponding lower bound calculation that
\begin{eqnarray*} && F_{n,A_{k,\theta}}(\theta_1 \cdots\theta_k) \\ &&\qquad \leq \bigl(1+o(1) \bigr) \frac{1}{f_d} \frac{1}{a_n^k} \frac{\mu ( B_{d-1} )v_1}{n} \\ &&\quad\qquad{} \times\sum _{i=0}^{\infty} a_i e^{- v_1i/n}\int_{B_d} \Biggl[ \Biggl( \sum _{m_2=0}^\infty \widehat T^{m_2} \mathbf{1}_{B_{d-1}} e^{- v_2m_2/n} \Biggr) \\ &&\hspace*{124pt}{}\times \prod _{j=3}^k \Biggl( \sum_{m_j=0}^\infty \mathbf{1}_{B_d}\circ T^{m_3+\cdots +m_j} e^{- v_jm_j/n} \Biggr) \Biggr] \,d\mu_{B_d} \\ &&\qquad \leq \cdots\leq \bigl( 1 + o(1) \bigr) \frac{1}{f_d} \biggl( \frac {\mu (B_{d-1})}{na_n} \biggr)^k ( v_1 \cdots v_k ) \prod_{j=1}^k \Biggl( \sum _{i=0}^{\infty} a_i e^{-v_j i /n} \Biggr) \\ &&\qquad \leq \bigl( 1 + o(1) \bigr) \frac{1}{f_d f_{d-1}^k} \biggl( \frac {\mu(A_{k,\theta})}{na_n} \biggr)^k ( v_1 \cdots v_k ) \prod _{j=1}^k \Biggl( \sum_{i=0}^{\infty} a_i e^{-v_j i /n} \Biggr). \end{eqnarray*}
As in the case of the lower bound, Karamata's Tauberian theorem shows that
\begin{eqnarray*} F_{n,A_{k,\theta}}(\theta_1 \cdots\theta_k) &\leq& \bigl( 1+o(1) \bigr) \frac{1}{f_d f_{d-1}^k} \bigl( \mu(A_{k,\theta}) \Gamma(1+\beta) \bigr)^k \prod_{j=1}^k \frac{a(n/v_j)}{a_n} \\ &\to&\frac{1}{f_d f_{d-1}^k} \bigl( \mu(A_{k,\theta})\Gamma(1+\beta) \bigr)^k \prod_{j=1}^k v_j^{-\beta} \end{eqnarray*}
as $n\to\infty$. Since this is true for every $d\geq1$, we can now let $d\to\infty$ to obtain~(\ref{eupperbound}), and the proof of the theorem is complete. \end{pf}
\begin{remark} \label{rkalsoconv} It follows immediately from Theorem~\ref{tgenDK} and continuity of the limiting Mittag--Leffler process that for the continuous process $(\widetilde S_n)$ defined in~(\ref{einterpprocess}), strong distributional convergence as in (\ref{eqgoalLemma1}) also holds, either in $D[0,\infty)$ or in $C[0,\infty)$. \end{remark}
We use the strong distributional convergence obtained in Theorem \ref{tgenDK} in the following proposition.
\begin{proposition} \label{plineartime} Under the assumptions of Theorem \ref{tgenDK}, let $A$ be a Darling--Kac set with $0<\mu(A)<\infty$, such that (\ref{eqstrongDK}) is satisfied, and suppose that the function $f$ is supported by $A$. Define a probability measure on $E$ by $\mu_n (\cdot) = \mu(\cdot\cap \{\varphi\leq n\}) / \mu(\{\varphi\leq n \})$, where $\varphi$ is the first entrance time of $A$. Let $0 \leq t_1 < \cdots< t_H$, $H \geq1$, and\vspace*{1pt} fix $L \in\mathbb{N}$ with $t_H \leq L$. Then under $\mu_{nL}$, the sequence $ ( S_{\lceil nt_h \rceil}(f) / a_n )_{h=1}^H$ converges\vspace*{1pt} weakly in $\mathbb{R}^H$ to the random vector $ (\mu(f) \Gamma (1+\beta) M_{\beta}(t_h-T_{\infty}^{(L)})_+ )_{h=1}^H$, where $T_{\infty}^{(L)}$ is a random variable independent of the Mittag--Leffler process $M_\beta$, with $P ( T_{\infty }^{(L)}\leq x )=(x/L)^{1-\beta}$, $ 0\leq x\leq L$. \end{proposition}
\begin{pf} Since $T$ preserves measure $\mu$, for the duration of the proof we may and will modify the definition of $S_n$ to $S_n(f) = \sum_{k=0}^{n-1} f \circ T^k$, $n=1,2,\ldots.$ Fix $\theta_1, \ldots, \theta_H \in\mathbb{R}$ and let $\lambda\in\mathbb{R}$. Since $f$ is supported by $A$, we have, as $n \to\infty$,
\begin{eqnarray*} && \mu_{n L} \Biggl( \frac{1}{a_n} \sum _{h=1}^H \theta_h S_{\lceil n t_h \rceil} (f) > \lambda \Biggr) \\ &&\qquad \sim\mu_{n L} \Biggl( A^c \cap \Biggl\{ \frac{1}{a_n} \sum_{h=1}^H \theta_h S_{\lceil n t_h \rceil} (f) > \lambda \Biggr\} \Biggr) \\ &&\qquad = \mu(\varphi\leq nL)^{-1} \sum_{m=1}^{nL} \mu \Biggl( A_m \cap \Biggl\{ \frac{1}{a_n} \sum _{h=1}^H \theta_h S_{\lceil n t_h \rceil} (f) > \lambda \Biggr\} \Biggr) \\ &&\qquad \sim\mu(\varphi\leq nL)^{-1} \sum_{m=1}^{nL} \mu \Biggl( A_m \cap T^{-m} \Biggl\{ \frac{1}{a_n} \sum_{h=1}^H \theta_h S_{(\lceil n t_h \rceil- m)_+} (f) > \lambda \Biggr\} \Biggr) \\ &&\qquad = \int_A \frac{1}{\mu(\varphi\leq nL)} \sum _{m=1}^{nL} \widehat{T}{}^m \mathbf{1}_{A_m} \cdot\mathbf{1}_{\{ \sum_{h=1}^H \theta_h S_{(\lceil n t_h \rceil - m)_+} (f) > \lambda a_n \}} \,d\mu. \end{eqnarray*}
Note that the measure on $E$ defined by $\eta(\cdot) = \int_{\cdot} K \,d\mu$ with $K$ in (\ref{eqstrongDK}) is necessarily a probability measure. We conclude by (\ref{eqstrongDK}) that
\begin{eqnarray}\label{eqprodexpression} && \mu_{n L} \Biggl( \frac{1}{a_n} \sum _{h=1}^H \theta_h S_{\lceil n t_h\rceil} (f) > \lambda \Biggr) \nonumber\\[-8pt]\\[-8pt] &&\qquad \sim\sum_{m=1}^{nL} \eta \Biggl( \frac{1}{a_n} \sum_{h=1}^H \theta_h S_{(\lceil nt_h \rceil- m)_+} (f) > \lambda \Biggr) p_n(m), \nonumber \end{eqnarray}
where $p_n(j) = \mu(A_j) / \sum_{m=1}^{nL} \mu(A_m)$, $j=1, \ldots, nL$, is a probability mass function. Let $T_n^{(L)}$ be a discrete random variable with this probability mass function, independent of $S_{\lceil n \cdot\rceil}(f)$, which is, in turn, governed by the probability measure $\eta$. If we declare that $T_n^{(L)}$ is defined on some probability space $ ( \Omega_n, {\mathcal F}_n, P_n )$, then the right-hand side of (\ref{eqprodexpression}) becomes
\[ ( \eta\times P_n ) \Biggl( \frac{1}{a_n} \sum _{h=1}^H \theta_h S_{(\lceil nt_h \rceil-T_n^{(L)} )_+} (f) > \lambda \Biggr). \]
Since $\eta$ is a probability measure absolutely continuous with respect to $\mu$, it follows from the strong distributional convergence in Theorem~\ref{tgenDK} that
\begin{equation} \frac{1}{a_n} S_{\lceil n \cdot\rceil}(f) \Rightarrow\mu(f)\Gamma (1+\beta) M_{\beta}(\cdot) \qquad\mbox{in } D[0,L], \label{eqfromLemma1} \end{equation}
when the law in the left-hand side is computed with respect to $\eta$. On the other hand, by the regular variation of the wandering rate sequence and (\ref{eqwanderingmu}), for $x \in[0,L]$,
\begin{eqnarray} P_n \biggl( \frac{T_n^{(L)}}{n} \leq x \biggr) = \sum _{m=1}^{\lceil nx \rceil} p_n(m) \sim\frac{w_{\lceil nx \rceil}}{w_{nL}} \sim \biggl( \frac{x}{L} \biggr)^{1-\beta}, \label{eqbeta} \end{eqnarray}
which is precisely the law of $T_{\infty}^{(L)}$. We can put together (\ref{eqfromLemma1}), (\ref{eqbeta}), and independence between $S_n $ and $T_n^{(L)} $ to obtain
\begin{eqnarray*} && \mu_{nL} \Biggl( \frac{1}{a_n} \sum_{h=1}^H \theta_h S_{\lceil nt_h \rceil}(f) > \lambda \Biggr) \\ &&\qquad \to P \Biggl( \mu(f)\Gamma(1+\beta) \sum_{h=1}^H \theta_h M_{\beta}\bigl(\bigl(t_h - T_{\infty}^{(L)}\bigr)_+\bigr) > \lambda \Biggr) \end{eqnarray*}
for all continuity points $\lambda$ of the right-hand side, and all $\theta_1, \ldots, \theta_H \in\mathbb{R}$ by, for example, Theorem 13.2.2 in \citet{whitt2002}. This proves the proposition. \end{pf}
\section{Proof of the main theorem} \label{secproof}
In this section, we prove Theorem~\ref{tFCLT}. We start with several preliminary results. The first lemma explains the asymptotic relation (\ref{eqasycn}).
\begin{lemma} \label{lmomentgrowth} Under the assumptions of Proposition~\ref{plineartime}, assume, additionally, that the set $A$ supporting $f$ is a Darling--Kac set. Let $0<\alpha<2$. If $1<\alpha<2$, assume, additionally, that $f\in L^2(\mu)$, and that either:
\begin{longlist}[(ii)]
\item[(i)] $A$ is a uniform set for $|f|$, or
\item[(ii)] $f$ is bounded. \end{longlist}
Then
\begin{equation}\label{eqasynormalizing}
\biggl( \int_E \bigl|S_n(f)\bigr|^{\alpha} \,d\mu
\biggr)^{1/\alpha} \sim \bigl|\mu(f)\bigr| C_{\alpha,\beta} a_n w_n^{1/\alpha} \qquad\mbox{as } n \to \infty \end{equation}
and (\ref{eqasycn}) holds. \end{lemma}
\begin{pf} It is an elementary calculation to check that (\ref{eqasynormalizing}) implies (\ref{eqasycn}), so in the sequel we concentrate on checking (\ref{eqasynormalizing}). It follows from (\ref{eqwanderingmu}) and the fact that $f$ is supported by $A$ that
\begin{equation}
\biggl( \int_E \bigl|S_n(f)\bigr|^{\alpha} \,d\mu \biggr)^{1/\alpha} = a_n \bigl(\mu(\varphi\leq n) \bigr)^{1/\alpha} A_n^{(\alpha)} \sim a_n w_n^{1/\alpha} A_n^{(\alpha)}, \label{eqexpressioncn} \end{equation}
where $A_n^{(\alpha)} =( \int_E |S_n(f) / a_n |^{\alpha} \,d \mu_n )^{1/\alpha}$. Therefore, proving (\ref{eqasynormalizing}) reduces to checking that
\begin{equation}
\label{emomentconv} A_n^{(\alpha)} \to\bigl|\mu(f)\bigr| C_{\alpha,\beta} \qquad\mbox{as }n\to\infty. \end{equation}
If $\alpha=1$ and $f$ is nonnegative, then this follows by direct calculation, using the definition of
$C_{\alpha,\beta}$. If $f$ is not necessarily nonnegative, we can use the obvious bound $-S_n(|f|)\leq S_n(f)\leq S_n(|f|)$ together with the so-called Pratt lemma; see \citet{pratt1960}, or Problem 16.4(a) in \citet{billingsley1995}.
It remains to consider the case $\alpha\in(0,1)\cup(1,2)$. Proposition~\ref{plineartime} shows that $ ( A_n^{(\alpha)} )$ is the sequence of the $\alpha$-norms of a weakly converging sequence, and the expression in the right-hand side of (\ref{emomentconv}) is easily seen to be the $\alpha$-norm of the weak limit. Therefore, our statement will follow once we show that this weakly convergent sequence is uniformly integrable, which we proceed now to do.
Suppose first that $0<\alpha<1$. Recalling the relation (\ref{eqprop387}) and the fact that $T$ preserves measure $\mu$, we see that
\begin{eqnarray}\label{eunifl1} \sup_{n \geq1} \int_E \biggl\vert \frac{S_n(f)}{a_n} \biggr\vert \,d\mu_n &=& \sup_{n \geq1}
\frac{1}{a_n \mu(\varphi\leq n)} \int_E \bigl|S_n(f)\bigr| \,d\mu \nonumber\\[-8pt]\\[-8pt] &\leq&\sup_{n \geq1} \frac{n}{a_n \mu(\varphi\leq n)} \int_E
|f| \,d\mu <\infty,\nonumber \end{eqnarray}
which proves uniform integrability in this case.
Finally, we consider the case $1<\alpha<2$, where it is sufficient to prove that
\begin{equation} \sup_{n \geq1} \int_E \biggl( \frac{S_n(f)}{a_n} \biggr)^2 \,d\mu_n < \infty. \label{equnifintegral} \end{equation}
Under the assumption (i), since $f$ is supported by $A$, we can use the duality relation (\ref{edualrel}) to write
\begin{eqnarray*} \int_E S_n(f)^2 \,d\mu&=& n \int _E f^2 \,d\mu+ \sum _{k=1}^n \sum_{l=1, k \neq l}^n \int_E f \circ T^k f \circ T^l d \mu \\ &=& n \int_E f^2 \,d\mu+ 2\sum _{k=1}^{n-1} \sum_{j=1}^{n-k} \int_A \widehat{T}{}^j f \cdot f \,d\mu, \end{eqnarray*}
so that
\begin{eqnarray*} && \int_E \biggl( \frac{S_n(f)}{a_n} \biggr)^2 \,d\mu_n \\ &&\qquad \leq\frac{n}{a_n^2 \mu(\varphi\leq n)} \int_E f^2 \,d\mu+ \frac{2}{a_n^2 \mu(\varphi \leq n)} \sum_{k=1}^{n-1} \sum_{j=1}^{n-k} \int_A
\widehat{T}{}^j |f| \cdot|f| \,d\mu. \end{eqnarray*}
Clearly, $n/ ( a_n^2 \mu(\varphi\leq n) ) \to0$. Further, since $A$ is uniform for $|f|$,
\begin{eqnarray*} && \frac{1}{a_n^2 \mu(\varphi\leq n)} \sum_{k=1}^{n-1} \sum _{j=1}^{n-k} \int_A
\widehat{T}{}^j |f| \cdot|f| \,d\mu \\ &&\qquad \leq\frac{n}{a_n \mu(\varphi\leq n)} \int _A \frac{1}{a_n} \sum_{j=1}^n
\widehat{T}{}^j |f| \cdot|f| \,d\mu
\sim\mu\bigl(|f|\bigr)^2 \frac{n}{a_n \mu(\varphi\leq n)}. \end{eqnarray*}
Using (\ref{eqprop387}), we see that (\ref{equnifintegral}) follows. On the other hand, under the assumption~(ii), the ratio $S_n(f) / S_n(\mathbf{1}_A)$ is bounded, hence for some finite $C> 0$,
\[ \sup_{n \geq1} \int_E \biggl( \frac{S_n(f)}{a_n} \biggr)^2 \,d\mu_n \leq C\sup _{n \geq1} \int_E \biggl( \frac{S_n(\mathbf{1}_A)}{a_n} \biggr)^2 \,d\mu_n. \]
However, the Darling--Kac property of $A$ means that it is uniform for $\mathbf{1}_A$, and so we are, once again, under the assumption (i). \end{pf}
In preparation for the proof of Theorem~\ref{tFCLT}, we introduce a useful decomposition of the process $\mathbf{X}$ given in (\ref{etheprocess}). We begin by decomposing the local L\'evy measure $\rho$ into a sum of two parts, corresponding to ``large jumps'' and ``small jumps.'' Let
\begin{eqnarray*}
\rho_1 (\cdot) &=& \rho \bigl(\cdot\cap\bigl\{|x| > 1 \bigr\} \bigr), \\
\rho_2 (\cdot) &=& \rho \bigl(\cdot\cap\bigl\{|x| \leq1 \bigr\} \bigr) \end{eqnarray*}
and let $M_1, M_2$ be independent homogeneous symmetric infinitely divisible random measures, without a Gaussian component, with the same control measure $\mu$ and local L\'evy measures $\rho_1, \rho_2$ accordingly. Under the integrability assumptions (\ref{eqintegrabilitycond}), the stochastic processes $X_n^{(i)} = \int_E f \circ T^n(x) \,dM_i(x), n=1,2,\ldots,$ for $i=1,2$, are independent stationary infinitely divisible processes, and $X_n = X_n^{(1)} + X_n^{(2)}, n=1,2,\ldots.$
Our final lemma shows that, from the point of view of the central limit behavior in the case $0<\alpha<1$, the contribution of the process $ (X_n^{(2)} )$, corresponding to the ``small jumps,'' is negligible.
\begin{lemma} \label{lsmalljumps} If $0<\alpha<1$, then
\begin{equation} \frac{1}{c_n} \sum_{k=1}^n X_k^{(2)} \stackrel{p} {\to} 0. \label{eqnegp} \end{equation}
\end{lemma}
\begin{pf} By Chebyshev's inequality, for any $\epsilon>0$,
\[
P \Biggl( \Biggl|\sum_{k=1}^n X_k^{(2)}\Biggr| > \epsilon c_n \Biggr) \leq
\frac{n}{\epsilon c_n} E\bigl|X_1^{(2)}\bigr| \to0 \]
(since $c_n \in RV_{\beta+ (1-\beta)/\alpha}$ implies $n/c_n \to0$ in the case $0<\alpha<1$) as long as the expectation $E|X_1^{(2)}|$ is finite. Since for every $p_1>p_0$ in (\ref{eqorginregularity}) and $p_1\geq1$,
\[
\int_E\int_\mathbb{R}\bigl|xf(s)\bigr|\mathbf{1}
\bigl( \bigl|xf(s)\bigr|>1 \bigr) \rho_2(dx) \mu(ds) \leq\int _{-1}^1 |x|^{p_1} \rho(dx) \int _E \bigl|f(s)\bigr|^{p_1} \mu (ds), \]
the expectation is finite because, by (\ref{eqintegrabilitycond}), we can find $p_1$ as above such that $\int_E |f|^{p_1} \,d\mu<\infty$. \end{pf}
\begin{pf*}{Proof of Theorem~\ref{tFCLT}} We start by proving the finite-dimensional weak convergence, for which it is enough to show the convergence
\[ \frac{1}{c_n} \sum_{h=1}^H \theta_h \sum_{k=1}^{\lceil nt_h \rceil}
X_k \Rightarrow\bigl|\mu(f)\bigr|\sum_{h=1}^H \theta_h Y_{\alpha, \beta}(t_h) \]
for all $H \geq1$, $0 \leq t_1 < \cdots< t_H$, and $\theta_1, \ldots, \theta_H \in\mathbb{R}$. Conditions for weak convergence of infinitely divisible random variables [see, e.g., Theorem 15.14 in \citet{kallenberg2002}] simplify in this one-dimensional symmetric case to
\begin{eqnarray} \label{eqfirstcond} && \int_E \Biggl( \frac{1}{c_n} \sum _{h=1}^H \theta_h S_{\lceil nt_h
\rceil}(f) \Biggr)^2 \int _0^{rc_n / |\sum\theta_h S_{\lceil nt_h
\rceil}(f)|} x \rho(x,\infty) \,dx \,d\mu\nonumber \\
&&\qquad \to\frac{r^{2-\alpha} C_\alpha}{2-\alpha} \bigl|\mu(f)\bigr|^\alpha \\ &&\quad\qquad{} \times \int_{[0,\infty)}\int_{\Omega^{\prime}} \Biggl\vert \sum_{h=1}^H \theta_h M_{\beta}\bigl((t_h-x)_+, \omega^{\prime}\bigr) \Biggr\vert ^{\alpha} P^{\prime}\bigl(d \omega^{\prime}\bigr) \nu(dx)\nonumber \end{eqnarray}
and
\begin{eqnarray}\label{eqomit}
&& \int_{E} \rho \Biggl( rc_n \Biggl| \sum _{h=1}^H \theta_h S_{\lceil nt_h \rceil}
(f)\Biggr|^{-1}, \infty \Biggr) \,d\mu\nonumber \\
&&\qquad \to r^{-\alpha} C_\alpha \bigl|\mu(f)\bigr|^\alpha \\ &&\quad\qquad {}\times \int _{[0,\infty)} \int_{\Omega^{\prime}} \Biggl\vert \sum _{h=1}^H \theta_h M_{\beta}\bigl((t_h-x)_+,\omega^{\prime}\bigr) \Biggr \vert ^{\alpha} P^{\prime}\bigl(d\omega^{\prime}\bigr) \nu(dx)\nonumber \end{eqnarray}
for every $r>0$. Fix $L \in\mathbb{N}$ with $t_H \leq L$ and $r>0$.
Since the argument for (\ref{eqfirstcond}) and the argument for (\ref{eqomit}) are very similar, we only prove (\ref{eqfirstcond}). By Proposition \ref{plineartime} and Skorohod's embedding theorem, there is some probability space $ ( \Omega^*, {\mathcal F}^*, P^* )$ and random variables $Y$, $Y_n$, $n=1,2,\ldots$ defined on that space such that, for every $n$, the law of $Y_n$ coincides with the law of $a_n^{-1}\sum_{h=1}^H \theta_h S_{\lceil nt_h \rceil}(f)$ under $\mu_{nL}$, the law of $Y$ coincides with the law of $\mu(f)\Gamma (1+\beta) \sum_{h=1}^H \theta_h M_{\beta}((t_h - T_{\infty}^{(L)})_+)$ under $P^{\prime}$, and $Y_n\to Y$ $P^*$-a.s.
Introduce a function
\[ \psi(y) = y^{-2} \int_0^{ry} x \rho(x,\infty) \,dx,\qquad y>0, \]
so that the expression in the left-hand side of (\ref{eqfirstcond}) becomes
\[
\int_E \psi \biggl( \frac{c_n}{|\sum_{h=1}^H \theta_h S_{\lceil nt_h \rceil}(f)|} \biggr) \,d\mu =
\mu (\varphi\leq nL ) E^* \biggl[ \psi \biggl( \frac{c_n}{a_n |Y_n|} \biggr) \biggr]. \]
By Karamata's theorem [see, e.g., Theorem 0.6 in \citet{resnick1987}],
\[ \psi(y) \sim\frac{r^2}{2-\alpha} \rho(ry,\infty) \qquad\mbox{as } y \to\infty, \]
so that, as $n\to\infty$,
\begin{eqnarray}\label{eqnegligbleuniform}
\qquad&& \mu (\varphi\leq nL ) \psi \biggl( \frac{c_n}{a_n|Y_n|} \biggr) \nonumber \\
&&\qquad \sim\frac{r^2}{2-\alpha} \mu (\varphi\leq nL ) |Y_n|^{\alpha} \rho \bigl( rc_na_n^{-1},\infty \bigr) \\ &&\quad\qquad{} + \frac{r^{2}}{2-\alpha} \mu (\varphi\leq nL ) \rho \bigl(rc_n a_n^{-1}, \infty \bigr) \biggl( \frac{\rho (rc_n a_n^{-1}
|Y_n|^{-1},\infty )}{\rho (rc_n a_n^{-1},\infty )} -
|Y_n|^{\alpha} \biggr).\nonumber \end{eqnarray}
By (\ref{eqasycn}), Lemma~\ref{lmomentgrowth} and (\ref{eqwanderingmu}),
\begin{equation} \qquad\rho\bigl(rc_n a_n^{-1},\infty\bigr) \sim r^{-\alpha} C_\alpha \bigl( \Gamma(1+\beta) \bigr)^{-\alpha} \bigl(\mu(\varphi\leq n) \bigr)^{-1} \qquad\mbox{as } n \to\infty. \label{eqrhoandmu} \end{equation}
This, together with the basic properties of regularly varying functions of a negative index [see, e.g., Proposition 0.5, \citet{resnick1987}], shows that the second term in the right-hand side of (\ref{eqnegligbleuniform}) converges to $0$. Therefore,
\[
\mu (\varphi\leq nL ) \psi \biggl( \frac{c_n}{a_n|Y_n|} \biggr) \to \frac{r^{2-\alpha}}{2-\alpha} C_\alpha L^{1-\beta
} \biggl( \frac{|Y|}{\Gamma(1+\beta)} \biggr)^{\alpha}. \]
Integrating the limit yields
\begin{eqnarray*} && E^* \biggl[ \frac{r^{2-\alpha}}{2-\alpha} C_\alpha L^{1-\beta } \biggl(
\frac{|Y|}{\Gamma(1+\beta)} \biggr)^{\alpha} \biggr] \\ &&\qquad = \frac{r^{2-\alpha}}{2-\alpha}
C_\alpha L^{1-\beta} \bigl|\mu (f)\bigr|^\alpha E^{\prime} \Biggl[ \sum_{h=1}^H \theta_h M_{\beta}\bigl(\bigl(t_h-T_{\infty}^{(L)} \bigr)_+\bigr) \Biggr]^{\alpha} \\
&&\qquad = \frac{r^{2-\alpha}C_\alpha}{2-\alpha} \bigl|\mu(f)\bigr|^\alpha \int_{[0,\infty)} \int_{\Omega^{\prime}} \Biggl( \sum_{h=1}^H \theta_h M_{\beta}\bigl((t_h-x)_+, \omega^{\prime }\bigr) \Biggr)^{\alpha} P^{\prime}\bigl(d \omega^{\prime}\bigr) \nu(dx), \end{eqnarray*}
which is exactly the right-hand side of (\ref{eqfirstcond}). Therefore, in order to complete the proof of (\ref{eqfirstcond}), we only need to justify taking the limit inside the integral. For this purpose, we use, once again, Pratt's lemma. We need to exhibit random variables~$G_n$, $n=0,1,2,\ldots$ on $ ( \Omega^*, {\mathcal F}^*, P^* )$ such that
\begin{eqnarray}
\mu (\varphi\leq nL ) \psi \biggl( \frac{c_n}{a_n|Y_n|} \biggr) &\leq& G_n, \qquad P^*\mbox{-a.s.}, \label{eq1stPratt} \\ G_n &\to& G_0, \qquad P^*\mbox{-a.s.}, \label{eqeasyPratt} \\ E^* G_n &\to& E^* G_0 \in[0,\infty). \label{eq3rdPratt} \end{eqnarray}
We start with writing [using (\ref{eqrhoandmu})]
\begin{eqnarray*}
&& \mu (\varphi\leq nL ) \psi \biggl( \frac{c_n}{a_n|Y_n|} \biggr) \\
&&\qquad \leq C_1 \frac{\psi ( c_n a_n^{-1} |Y_n|^{-1} )}{\psi(c_n a_n^{-1})} \mathbf{1}_{\{ c_n > a_n |Y_n| \}} + C_1
\frac{\psi ( c_n a_n^{-1} |Y_n|^{-1} )}{\psi(c_n a_n^{-1})} \mathbf{1}_{\{ c_n \leq a_n |Y_n| \}}, \end{eqnarray*}
where $C_1>0$ is a constant. Suppose first that $1 \leq\alpha< 2$, and choose $0<\xi<2-\alpha$. Then by the Potter bounds [see Proposition 0.8 in \citet{resnick1987}], for some constant $C_2 > 0$,
\[
\frac{\psi ( c_n a_n^{-1} |Y_n|^{-1} )}{\psi(c_n a_n^{-1})} \mathbf{1}_{\{ c_n > a_n |Y_n| \}} \leq C_2
\bigl(|Y_n|^{\alpha-
\xi} + |Y_n|^{\alpha+ \xi}\bigr) \]
for all $n$ large enough. Further, since $y^2 \psi(y) \to0$ as $y \downarrow0$, we have, for some constant $C_3>0$,
\[
\frac{\psi ( c_n a_n^{-1} |Y_n|^{-1} )}{\psi(c_n a_n^{-1})} \mathbf{1}_{\{ c_n \leq a_n |Y_n| \}} \leq C_3 \biggl(
\frac{a_n}{c_n} \biggr)^2 \frac{|Y_n|^2}{\psi(c_n a_n^{-1})}, \]
hence, for some constant $C_4>0$,
\begin{equation}
\qquad\quad \mu (\varphi\leq nL ) \psi \biggl( \frac{c_n}{a_n|Y_n|} \biggr) \leq C_4 \biggl( |Y_n|^{\alpha- \xi} +
|Y_n|^{\alpha+ \xi} + \biggl( \frac{a_n}{c_n}
\biggr)^2 \frac{|Y_n|^2}{\psi(c_n a_n^{-1})} \biggr) \label{eqmorethan1} \end{equation}
for all $n$ (large enough) and all realizations. We take
\begin{eqnarray*}
G_n &=& C_4 \biggl( |Y_n|^{\alpha- \xi} +
|Y_n|^{\alpha+ \xi} + \biggl( \frac{a_n}{c_n}
\biggr)^2 \frac{|Y_n|^2}{\psi(c_n a_n^{-1})} \biggr), \qquad n=1,2,\ldots, \\
G_0 &=& C_4\bigl(|Y|^{\alpha- \xi} +
|Y|^{\alpha+ \xi}\bigr). \end{eqnarray*}
Then (\ref{eq1stPratt}) holds by construction, while (\ref{eqeasyPratt}) follows from the fact that
\[ \biggl( \frac{a_n}{c_n} \biggr)^2 \frac{1}{\psi(c_n a_n^{-1})} \in RV_{(1-\beta)(1-2/\alpha)} \]
and $(1-\beta)(1-2/\alpha) < 0$. Keeping this in mind, and recalling that, by (\ref{equnifintegral}) (which holds also for $\alpha=1$ under the assumptions of the theorem), $\sup_{n\geq1}E^*Y_n^2<\infty$, we obtain the uniform integrability implying (\ref{eq3rdPratt}). This proves (\ref{eqfirstcond}) in the case $1\leq\alpha<2$.
If $0 < \alpha< 1$, then Lemma~\ref{lsmalljumps} allows us to assume, without loss of generality, that $\rho(x\dvtx |x| \leq1) = 0$. Then $\psi$ is bounded on $(0,1]$, so that for some $C_5>0$,
\[
\frac{\psi(c_n a_n^{-1} |Y_n|^{-1})}{\psi(c_n a_n^{-1})} \mathbf {1}_{\{ c_n
\leq a_n |Y_n| \}} \leq C_5
\frac{a_n}{c_n} \frac{|Y_n|}{\psi(c_n a_n^{-1})} \]
and the upper bound (\ref{eqmorethan1}) is replaced with
\[ \mu (\varphi\leq nL ) \psi \biggl( \frac{c_n}{a_n
|Y_n|} \biggr) \leq C_6 \biggl( |Y_n|^{\alpha-
\xi} + |Y_n|^{\alpha+ \xi}
+ \frac{a_n}{c_n} \frac{|Y_n|}{\psi(c_n a_n^{-1})} \biggr) \]
for some $C_6>0$, where we now choose $0<\xi<1-\alpha$. Since
\[ \frac{a_n}{c_n} \frac{1}{\psi(c_n a_n^{-1})} \in RV_{(1-\beta )(1-1/\alpha)} \]
with $(1-\beta)(1-1/\alpha) < 0$ and $\sup_{n\geq1}E^*|Y_n|<\infty$ by (\ref{eunifl1}), an argument similar to the case $1\leq\alpha<2$ applies here as well. A similar argument proves, in the case $0<\alpha<1$, the ``positive'' version described in Remark \ref{rkpositive}.
It remains to prove that the laws in the left-hand side of (\ref{emain}) are tight in $D[0,L]$ for any fixed $L>0$. By Theorem 13.5 of \citet{billingsley1999}, it is enough to show that there exist $\gamma_1 > 1$, $\gamma_2 \geq0$ and $B>0$ such that
\[
P \Biggl[ \min \Biggl( \Biggl|\sum_{k=1}^{\lceil ns \rceil} X_k - \sum_{k=1}^{\lceil nr \rceil}
X_k \Biggr|, \Biggl|\sum_{k=1}^{\lceil nt \rceil} X_k - \sum_{k=1}^{\lceil ns \rceil}
X_k \Biggr| \Biggr)\geq\lambda c_n \Biggr] \leq \frac{B}{\lambda^{\gamma_2}} (t-r)^{\gamma_1} \]
for all $0 \leq r \leq s \leq t \leq L$, $n \geq1$ and $\lambda> 0$. We start with the simple observation that, in the case $0<\alpha<1$, we may assume that the function $f$ is bounded. To see this, note that we can always write $f=f\mathbf{1}_{|f|>M} + f\mathbf{1}_{|f|\leq M}$, and use the finite-dimensional convergence in (\ref{emainpos}) and the fact that $\mu ( f\mathbf{1}_{|f|>M} )\to 0$ as $M\to\infty$.
Next, for any $0<\alpha<2$, if $0<t-r < 1/n$, then the probability in the left-hand side vanishes. Let $X_n = X_n^{(1)} + X_n^{(2)}$, $n=1,2,\ldots,$ be the decomposition described prior to Lemma~\ref{lsmalljumps}. We start with the part corresponding to the ``small jumps.'' Note that, by Lemma \ref{lsmalljumps}, this part is negligible if $0<\alpha<1$ (since we can apply the lemma to the supremum of the process). Therefore, we only consider the case $1\leq\alpha<2$, and prove that there exist $\gamma_1 > 1$,
$\gamma_2 \geq0$ and $B>0$ such that for all $0 \leq s \leq t \leq L$, $n \geq1$, $|t-s|\geq1/n$ and $\lambda>0$,
\begin{equation}
\label{epart2} P \Biggl( \Biggl|\sum_{k=1}^{\lceil nt \rceil} X_k^{(2)} - \sum_{k=1}^{\lceil ns \rceil}
X_k^{(2)} \Biggr| \geq\lambda c_n \Biggr) \leq \frac{B}{\lambda^{\gamma_2}} (t-s)^{\gamma_1}. \end{equation}
Note that the L\'evy--It{\^o} decomposition yields
\begin{eqnarray*} && \sum_{k=1}^{\lceil nt \rceil}X_k^{(2)} - \sum_{k=1}^{\lceil ns \rceil}X_k^{(2)} \\ &&\qquad \stackrel{d} {=} \int_E S_{\lceil nt \rceil- \lceil ns \rceil} (f) \,dM_2 \\
&&\qquad \stackrel{d} {=} \mathop{\int\!\!\int}_{|x S_{\lceil nt \rceil- \lceil ns \rceil
} (f)| \leq\lambda c_n} x S_{\lceil nt \rceil- \lceil ns \rceil} (f) \,d\overline{N}_2 \\
&&\quad\qquad{} + \mathop{\int\!\!\int}_{|x S_{\lceil nt \rceil- \lceil ns
\rceil} (f)| > \lambda c_n} x S_{\lceil nt \rceil- \lceil ns \rceil} (f) \,dN_2, \end{eqnarray*}
where $N_2$ is a Poisson random measure on $\mathbb{R} \times E$ with mean measure $\rho_2 \times\mu$ and $\overline{N}_2 \equiv N_2 - ( \rho_2 \times\mu )$. Therefore,
\begin{eqnarray}\label{eqLevyIto}
&&P \Biggl( \Biggl|\sum_{k=1}^{\lceil nt \rceil}X_k^{(2)} - \sum_{k=1}^{\lceil ns \rceil}X_k^{(2)}
\Biggr| \geq\lambda c_n \Biggr) \nonumber \\
&&\qquad \leq P \biggl( \biggl| \mathop{\int\!\!\int}_{|x S_{\lceil nt \rceil- \lceil ns \rceil}
(f)| \leq\lambda c_n} x S_{\lceil nt \rceil- \lceil ns \rceil} (f) \,d
\overline{N}_2 \biggr| \geq\lambda c_n \biggr) \\
&&\quad\qquad{} + P \biggl( \biggl|
\mathop{\int\!\!\int}_{|x S_{\lceil nt \rceil- \lceil ns \rceil}
(f)| > \lambda c_n} x S_{\lceil nt \rceil- \lceil ns
\rceil} (f) \,dN_2 \biggr| > 0 \biggr). \nonumber \end{eqnarray}
It follows from (\ref{eqorginregularity}) that for some constant $C_1>0$,
\begin{eqnarray*}
&& P \biggl( \biggl| \mathop{\int\!\!\int}_{|x S_{\lceil nt \rceil- \lceil ns \rceil}
(f)| \leq\lambda c_n} x S_{\lceil nt \rceil- \lceil ns \rceil} (f) \,d
\overline{N}_2 \biggr| \geq\lambda c_n \biggr) \\ &&\qquad \leq
\frac{1}{\lambda^2 c_n^2} E \biggl\vert \mathop{\int\!\!\int}_{|x S_{\lceil nt \rceil- \lceil ns \rceil} (f)| \leq\lambda c_n} x S_{\lceil nt \rceil- \lceil ns \rceil} (f) \,d\overline{N}_2 \biggr\vert ^2 \\
&&\qquad = \frac{1}{\lambda^2 c_n^2} \mathop{\int\!\!\int}_{|x S_{\lceil nt \rceil-
\lceil ns \rceil} (f)| \leq\lambda c_n} \bigl\vert x S_{\lceil nt \rceil- \lceil ns \rceil} (f) \bigr\vert ^2 \rho_2(dx) \,d\mu \\ &&\qquad \leq4 \int_E \biggl( \frac{S_{\lceil nt \rceil- \lceil ns \rceil} (f)}{\lambda c_n}
\biggr)^2 \int _0^{\lambda c_n / |S_{\lceil nt
\rceil- \lceil ns \rceil} (f)|} x \rho_2(x,\infty) \,dx \,d\mu \\ &&\qquad \leq\frac{C_1}{ \lambda^{p_0}} \frac{1}{c_n^{p_0}} \int_E
\bigl|S_{\lceil nt \rceil- \lceil ns \rceil} (f)\bigr|^{p_0} \,d\mu. \end{eqnarray*}
Similarly, for some constant $C_2>0$,
\begin{eqnarray*}
&& P \biggl( \biggl| \mathop{\int\!\!\int}_{|x S_{\lceil nt \rceil- \lceil ns \rceil}
(f)| > \lambda c_n} x S_{\lceil nt \rceil- \lceil ns \rceil} (f)
\,dN_2\biggr| > 0 \biggr) \\
&&\qquad \leq P \bigl(N_2 \bigl\{ \bigl|x S_{\lceil nt \rceil- \lceil ns \rceil} (f)\bigr| > \lambda c_n \bigr\} \geq1 \bigr) \\
&&\qquad \leq EN_2 \bigl\{ \bigl|x S_{\lceil nt \rceil- \lceil ns \rceil} (f)\bigr| > \lambda c_n \bigr\} \\
&&\qquad = 2 \int_E \rho_2 \bigl(\lambda c_n \bigl|S_{\lceil nt \rceil- \lceil ns
\rceil} (f)\bigr|^{-1},\infty \bigr) \,d\mu \\ &&\qquad \leq\frac{C_2}{ \lambda^{p_0}} \frac{1}{c_n^{p_0}} \int_E
\bigl|S_{\lceil nt \rceil- \lceil ns \rceil} (f)\bigr|^{p_0} \,d\mu. \end{eqnarray*}
Recall the notation $A_n^{(p_0)} =( \int_E |S_n(f) / a_n |^{p_0} \,d \mu_n )^{1/p_0}$ in (\ref{eqexpressioncn}). We conclude that
\begin{eqnarray*}
&& P \Biggl( \Biggl|\sum_{k=1}^{\lceil nt \rceil}X_k^{(2)} - \sum_{k=1}^{\lceil ns \rceil}X_k^{(2)}
\Biggr| \geq\lambda c_n \Biggr) \\ &&\qquad \leq \frac{C_1+C_2}{ \lambda^{p_0}} \frac{1}{c_n^{p_0}}
\int_E \bigl|S_{\lceil nt \rceil- \lceil ns \rceil} (f)\bigr|^{p_0} \,d\mu \\ &&\qquad = \frac{C_1+C_2}{ \lambda^{p_0}}\frac{\mu(\varphi\leq\lceil nt \rceil - \lceil ns \rceil)}{\mu(\varphi\leq n)} \biggl( \frac{a_{\lceil nt \rceil- \lceil ns \rceil}}{a_n} \biggr)^{p_0} \frac{(A_{\lceil nt \rceil- \lceil ns \rceil}^{( p_0)})^{p_0}}{c_n^{p_0} \mu(\varphi \leq n)^{-1} a_n^{-p_0}}. \end{eqnarray*}
It follows from (\ref{equnifintegral}) that
\[ \sup_{n \geq1, 0\leq s \leq t \leq L} A_{\lceil nt \rceil- \lceil ns \rceil}^{( p_0)} < \infty. \]
Next, we may, if necessary, increase $p_0$ in (\ref{eqorginregularity}) to achieve $p_0>\alpha$. In that case, the sequence $ c_n^{p_0} \mu(\varphi\leq n)^{-1} a_n^{-p_0} \in RV_{(1-\beta)(p_0/\alpha-1)}$ diverges to infinity, so for some constant $C_3>0$,
\[
\frac{1}{c_n^{p_0}} \int_E \bigl|S_{\lceil nt \rceil- \lceil ns \rceil}
(f)\bigr|^{p_0} \,d\mu\leq C_3 \frac{\mu(\varphi\leq\lceil n(t-s) \rceil)}{\mu(\varphi\leq n)} \biggl( \frac{a_{\lceil n(t-s) \rceil}}{a_n} \biggr)^{p_0}. \]
By the regular variation and the constraint $t-s\geq1/n$, for every $0<\eta<\min(\beta,1-\beta)$, there is $C_4>0$, such that
\begin{eqnarray*} \frac{\mu(\varphi\leq\lceil n(t-s) \rceil)}{\mu(\varphi\leq n)} &\leq& C_4 \biggl( \frac{\lceil n(t-s) \rceil}{n} \biggr)^{1-\beta -\eta} \leq2^{1-\beta-\eta} C_4 (t-s) ^{1-\beta-\eta}, \\ \frac{a_{\lceil n(t-s)\rceil}}{a_n} &\leq&2^{\beta-\eta} C_4 (t-s) ^{\beta-\eta}. \end{eqnarray*}
Therefore, for some constant $C_5>0$,
\[
P \Biggl( \Biggl|\sum_{k=1}^{\lceil nt \rceil}X_k^{(2)} - \sum_{k=1}^{\lceil ns \rceil}X_k^{(2)}
\Biggr| \geq\lambda c_n \Biggr) \leq C_5\frac{1}{\lambda^{p_0}} (t-s)^{1+(p_0-1)\beta-(1+p_0)\eta}. \]
Since $p_0> \alpha\geq1$, we can choose $\eta>0$ so small that $1+(p_0-1)\beta- (1+p_0)\eta>0$. This establishes (\ref{epart2}).
Next, we take up the process $ ( X_n^{(1)} )$. The L\'evy--It{\^o} decomposition and the symmetry of the L\'evy measure $\rho_1$ allow us to write, for any $K>0$,
\begin{eqnarray*} \frac{1}{c_n} \sum_{k=1}^{\lceil nt \rceil}
X_k^{(1)} &\stackrel{d} {=}& \frac{1}{c_n} \sum _{k=1}^{\lceil nt \rceil} \mathop{\int\!\!\int}_{|x f_k|
\leq K c_na_n^{-1}} x f_k \,d\overline{N}_1 + \frac{1}{c_n} \sum _{k=1}^{\lceil nt \rceil} \mathop{\int\!\!\int}_{|x f_k| > Kc_n a_n^{-1}} x f_k \,dN_1 \\ &:=& Z_n^{(1,K)}(t) + Z_n^{(2,K)}(t), \end{eqnarray*}
where $N_1$ and $\overline{N}_1$ are as above. Here, we first show that for any $\epsilon>0$,
\begin{equation} \label{elemma571} \lim_{K \to\infty} \limsup_{n \to\infty} P
\Bigl( \sup_{ 0 \leq t \leq L} \bigl|Z_n^{(2,K)}(t) \bigr| \geq \epsilon \Bigr) =0. \end{equation}
Consider first the case $1<\alpha<2$. Choose $0<\tau\leq2-\alpha$, and define
\begin{eqnarray*} \kappa(w) &=& \cases{ 1, &\quad if $0\leq w< 1$, \cr w^{-(\alpha+\tau)}, &\quad if $w\geq1$,} \\ g(w) &=& \bigl( (w+1)\kappa(w) \bigr)^{-1},\qquad w\geq0. \end{eqnarray*}
Since $2g(w)/g(u)\geq1$ for $0\leq u\leq w$, we have
\begin{eqnarray*}
&& P \Bigl( \sup_{ 0 \leq t \leq L} \bigl|Z_n^{(2,K)}(t) \bigr| \geq\epsilon \Bigr) \\
&&\qquad \leq P \Biggl( \mathop{\int\!\!\int}_{\mathbb{R} \times E} |x| \sum _{k=1}^{ nL } |f|\circ T^k \mathbf{1}
\bigl( |x||f|\circ T^k > Kc_n a_n^{-1} \bigr) \,dN_1 \geq\epsilon c_n \Biggr) \\
&&\qquad = P \Biggl( 2\mathop{\int\!\!\int}_{\mathbb{R} \times E} |x| \sum_{k=1}^{ nL }
|f|\circ T^k g \bigl( |f|\circ T^k \bigr)
\frac{1}{g ( Kc_n a_n^{-1}/|x| )} \,dN_1 \geq\epsilon c_n \Biggr) \\ &&\qquad \leq \frac{2}{\epsilon} c_n^{-1} E \Biggl(
\mathop{\int\!\!\int}_{\mathbb{R} \times E} |x| \sum_{k=1}^{ nL }
|f|\circ T^k g \bigl( |f|\circ T^k \bigr)
\frac{1}{g ( Kc_n a_n^{-1}/|x| )} \,dN_1 \Biggr) \\ &&\qquad \leq C_1 nc_n^{-1} \int_1^\infty x \bigl( Kc_n a_n^{-1}/x+1 \bigr) \kappa \bigl( Kc_n a_n^{-1}/x \bigr) \rho(dx), \end{eqnarray*}
where $C_1>0$ is another constant. It is now straightforward to check that for some constant $C_2>0$,
\[ \limsup_{n\to\infty} P \Bigl( \sup_{ 0 \leq t \leq L}
\bigl|Z_n^{(2,K)}(t) \bigr| \geq\epsilon \Bigr) \leq C_2 K^{-(\alpha-1)}. \]
This implies (\ref{elemma571}).
On the other hand, let $0 < \alpha\leq1$. Recall that we are now assuming that the function $f$ is bounded. We have
\begin{eqnarray*}
P \Bigl( \sup_{0 \leq t \leq L}\bigl|Z_n^{(2,K)}(t)\bigr| \geq \epsilon \Bigr) &\leq& P \Bigl( \max_{k=1,\ldots,nL} N_1 \bigl
\{ (x,s)\dvtx \bigl|xf_k(s)\bigr| > K c_n a_n^{-1} \bigr\} \geq1 \Bigr) \\
&\leq& E N_1 \Bigl\{ (x,s)\dvtx |x| \max_{k=1,\ldots, nL}
|f_k| > K c_n a_n^{-1} \Bigr\} \\ &=& 2 \int_E \rho_1 \biggl( \frac{K c_n a_n^{-1}}{ \max_{k=1,\ldots, nL}
|f_k|}, \infty \biggr) \,d\mu. \end{eqnarray*}
If we denote $\parallel f \parallel= \sup_{x\in E} |f(x)|<\infty$, then we can use once again Potter's bounds to see that for some constant $C_1>0$ and $0<\xi<\alpha$,
\begin{eqnarray*}
&& \frac{\rho_1 (Kc_n a_n^{-1} (\max_k|f_k|)^{-1},\infty )}{\rho_1(c_na_n^{-1},\infty)} \\
&&\qquad \leq C_1 \biggl( \biggl( \frac{1}{K} \max _{k=1,\ldots,nL} |f_k| \biggr)^{\alpha-\xi} + \biggl(
\frac{1}{K} \max_{k=1,\ldots,nL} |f_k| \biggr)^{\alpha+\xi} \biggr). \end{eqnarray*}
Therefore by (\ref{eqwanderingmu}), (\ref{eqasycn}) and the fact that $f$ is supported by $A$, for some constant $C_2 >0$,
\begin{eqnarray*}
&& P \Bigl( \sup_{0 \leq t \leq L}\bigl|Z_n^{(2,K)}(t)\bigr| \geq \epsilon \Bigr) \\ &&\qquad \leq2C_1 \rho_1 \bigl(c_na_n^{-1},
\infty\bigr) \int_E \biggl( \frac{1}{K} \max _{k=1,\ldots,nL} |f_k| \biggr)^{\alpha-\xi} + \biggl(
\frac{1}{K} \max_{k=1,\ldots,nL} |f_k| \biggr)^{\alpha+\xi} \,d\mu \\ &&\qquad \leq 2C_1 \rho_1 \bigl(c_n a_n^{-1},\infty\bigr) \biggl( \biggl( \frac {\parallel f \parallel}{K} \biggr)^{\alpha-\xi} + \biggl( \frac{\parallel f \parallel }{K} \biggr)^{\alpha+\xi} \biggr) \mu(\varphi\leq nL) \\ &&\qquad \leq C_2 \biggl( \biggl( \frac{\parallel f \parallel}{K} \biggr)^{\alpha-\xi} + \biggl( \frac{\parallel f \parallel}{K} \biggr)^{\alpha+\xi} \biggr) \end{eqnarray*}
and (\ref{elemma571}) follows.
It remains to consider the processes $\{ Z_n^{(1,K)}(t), 0 \leq t \leq L \}$, $n=1,2,\ldots$ for a fixed $K>0$. In the sequel, we drop the superscript $K$ for notational convenience. We will show that there exist $\gamma_1 > 1$ and $B>0$ such that for all $0 \leq s < t \leq L$, $n \geq1$, $t-s\geq1/n$ and $\lambda>0$,
\begin{equation}
P\bigl(\bigl|Z_n^{(1)}(t) - Z_n^{(1)}(s)\bigr| \geq\lambda\bigr) \leq \frac{B}{\lambda^2} (t-s)^{\gamma_1}.\label{eqconcisegoal} \end{equation}
Indeed, by Chebyshev's inequality and the fact that $f$ is supported by $A$, we see that
\begin{eqnarray*}
&& P \bigl(\bigl|Z_n^{(1)}(t)-Z_n^{(1)}(s)\bigr| \geq\lambda \bigr) \\ &&\qquad \leq \frac{1}{\lambda^2 c_n^2} E \Biggl\vert \sum _{k=1}^{\lceil nt \rceil-
\lceil ns \rceil} \mathop{\int\!\!\int}_{|x f_k| \leq K c_na_n^{-1}} x f_k \,d\overline{N}_1 \Biggr\vert ^2 \\ &&\qquad \leq\frac{2}{\lambda^2 c_n^2} \sum_{k=1}^{\lceil n(t-s) \rceil} \sum_{l=1}^{\lceil n(t-s) \rceil} \int_E
|f_k f_l| \int _0^{Kc_na_n^{-1} / |f_k| \vee|f_l|} x^2 \rho_1(dx) \,d\mu. \end{eqnarray*}
It follows from the Potter bounds and the fact that $\rho_1$ does not assign mass to the interval $(0,1)$ that for any $0<\xi<2-\alpha$ there is $C>0$ such that for all $a>0$ large enough and all $r>0$,
\[ \frac{\int_0^{ra}x^2 \rho_1(dx)}{\int_0^{a}x^2 \rho_1(dx)} \leq C \bigl( r^{2-\alpha-\xi} \vee r^{2-\alpha+\xi} \bigr). \]
Therefore, for all $n$ large enough, for some constant $C_1>0$,
\begin{eqnarray*}
&& P \bigl(\bigl|Z_n^{(1)}(t)-Z_n^{(1)}(s)\bigr| \geq\lambda \bigr) \\ &&\qquad \leq\frac{C_1}{\lambda^2 c_n^2} \sum_{k=1}^{\lceil n(t-s) \rceil} \sum_{l=1}^{\lceil n(t-s) \rceil} \int_E
\frac{|f_k f_l|}{(|f_k| \vee|f_l|)^{2-\alpha-\xi}} \,d\mu \int_0^{c_na_n^{-1}} x^2 \rho_1(dx) \\ &&\quad\qquad{} +\frac{C_1}{\lambda^2 c_n^2} \sum_{k=1}^{\lceil n(t-s) \rceil} \sum_{l=1}^{\lceil n(t-s) \rceil} \int_E
\frac{|f_k f_l|}{(|f_k| \vee|f_l|)^{2-\alpha+\xi}} \,d\mu \int_0^{c_na_n^{-1}} x^2 \rho_1(dx). \end{eqnarray*}
Note that by Karamata's theorem, (\ref{eqwanderingmu}) and the definition (\ref{edefc}) of the normalizing sequence $(c_n)$, there is $C_2>0$ such that
\[ \int_0^{c_na_n^{-1}} x^2 \rho_1(dx) \leq C_2 \frac{c_n^2}{na_n}. \]
If $1<\alpha<2$, we impose also the constraint $\xi<\alpha-1$, and use the relation
\begin{equation}
\label{eboundg1} \frac{|f_k f_l|}{(|f_k| \vee|f_l|)^{2-\alpha\pm\xi}} = \bigl( |f_k|\wedge|f_l|
\bigr) \bigl(|f_k| \vee |f_l| \bigr)^{\alpha-1\mp\xi}, \end{equation}
so that
\begin{eqnarray*} && \frac{1}{c_n^2} \sum_{k=1}^{\lceil n(t-s) \rceil} \sum _{l=1}^{\lceil n(t-s) \rceil} \int_E
\frac{|f_k f_l|}{(|f_k| \vee|f_l|)^{2-\alpha\pm\xi}} \,d\mu \int_0^{c_na_n^{-1}} x^2 \rho_1(dx) \\ &&\qquad \leq C_2\frac{1}{na_n} \sum_{k=1}^{\lceil n(t-s) \rceil} \sum_{l=1}^{\lceil n(t-s) \rceil} \int_E
\bigl( |f_k|\wedge|f_l| \bigr) \bigl(|f_k| \vee
|f_l| \bigr)^{\alpha-1\mp\xi} \,d\mu \\ &&\qquad \leq2C_2 \frac{1}{na_n} \\
&&\quad\qquad{}\times \biggl[ \bigl\lceil n(t-s) \bigr\rceil\int _E |f|^{\alpha\mp\xi} \,d\mu \\ &&\hspace*{50pt}{}+ \sum_{k=1}^{\lceil n(t-s) \rceil-1} \sum _{l=k+1}^{\lceil n(t-s) \rceil} \biggl( \int_E
|f_l| |f_k|^{\alpha
-1\mp\xi} \,d\mu + \int _E |f_k||f_l|^{\alpha-1\mp\xi} \,d \mu \biggr) \biggr] \\ &&\qquad := J_n(1) + J_n(2)+J_n(3). \end{eqnarray*}
The fact that $t-s\geq1/n$ and that $(a_n)$ is regularly varying with the positive exponent~$\beta$ shows that for any $1<\gamma_{1}<1+\beta$ there is some constant $C_3>0$ such that for all $n=1,2,\ldots,$
\[ J_n(1)\leq C_3(t-s)^{\gamma_{1}}. \]
Next, by the duality relation (\ref{edualrel}),
\begin{eqnarray*} J_n(2)&\leq&\frac{4C_2}{a_n}(t-s) \sum _{k=1}^{\lceil n(t-s) \rceil} \int_E
|f_k||f|^{\alpha-1\mp\xi } \,d\mu \\
&=& \frac{4C_2}{a_n}(t-s) \int_A |f| \Biggl(\sum _{k=1}^{\lceil n(t-s) \rceil} \widehat{T}{}^k
|f|^{\alpha-1\mp\xi} \Biggr) \,d\mu. \end{eqnarray*}
If $f$ is bounded, then by the Darling--Kac property of the set $A$ we have, for some constants $C_4, C_5>0$,
\[
J_n(2)\leq C_4(t-s)\frac{a_{\lceil n(t-s) \rceil}}{a_n} \mu\bigl(|f|\bigr) \leq C_5(t-s)^{\gamma_{1}},\qquad 1<\gamma_{1}<1+\beta \]
by the regular variation of $(a_n)$. If, on the other hand, $A$ is a uniform set for $|f|$, then we can write
\[ \sum_{k=1}^{\lceil n(t-s) \rceil} \widehat{T}{}^k
|f|^{\alpha-1\mp\xi} \leq \sum_{k=1}^{\lceil n(t-s) \rceil}
\widehat{T}{}^k \mathbf{1}_A + \sum _{k=1}^{\lceil n(t-s) \rceil} \widehat{T}{}^k |f| \]
and obtain the same bound on $J_n(2)$ by using both the Darling--Kac property and the uniform property of the set $A$. A similar argument shows that, for some constant $C_6>0$, we also have
\[ J_n(3)\leq C_6(t-s)^{\gamma_{1}},\qquad 1< \gamma_{1}<1+\beta, \]
which proves (\ref{eqconcisegoal}) in the case $1<\alpha<2$.
Finally, for $0< \alpha\leq1$ the same argument works, if we replace the relation (\ref{eboundg1}) by
\begin{eqnarray*}
\frac{|f_k f_l|}{(|f_k| \vee|f_l|)^{1+\xi}}&\leq& \bigl( |f_k|\wedge |f_l| \bigr)^{1-\xi}, \\
\frac{|f_k f_l|}{(|f_k| \vee
|f_l|)^{1-\xi}} &=& \bigl(|f_k|\wedge|f_l|
\bigr) \bigl(|f_k|\vee |f_l| \bigr)^\xi, \end{eqnarray*}
respectively, if $\alpha=1$, and
\[
\frac{|f_k f_l|}{(|f_k| \vee|f_l|)^{2-\alpha\mp\xi}} \leq \bigl(|f_k|\wedge|f_l| \bigr)^{\alpha\pm\xi} \]
if $0<\alpha<1$. This proves (\ref{eqconcisegoal}) in all cases, and hence, completes the proof of the theorem. \end{pf*}
\printaddresses
\end{document} | arXiv |
Euler discovered that the polynomial $p(n) = n^2 - n + 41$ yields prime numbers for many small positive integer values of $n$. What is the smallest positive integer $n$ for which $p(n)$ and $p(n+1)$ share a common factor greater than $1$?
We find that $p(n+1) = (n+1)^2 - (n+1) + 41 = n^2 + 2n + 1 - n - 1 + 41 = n^2 + n + 41$. By the Euclidean algorithm, \begin{align*} &\text{gcd}\,(p(n+1),p(n)) \\
&\qquad = \text{gcd}\,(n^2+n+41,n^2 - n+41) \\
&\qquad = \text{gcd}\,(n^2 + n + 41 - (n^2 - n + 41), n^2 - n + 41) \\
&\qquad = \text{gcd}\,(2n,n^2-n+41). \end{align*}Since $n^2$ and $n$ have the same parity (that is, they will both be even or both be odd), it follows that $n^2 - n + 41$ is odd. Thus, it suffices to evaluate $\text{gcd}\,(n,n^2 - n + 41) = \text{gcd}\,(n,n^2-n+41 - n(n-1)) = \text{gcd}\,(n,41)$. The smallest desired positive integer is then $n = \boxed{41}$.
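As a quick sanity check (a short script that is not part of the solution), one can confirm both that $n = 41$ is the smallest such integer and the primality remark that follows:

from math import gcd

p = lambda n: n * n - n + 41

def isprime(k):
    # trial division, enough for these small values
    return k > 1 and all(k % d for d in range(2, int(k**0.5) + 1))

n = 1
while gcd(p(n), p(n + 1)) == 1:
    n += 1
print(n)                                          # 41
print(all(isprime(p(m)) for m in range(1, 41)))   # True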
In fact, for all integers $n$ from $1$ through $40$, it turns out that $p(n)$ is a prime number. | Math Dataset |
What's the point of Pauli's Exclusion Principle if time and space are continuous?
What does the Pauli Exclusion Principle mean if time and space are continuous?
The question asserts that "if time and space are continuous then identical quantum states are impossible to begin with."
This assertion is just plainly false. A quantum state is not given by a location in time and space. The often used kets $\lvert x\rangle$ that are "position eigenstates" are not actually admissible quantum states since they are not normalized - they do not belong to the Hilbert space of states. Essentially by assumption, the space of states is separable, i.e. spanned by a countably infinite orthonormal basis.
Real particles are never completely localised in space (well except in the limit case of a completely undefined momentum), due to the uncertainty principle. Rather, they are necessarily in a superposition of a continuum of position and momentum eigenstates (a wave packet).
Pauli's Exclusion Principle asserts that they cannot be in the same exact quantum state, but a direct consequence of this is that they tend to also not be in similar states. This amounts to an effective repulsive effect between particles.
You can see this by remembering that to get a physical two-fermion wavefunction you have to antisymmetrize it. This means that if the two single wavefunctions are similar in a region, the total two-fermion wavefunction will have nearly zero probability amplitude in that region, thus resulting in an effective repulsive effect.
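Here is a minimal numerical sketch of this (the two Gaussian orbitals are an arbitrary illustrative choice, not something fixed by the argument): build the antisymmetrized two-particle state from two distinct one-particle wavefunctions and check that the joint density vanishes on the diagonal.

import numpy as np

# Two distinct normalized 1-d orbitals (arbitrary illustrative choice).
phi_a = lambda x: np.exp(-(x - 1.0) ** 2 / 2) / np.pi ** 0.25
phi_b = lambda x: np.exp(-(x + 1.0) ** 2 / 2) / np.pi ** 0.25

def psi(x1, x2):
    # Antisymmetrized (Slater-determinant) two-fermion wavefunction.
    return (phi_a(x1) * phi_b(x2) - phi_a(x2) * phi_b(x1)) / np.sqrt(2.0)

x = np.linspace(-4.0, 4.0, 401)
X1, X2 = np.meshgrid(x, x)
P = np.abs(psi(X1, X2)) ** 2

print(P[200, 200])   # on the diagonal x1 == x2: exactly 0
print(P.max())       # away from the diagonal: strictly positive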
Plotting this joint probability density (as in the numerical sketch above) makes the effect clear: for $x_1=x_2$ the probability vanishes, as an immediate consequence of Pauli's exclusion principle: you cannot find the two identical fermions in the same position state. But you also see that the closer $x_1$ is to $x_2$, the smaller the probability, as it must be since the wavefunction is continuous.
Addendum: Can the effect of Pauli's exclusion principle be thought of as a force in the conventional $F=ma$ sense?
The QM version of what is meant by force in the classical setting is an interaction mediated by some potential, like the electromagnetic interaction between electrons. This is in practice an additional term in the Hamiltonian of the system, which says that certain states (say, same charges very close together) correspond to high-energy states and are therefore harder to reach, and vice versa for low-energy states.
Pauli's exclusion principle is conceptually entirely different: it is not due to an increase of energy associated with identical fermions being close together, and there is no term in the Hamiltonian that mediates such an "interaction" (important caveat here: these "exchange forces" can be approximated to a certain degree as "regular" forces).
Rather, it comes from the inherently different statistics of many-fermion states: it is not that identical fermions cannot be in the same state/position because there is a repulsive force preventing it, but that there is no physical (many-body) state associated with them being in the same state/position. There simply isn't: it's not something compatible with the physical reality described by quantum mechanics. We naively think of such states because we are used to thinking classically and cannot really wrap our heads around what the concept of "identical particles" really means.
Ok, but what about things like degeneracy pressure then? In some circumstances, like in dying stars, Pauli's exclusion principle really seems to behave like a force in the conventional sense, counteracting the gravitational force and preventing white dwarfs from collapsing into a point. How do we reconcile the above-described "statistical effect" with this?
What I think is a good way of thinking about this is the following: you are trying to squish a lot of fermions into the same place. However, Pauli's principle dictates a vanishing probability of any pair of them occupying the same position.
The only way to reconcile these two things is that the position distribution of any fermion (say, the $i$-th fermion) must be extremely localised at a point (call it $x_i$), different from all the other points occupied by the other fermions. It is important to note that I just cheated for the sake of clarity here: you cannot talk of any fermion as having an individual identity; rather, every fermion will be very strictly confined in all the $x_i$ positions, provided that all the other fermions are not. The net effect of all this is that the properly antisymmetrized wavefunction of the whole system will be a superposition of lots of very sharp peaks in the high-dimensional position space. And it is at this point that Heisenberg's uncertainty comes into play: a very peaked distribution in position means a very broad distribution in momentum, which means very high energy, which means that the more you want to squish the fermions together, the more energy you need to provide (that is, classically speaking, the harder you have to "push" them together).
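Before summarizing, here is a crude order-of-magnitude sketch of that last step (the particle number and box size below are arbitrary illustrative values), using only the Heisenberg estimate $p \sim \hbar/\Delta x$:

hbar, m_e = 1.0546e-34, 9.109e-31   # SI units

def kinetic_energy_scale(N, L):
    # Pack N fermions into a box of side L: each is confined to a cell of
    # size dx ~ L / N**(1/3), so Heisenberg gives p ~ hbar / dx apiece.
    dx = L / N ** (1.0 / 3.0)
    p = hbar / dx
    return N * p ** 2 / (2.0 * m_e)  # total kinetic energy, order of magnitude

N, L = 1e27, 1e-2
for shrink in [1.0, 0.5, 0.25]:
    print(f"L = {shrink * L:.2e} m  ->  E ~ {kinetic_energy_scale(N, shrink * L):.2e} J")

Halving the box quadruples the kinetic energy budget, which is the "push back" described above.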
To summarize: due to Pauli's principle the fermions try so hard not to occupy the same positions that the resulting many-fermion wavefunction describing the joint probabilities becomes very peaked, greatly increasing the kinetic energy of the state and thus making such states "harder" to reach.
Here (and links therein) is another question discussing this point.
| CommonCrawl
Five points determine a conic
In Euclidean and projective geometry, five points determine a conic (a degree-2 plane curve), just as two (distinct) points determine a line (a degree-1 plane curve). There are additional subtleties for conics that do not exist for lines, and thus the statement and its proof for conics are both more technical than for lines.
Formally, given any five points in the plane in general linear position, meaning no three collinear, there is a unique conic passing through them, which will be non-degenerate; this is true over both the Euclidean plane and any pappian projective plane. Indeed, given any five points there is a conic passing through them, but if three of the points are collinear the conic will be degenerate (reducible, because it contains a line), and may not be unique; see further discussion.
Proofs
This result can be proven numerous different ways; the dimension counting argument is most direct, and generalizes to higher degree, while other proofs are special to conics.
Dimension counting
Intuitively, passing through five points in general linear position specifies five independent linear constraints on the (projective) linear space of conics, and hence specifies a unique conic, though this brief statement ignores subtleties.
More precisely, this is seen as follows:
• conics correspond to points in the five-dimensional projective space $\mathbf {P} ^{5};$
• requiring a conic to pass through a point imposes a linear condition on the coordinates: for a fixed $(x,y),$ the equation $Ax^{2}+Bxy+Cy^{2}+Dx+Ey+F=0$ is a linear equation in $(A,B,C,D,E,F);$
• by dimension counting, five constraints (that the curve passes through five points) are necessary to specify a conic, as each constraint cuts the dimension of possibilities by 1, and one starts with 5 dimensions;
• in 5 dimensions, the intersection of 5 (independent) hyperplanes is a single point (formally, by Bézout's theorem);
• general linear position of the points means that the constraints are independent, and thus do specify a unique conic;
• the resulting conic is non-degenerate because it is a curve (since it has more than 1 point), and does not contain a line (else it would split as two lines, at least one of which must contain 3 of the 5 points, by the pigeonhole principle), so it is irreducible.
The two subtleties in the above analysis are that the resulting point is a quadratic equation (not a linear equation), and that the constraints are independent. The first is simple: if A, B, and C all vanish, then the equation $Dx+Ey+F=0$ defines a line, and any 3 points on this (indeed any number of points) lie on a line – thus general linear position ensures a conic. The second, that the constraints are independent, is significantly subtler: it corresponds to the fact that given five points in general linear position in the plane, their images in $\mathbf {P} ^{5}$ under the Veronese map are in general linear position, which is true because the Veronese map is biregular: i.e., if the image of five points satisfy a relation, then the relation can be pulled back and the original points must also satisfy a relation. The Veronese map has coordinates $[x^{2}:xy:y^{2}:xz:yz:z^{2}],$ and the target $\mathbf {P} ^{5}$ is dual to the $[A:B:C:D:E:F]$ $\mathbf {P} ^{5}$ of conics. The Veronese map corresponds to "evaluation of a conic at a point", and the statement about independence of constraints is exactly a geometric statement about this map.
Synthetic proof
That five points determine a conic can be proven by synthetic geometry—i.e., in terms of lines and points in the plane—in addition to the analytic (algebraic) proof given above. Such a proof can be given using a theorem of Jakob Steiner,[1] which states:
Given a projective transformation f, between the pencil of lines passing through a point X and the pencil of lines passing through a point Y, the set C of intersection points between a line x and its image $f(x)$ forms a conic.
Note that X and Y are on this conic by considering the preimage and image of the line XY (which is respectively a line through X and a line through Y).
This can be shown by taking the points X and Y to the standard points $[1:0:0]$ and $[0:1:0]$ by a projective transformation, in which case the pencils of lines correspond to the horizontal and vertical lines in the plane, and the intersections of corresponding lines to the graph of a function, which (must be shown) is a hyperbola, hence a conic, hence the original curve C is a conic.
Now given five points X, Y, A, B, C, the three lines $XA,XB,XC$ can be taken to the three lines $YA,YB,YC$ by a unique projective transform, since projective transforms are simply 3-transitive on lines (they are simply 3-transitive on points, hence by projective duality they are 3-transitive on lines). Under this map X maps to Y, since these are the unique intersection points of these lines, and thus satisfy the hypothesis of Steiner’s theorem. The resulting conic thus contains all five points, and is the unique such conic, as desired.
Construction
Given five points, one can construct the conic containing them in various ways.
Analytically, given the coordinates $(x_{i},y_{i})_{i=1,2,3,4,5}$ of the five points, the equation for the conic can be found by linear algebra, by writing and solving the five equations in the coefficients, substituting the variables with the values of the coordinates: five equations, six unknowns, but homogeneous so scaling removes one dimension; concretely, setting one of the coefficients to 1 accomplishes this.
This can be achieved quite directly as the following determinantal equation:
$\det {\begin{bmatrix}x^{2}&xy&y^{2}&x&y&1\\x_{1}^{2}&x_{1}y_{1}&y_{1}^{2}&x_{1}&y_{1}&1\\x_{2}^{2}&x_{2}y_{2}&y_{2}^{2}&x_{2}&y_{2}&1\\x_{3}^{2}&x_{3}y_{3}&y_{3}^{2}&x_{3}&y_{3}&1\\x_{4}^{2}&x_{4}y_{4}&y_{4}^{2}&x_{4}&y_{4}&1\\x_{5}^{2}&x_{5}y_{5}&y_{5}^{2}&x_{5}&y_{5}&1\end{bmatrix}}=0$
This matrix has variables in its first row and numbers in all other rows, so the determinant is visibly a linear combination of the six monomials of degree at most 2. Also, the resulting polynomial clearly vanishes at the five input points (when $(x,y)=(x_{i},y_{i})$), as the matrix has then a repeated row.
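The following short script carries this out numerically (the five sample points are an arbitrary illustrative choice): each point contributes one row of the linear system, and the conic's coefficient vector is read off as the null space.

import numpy as np

# Five points, no three collinear (arbitrary illustrative choice).
pts = [(0.0, 1.0), (1.0, 0.0), (-1.0, 0.0), (2.0, 4.0), (-2.0, 1.5)]

# Each point (x, y) imposes one linear condition on (A, B, C, D, E, F).
M = np.array([[x * x, x * y, y * y, x, y, 1.0] for x, y in pts])

# The coefficient vector spans the one-dimensional null space of M:
# take the right singular vector belonging to the smallest singular value.
coef = np.linalg.svd(M)[2][-1]

print("A, B, C, D, E, F =", np.round(coef, 6))
print("max residual at the five points:", np.max(np.abs(M @ coef)))  # ~1e-16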
Synthetically, the conic can be constructed by the Braikenridge–Maclaurin construction,[2][3][4][5] by applying the Braikenridge–Maclaurin theorem, which is the converse of Pascal's theorem. Pascal's theorem states that given 6 points on a conic (a hexagon), the lines defined by opposite sides intersect in three collinear points. This can be reversed to construct the possible locations for a 6th point, given 5 existing ones.
Generalizations
The natural generalization is to ask for what value of k a configuration of k points (in general position) in n-space determines a variety of degree d and dimension m, which is a fundamental question in enumerative geometry.
A simple case of this is for a hypersurface (a codimension 1 subvariety, the zeros of a single polynomial, the case $m=n-1$), of which plane curves are an example.
In the case of a hypersurface, the answer is given in terms of the multiset coefficient, more familiarly the binomial coefficient, or more elegantly the rising factorial, as:
$k=\left(\!\!{n+1 \choose d}\!\!\right)-1={n+d \choose d}-1={\frac {1}{n!}}(d+1)^{(n)}-1.$
This is via the analogous analysis of the Veronese map: k points in general position impose k independent linear conditions on a variety (because the Veronese map is biregular), and the number of monomials of degree d in $n+1$ variables (n-dimensional projective space has $n+1$ homogeneous coordinates) is $\textstyle {\left(\!\!{n+1 \choose d}\!\!\right)},$ from which 1 is subtracted because of projectivization: multiplying a polynomial by a constant does not change its zeros.
In the above formula, the number of points k is a polynomial in d of degree n, with leading coefficient $1/n!$
In the case of plane curves, where $n=2,$ the formula becomes:
$\textstyle {\frac {1}{2}}(d+1)(d+2)-1=\textstyle {\frac {1}{2}}(d^{2}+3d)$
whose values for $d=0,1,2,3,4$ are $0,2,5,9,14$ – there are no curves of degree 0 (a single point is codimension 2 in the plane, hence not the zero set of a single polynomial equation), 2 points determine a line, 5 points determine a conic, 9 points determine a cubic, 14 points determine a quartic, and so forth.
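For instance (an illustrative snippet, not from the article), the count is immediate to evaluate:

from math import comb

# k = C(n + d, d) - 1 points in general position determine a
# degree-d hypersurface in n-dimensional projective space.
k = lambda n, d: comb(n + d, d) - 1

print([k(2, d) for d in range(5)])   # plane curves (n = 2): [0, 2, 5, 9, 14]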
Related results
While five points determine a conic, sets of six or more points on a conic are not in general position, that is, they are constrained as is demonstrated in Pascal's theorem.
Similarly, while nine points determine a cubic, if the nine points lie on more than one cubic—i.e., they are the intersection of two cubics—then they are not in general position, and indeed satisfy an addition constraint, as stated in the Cayley–Bacharach theorem.
Four points do not determine a conic, but rather a pencil, the 1-dimensional linear system of conics which all pass through the four points (formally, have the four points as base locus). Similarly, three points determine a 2-dimensional linear system (net), two points determine a 3-dimensional linear system (web), one point determines a 4-dimensional linear system, and zero points place no constraints on the 5-dimensional linear system of all conics.
As is well known, three non-collinear points determine a circle in Euclidean geometry and two distinct points determine a pencil of circles such as the Apollonian circles. These results seem to run counter to the general result, since circles are special cases of conics. However, in a pappian projective plane a conic is a circle only if it passes through two specific points on the line at infinity, so a circle is determined by five non-collinear points: three in the affine plane and these two special points. Similar considerations explain the smaller-than-expected number of points needed to define pencils of circles.
Tangency
Instead of passing through points, a different condition on a curve is being tangent to a given line. Being tangent to five given lines also determines a conic, by projective duality, but from the algebraic point of view tangency to a line is a quadratic constraint, so naive dimension counting yields 25 = 32 conics tangent to five given lines, of which 31 must be ascribed to degenerate conics, as described in fudge factors in enumerative geometry; formalizing this intuition requires significant further development to justify.
Another classic problem in enumerative geometry, of similar vintage to conics, is the Problem of Apollonius: a circle that is tangent to three circles in general determines eight circles, as each of these is a quadratic condition and 23 = 8. As a question in real geometry, a full analysis involves many special cases, and the actual number of circles may be any number between 0 and 8, except for 7.
See also
• Cramer's theorem (algebraic curves), for a generalization to n-th degree planar curves
References
1. J.C. Álvarez Paiva, Interactive Course on Projective Geometry, Chapter Five: The Projective Geometry of Conics, Section Four: Conics on the real projective plane; the proof follows Exercise 4.6
2. (Coxeter 1961, pp. 252–254)
3. The Animated Pascal, Sandra Lach Arlinghaus
4. Weisstein, Eric W. "Braikenridge-Maclaurin Construction." From MathWorld--A Wolfram Web Resource. http://mathworld.wolfram.com/Braikenridge-MaclaurinConstruction.html
5. The GNU 3DLDF Conic Sections Page: Pascal's Theorem and the Braikenridge-Maclaurin Construction, Laurence D. Finston
• Coxeter, H. S. M. (1961), Introduction to Geometry, Washington, DC
• Coxeter, H. S. M.; Greitzer, S. L. (1967), Geometry Revisited, Washington, DC: Mathematical Association of America, p. 76
• Dixon, A. C. (March 1908), "The Conic through Five Given Points", The Mathematical Gazette, The Mathematical Association, 4 (70): 228–230, doi:10.2307/3605147, JSTOR 3605147
External links
• Five Points Determine a Conic Section, Wolfram interactive demonstration
| Wikipedia |
\begin{document}
\title{On truncated logarithms of flows on a Riemannian manifold} \date{\today} \author{Bruce K. Driver}
\begin{abstract} This paper gives quantitative global estimates between a time dependent flow on a Riemannian manifold $\left( M\right) $ and the flow of a vector field constructed by truncating the formal Magnus expansion for the logarithm of the flow. As a corollary, we also find quantitative estimates between the composition of the flows of two given time independent vector fields on $M$ and the flow of a truncated version of the Baker-Campbell-Hausdorff-Dynkin expansion associated to the two given vector fields.
\end{abstract} \subjclass[2010]{34C40 (primary), 34A45, 53C20} \keywords{Magnus expansion; Strichartz formula, Baker-Campbell-Hausdorff-Dynkin formula; Free nilpotent Lie groups} \maketitle \tableofcontents
\section{Introduction}\label{sec.1}
For the purposes of this paper, let $M$ be a connected manifold without boundary, $\Gamma\left( TM\right) \ $be the space of smooth vector fields on $M,$ $g$ be a Riemannian metric on $M,$ $d=d_{g}$ be the induced length metric on $M$ (see Notation \ref{not.2.1}), $\nabla=\nabla^{g}$ be the associated Levi-Civita covariant derivative, and $R=R^{g}$ be the curvature tensor of $\nabla$ (see Definition \ref{def.2.5}).
\begin{definition} [Complete vector fields]\label{def.1.1}Let $J=\left[ 0,T\right] $ (or possibly some other interval) and $J\ni t\rightarrow Y_{t}\in\Gamma\left( TM\right) $ be a smoothly varying time dependent vector field. We say that $Y$ is \textbf{complete }provided for every $s\in J$ and $m\in M,$ there exists a solution, $\sigma:J\rightarrow M$ solving the ordinary differential equation (ODE for short), \begin{equation} \dot{\sigma}\left( t\right) =Y_{t}\left( \sigma\left( t\right) \right) \text{ with }\sigma\left( s\right) =m, \label{e.1.1} \end{equation} where $\dot{\sigma}\left( 0\right) $ and $\dot{\sigma}\left( T\right) $ are to be interpreted as the appropriate one-sided derivative. [See Corollary \ref{cor.2.12} below for some necessary conditions on $\left( M,g\right) $ and $Y_{\cdot}$ which imply that $Y$ is complete.] \end{definition}
\begin{definition} [Flows]\label{def.1.2}If $J=\left[ 0,T\right] \ni t\rightarrow Y_{t} \in\Gamma\left( TM\right) $ is a smoothly varying time dependent complete vector field, let $\mu_{t,s}^{Y}\left( m\right) :=\sigma\left( t\right) $ where $\sigma\left( \cdot\right) $ is the solution to Eq. (\ref{e.1.1}). Thus the \textbf{flow associated to }$Y,$ $\mu_{t,s}^{Y}:M\rightarrow M$ for $s,t\in J,$ satisfies, \[ \frac{d}{dt}\mu_{t,s}^{Y}\left( m\right) =Y_{t}\circ\mu_{t,s}^{Y}\text{ with }\mu_{s,s}^{Y}=Id_{M}. \] When $Y_{t}=Y$ is independent of $t,$ we denote $\mu_{t,0}^{Y}$ by $e^{tY}$ so that $\mu_{t,s}^{Y}=e^{\left( t-s\right) Y}$ for all $s,t\in\mathbb{R}.$ \end{definition}
The \textbf{logarithm problem} in this context is the question of finding vector fields, $Z_{t}\in\Gamma\left( TM\right) ,$ so that $\mu_{t,0} ^{Y}=e^{Z_{t}}.$ This problem seems to have first been formally studied by Magnus \cite{Magnus1954} in the context of linear differential equations although the special case encoded in the Baker-Campbell-Hausdorff-Dynkin formula is much older. For a short derivation of Magnus' result see \cite{Arnal2018}, and for an extensive review of the Magnus expansion and its many generalizations and applications see the survey article, \cite{Blanes2009}. Although it is not within the author's ability to give a systematic review of the many uses of Magnus' idea, in order to understand the breadth of applications let me give a sampling of references involving quantum physics, control theory, geometric numerical integration, and stochastic analysis. An early reference in quantum physics is \cite{Wilcox1967}. An example in control theory is \cite{Kawski2000} who is discussing taking logarithms of the Neumann-Dyson series and their non-linear extensions described by K.T. Chen \cite{ChenKT1957a,ChenKT1958a} and Fliess \cite{Fliess1981}. The following references, \cite{Cai2016,DAmbrosio2014,Hairer2003,Hochbruck2010,Lundervold2011a,Lundervold2015a,Munthe-Kaas1995} along with the survey articles \cite{Budd1999,Iserles2011} and the monographs \cite{Hairer2006c,Blanes2016}, give only a sporadic sampling of the extensive literature in geometric numerical integration theory. For references from stochastic analysis, see \cite{BenArous1989,BenArous1988,Castell1993,Castell1996,Inahama2010a,Inahama2017,Takanobu1988,Takanobu1990} which pertain to approximating stochastic flows and see \cite{Sussmann1986,Sussmann1988} for a couple of examples involving stochastic control theory. Hopefully the reader sees from this sampling of references how ubiquitous the \textquotedblleft logarithm problem\textquotedblright\ has become.
A key starting point for this paper and many of the references above is Strichartz's \cite[Eq. (G.C-B-H-D)]{Strichartz1987a} (and also see \cite{Bialynicki-Birula1969,Mielnik1970}) formal series solution, \begin{equation} Z_{t}\sim\sum_{m=1}^{\infty}\sum_{\sigma\in P_{m}}\left( \frac{\left( -1\right) ^{e\left( \sigma\right) }}{m^{2}\binom{m-1}{e\left( \sigma\right) }}\right) \int_{\Delta_{m}\left( t\right) }\left( -1\right) ^{m}\operatorname{ad}_{Y_{\tau_{\sigma\left( m\right) }}} \dots\operatorname{ad}_{Y_{\tau_{\sigma\left( 2\right) }}}Y_{\tau_{\sigma\left( 1\right) }}d\mathbf{\tau,}\label{e.1.2} \end{equation} to the logarithm problem, i.e. $Z_{t}\in\Gamma\left( TM\right) $ \textquotedblleft represented\textquotedblright\ by the series above formally solves $\mu_{t,0}^{Y}=e^{Z_{t}}.$ In Eq. (\ref{e.1.2}), $P_{m}$ is the set of permutations of $\left\{ 1,2,\dots,m\right\} ,$ \[ \Delta_{m}\left( t\right) =\left\{ 0\leq\tau_{1}\leq\tau_{2}\leq\dots \leq\tau_{m}\leq t\right\} ,\text{ and} \] \[ e\left( \sigma\right) :=\#\left\{ j<m:\sigma\left( j\right) >\sigma\left( j+1\right) \right\} \] is the number of \textquotedblleft errors\textquotedblright\ in the ordering of $\sigma\left( 1\right) ,\dots,\sigma\left( m\right) .$
If $M$ is a Lie group and $Y_{t}$ is a family of left invariant vector fields on $M$ then the expansion in Eq. (\ref{e.1.2}) will converge when $t$ is sufficiently close to $0$ and the resulting sum will solve the logarithm problem in this context. There is a rather vast literature exploring when such series expansions actually converge, see for example \cite{Moan2008,Biagi2014,Lakos2017,Curry2018} just to give a very thin sample. In the general context of arbitrary time dependent vector fields, the expansion in Eq. (\ref{e.1.2}) will typically not converge. In this paper, our goal is not to discuss convergence of the series but rather to estimate the errors made by truncating the expansion in Eq. (\ref{e.1.2}).
For $n\in\mathbb{N},$ let $Z_{t}^{\left( n\right) }$ denote the series expansion in Eq. (\ref{e.1.2}) where the first sum, $\sum_{m=1}^{\infty},$ is truncated to $\sum_{m=1}^{n}.$ Roughly speaking, the main object of this paper is (under certain added hypothesis on $Y_{t})$ to estimate the distance between $\mu_{t,0}^{Y}$ and $e^{Z_{t}^{\left( n\right) }}.$ The rest of this introduction will be devoted to summarizing the main results of this paper and in particular to stating Theorems \ref{thm.1.30} and \ref{thm.1.32} below.
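To fix ideas before stating the main results, the following minimal numerical sketch (not part of the results of this paper; the matrices $A,B$ are an arbitrary non-commuting choice) illustrates the truncation in the simplest setting of linear vector fields on $\mathbb{R}^{2},$ where the flow is matrix valued and $e^{Z}$ is the matrix exponential. We use the standard matrix Magnus terms, $Z_{t}^{\left( 1\right) }=\int_{0}^{t}Y_{s}ds$ and $Z_{t}^{\left( 2\right) }=Z_{t}^{\left( 1\right) }+\frac{1}{2}\int_{0}^{t}\int_{0}^{s}\left[ Y_{s},Y_{r}\right] \,dr\,ds,$ which agree with the truncations of Eq. (\ref{e.1.2}) up to the sign conventions used there; for $Y_{t}=A+tB$ one finds $Z_{t}^{\left( 2\right) }=tA+\frac{t^{2}}{2}B-\frac{t^{3}}{12}\left[ A,B\right] .$ The printed errors decay roughly one order in $t$ faster when the second term is kept.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Arbitrary non-commuting generators (illustrative choice only).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

def Y(t):          # the time dependent (linear) vector field Y_t = A + t B
    return A + t * B

def true_flow(t):  # solve d/ds M(s) = Y_s M(s), M(0) = I
    rhs = lambda s, m: (Y(s) @ m.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, (0.0, t), np.eye(2).ravel(), rtol=1e-12, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

comm = lambda X, Z: X @ Z - Z @ X

for t in [0.4, 0.2, 0.1]:
    Z1 = t * A + 0.5 * t**2 * B                 # first Magnus term
    Z2 = Z1 - (t**3 / 12.0) * comm(A, B)        # ... plus the second term
    M = true_flow(t)
    print(t, np.linalg.norm(M - expm(Z1)), np.linalg.norm(M - expm(Z2)))
\end{verbatim}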
\subsection{Basic flow estimates\label{sec.1.1}}
\begin{notation} \label{not.1.3}If $\rho_{m}\geq0$ for all $m\in M,$ we let \[ \rho_{M}:=\sup_{m\in M}\rho_{m}. \] If $J$ is a compact subinterval of $\mathbb{R}$ and $M\times J\ni\left( m,t\right) \rightarrow\rho_{m}\left( t\right) \geq0$ is a continuous function we let, \[ \left\vert \rho\right\vert _{J}^{\ast}:=\int_{J}\rho_{M}\left( t\right) dt=\int_{J}\sup_{m\in M}\rho_{m}\left( t\right) dt. \] When $J=\left[ 0,t\right] $ for some $t>0$ we will simply write $\left\vert \rho\right\vert _{t}^{\ast}$ for $\left\vert \rho\right\vert _{\left[ 0,t\right] }^{\ast}.$ Note that if $J\ni t\rightarrow\rho\left( t\right) \geq0$ does not depend on $m\in M,$ then \[ \left\vert \rho\right\vert _{J}^{\ast}=\int_{J}\rho\left( t\right) dt=\left\Vert \rho\right\Vert _{L^{1}\left( J,m\right) } \] where, by a slight abuse of notation, $m$ denotes Lebesgue measure on $\mathbb{R}.$ \end{notation}
\begin{notation} \label{not.1.4}For $X\in\Gamma\left( TM\right) ,$ $m\in M,$ and $v,w\in T_{m}M,$ let \[ \nabla_{v\otimes w}^{2}X:=\nabla_{v}\left( \nabla_{W}X\right) -\nabla _{\nabla_{v}W}X \] where $W\in\Gamma\left( TM\right) $ is chosen so that $W\left( m\right) =w.$ \end{notation}
See Definition \ref{def.2.5} and Remark \ref{rem.2.6} below for an alternative but equivalent definition of $\nabla^{2}X$ as well as the verification that $\nabla_{v\otimes w}^{2}X$ is a well-defined bilinear form on $T_{m}M\times T_{m}M.$
\begin{notation} [Tensor Norms]\label{not.1.5}If $X\in\Gamma\left( TM\right) $ and $m\in M,$ let \begin{align*} \left\vert X\right\vert _{m} & :=\left\vert X\left( m\right) \right\vert _{g}\\ \left\vert \nabla X\right\vert _{m} & :=\sup_{\left\vert v_{m}\right\vert =1}\left\vert \nabla_{v_{m}}X\right\vert _{g},\\ \left\vert \nabla^{2}X\right\vert _{m} & =\sup_{\left\vert v_{m}\right\vert =1=\left\vert w_{m}\right\vert }\left\vert \nabla_{v_{m}\otimes w_{m}} ^{2}X\right\vert _{g},\\ \left\vert R\left( X,\bullet\right) \right\vert _{m} & :=\sup_{\left\vert v_{m}\right\vert =1=\left\vert w_{m}\right\vert }\left\vert R\left( X\left( m\right) ,v_{m}\right) w_{m}\right\vert _{g},\text{ and}\\ H_{m}\left( X\right) & :=\left\vert \nabla^{2}X\right\vert _{m}+\left\vert R\left( X,\bullet\right) \right\vert _{m}. \end{align*}
\end{notation}
Let us give a few examples of how this notation will be used.
\begin{example} \label{ex.1.6}Suppose that $J\ni t\rightarrow X_{t}\in\Gamma\left( TM\right) $ is a continuously varying time dependent vector field, then \begin{align*} \left\vert X_{t}\right\vert _{M} & :=\sup_{m\in M}\left\vert X_{t} \right\vert _{m},\text{\quad\ }\left\vert X_{\cdot}\right\vert _{J}^{\ast }:=\int_{J}\sup_{m\in M}\left\vert X_{t}\right\vert _{m}dt,\\ \left\vert \nabla^{2}X_{t}\right\vert _{M} & :=\sup_{m\in M}\left\vert \nabla^{2}X_{t}\right\vert _{m},\text{\quad\ }\left\vert \nabla^{2}X_{\cdot }\right\vert _{J}^{\ast}:=\int_{J}\sup_{m\in M}\left\vert \nabla^{2} X_{t}\right\vert _{m}dt,\\ H_{M}\left( X_{t}\right) & =\sup_{m\in M}\left[ \left\vert \nabla ^{2}X_{t}\right\vert _{m}+\left\vert R\left( X_{t},\bullet\right) \right\vert _{m}\right] \leq\left\vert \nabla^{2}X_{t}\right\vert _{M}+\left\vert R\left( X_{t},\bullet\right) \right\vert _{M}, \end{align*} and \[ H\left( X_{\cdot}\right) _{J}^{\ast}=\int_{J}\sup_{m\in M}H_{m}\left( X_{t}\right) dt\leq\left\vert \nabla^{2}X_{\cdot}\right\vert _{J}^{\ast }+\left\vert R\left( X_{\cdot},\cdot\right) \right\vert _{J}^{\ast}. \]
\end{example}
The next theorem is a combination of Theorem \ref{thm.2.30} and Corollary \ref{cor.2.31} below.
\begin{theorem} \label{thm.1.7}Let $J=\left[ 0,T\right] \ni t\rightarrow X_{t},Y_{t} \in\Gamma\left( TM\right) $ be two smooth complete time dependent vector fields on $M$ and let $\mu^{X}$ and $\mu^{Y}$ be their corresponding flows. Then for $m\in M$ and $t>0$ (for notational simplicity) we have the following estimates, \begin{align*} d\left( \mu_{t,0}^{X}\left( m\right) ,\mu_{t,0}^{Y}\left( m\right) \right) & \leq\int_{0}^{t}e^{\int_{s}^{t}\left\vert \nabla X_{\sigma }\right\vert _{\mu_{\sigma,s}^{X}\left( m\right) }d\sigma}\cdot\left\vert Y_{s}-X_{s}\right\vert _{\mu_{s,0}^{Y}\left( m\right) }~ds\\ & \leq e^{\left\vert \nabla X_{\cdot}\right\vert _{t}^{\ast}}\left\vert Y_{\cdot}-X_{\cdot}\right\vert _{t}^{\ast}, \end{align*} \begin{align*} d\left( \mu_{t,0}^{Y}\left( m\right) ,m\right) & \leq\int_{0} ^{t}\left\vert Y_{s}\right\vert _{\mu_{s,0}^{Y}\left( m\right) } ~ds\leq\left\vert Y\right\vert _{t}^{\ast},\text{ and}\\ d\left( \mu_{t,0}^{Y}\left( m\right) ,m\right) & \leq\int_{0}^{t} e^{\int_{s}^{t}\left\vert \nabla Y_{\sigma}\right\vert _{\mu_{\sigma,s} ^{Y}\left( m\right) }d\sigma}\cdot\left\vert Y_{s}\right\vert _{m}~ds\leq e^{\left\vert \nabla Y\right\vert _{t}^{\ast}}\int_{0}^{t}\left\vert Y_{s}\left( m\right) \right\vert ds. \end{align*}
\end{theorem}
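As a quick sanity check on the first estimate in Theorem \ref{thm.1.7}, take $M=\mathbb{R}^{n}$ with its flat metric and let $X_{t}\equiv a$ and $Y_{t}\equiv b$ be constant vector fields, so that $\mu_{t,0}^{X}\left( m\right) =m+ta$ and $\mu_{t,0}^{Y}\left( m\right) =m+tb.$ In this case $\nabla X\equiv0$ and the estimate reads \[ d\left( \mu_{t,0}^{X}\left( m\right) ,\mu_{t,0}^{Y}\left( m\right) \right) =t\left\vert b-a\right\vert \leq\int_{0}^{t}e^{0}\left\vert b-a\right\vert ~ds=t\left\vert b-a\right\vert , \] i.e. the bound is attained in this (admittedly trivial) example.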
We are also interested in estimating the distance between the differentials, $\left( \mu_{t,0}^{X}\right) _{\ast}$ and $\left( \mu_{t,0}^{Y}\right) _{\ast},$ of $\mu_{t,0}^{X}$ and $\mu_{t,0}^{Y}.$ To do so we endow $TM$ with its \textquotedblleft natural\textquotedblright\ Riemannian metric induced from the Riemannian metric, $g,$ on $M$ (see Definition \ref{def.5.1} of Section \ref{sec.5} below) and let $d^{TM}$ be the induced length metric on $TM.$ The next theorem is a combination of Theorem \ref{thm.7.2} and Corollary \ref{cor.7.3} below.
\begin{theorem} \label{thm.1.8}If $J=\left[ 0,T\right] \ni t\rightarrow X_{t},Y_{t}\in \Gamma\left( TM\right) $ are smooth complete (see Definition \ref{def.1.1}) time dependent vector fields on $M$ and $\mu^{X}$ and $\mu^{Y}$ are their corresponding flows, then \begin{align*} \sup_{v\in TM:\left\vert v\right\vert =1}d^{TM} & \left( \left( \mu _{t,0}^{X}\right) _{\ast}v,\left( \mu_{t,0}^{Y}\right) _{\ast}v\right) \\ & \leq e^{2\left\vert \nabla X\right\vert _{t}^{\ast}+\left\vert \nabla Y\right\vert _{t}^{\ast}}\cdot\left( \left( 1+H\left( X_{\cdot}\right) _{t}^{\ast}\right) \left\vert Y-X\right\vert _{t}^{\ast}+\left\vert \nabla\left[ Y-X\right] \right\vert _{t}^{\ast}\right) , \end{align*} and \[ \sup_{v\in TM:\left\vert v\right\vert =1}d^{TM}\left( \left( \mu_{t,0} ^{Y}\right) _{\ast}v,v\right) \leq e^{\left\vert \nabla Y\right\vert _{t}^{\ast}}\cdot\left( \left\vert Y\right\vert _{t}^{\ast}+\left\vert \nabla Y\right\vert _{t}^{\ast}\right) . \]
\end{theorem}
The next proposition starts to indicate how the two previous theorems fit into the logarithm approximation theorem.
\begin{proposition} \label{pro.1.9}Suppose that $\left[ 0,T\right] \ni t\rightarrow X_{t} \in\Gamma\left( TM\right) $ is a complete time dependent vector field and $\left[ 0,T\right] \ni t\rightarrow Z_{t}\in\Gamma\left( TM\right) $ is another time dependent vector field such that $Z_{t}$ is complete for each fixed $t$ and $Z_{0}\equiv0.$ Then \begin{equation} d\left( \mu_{t,0}^{X}\left( m\right) ,e^{Z_{t}}\left( m\right) \right) \leq\int_{0}^{t}e^{\int_{s}^{t}\left\vert \nabla X_{\sigma}\right\vert _{\mu_{\sigma,s}^{X}\left( m\right) }d\sigma}\cdot\left\vert W_{s}^{Z} -X_{s}\right\vert _{e^{Z_{s}}\left( m\right) }~ds \label{e.1.3} \end{equation} where \[ W_{t}^{Z}:=\int_{0}^{1}e_{\ast}^{sZ_{t}}\dot{Z}_{t}\circ e^{-sZ_{t}} ds=\int_{0}^{1}\operatorname{Ad}_{e^{sZ_{t}}}\dot{Z}_{t}~ds\in\Gamma\left( TM\right) . \]
\end{proposition}
\begin{proof} By Corollary \ref{cor.2.24} below, which states, \[ \frac{d}{dt}e^{Z_{t}}=W_{t}^{Z}\circ e^{Z_{t}}\text{ with }e^{Z_{0}}=Id \] and so $\mu_{t,0}^{W^{Z}}=e^{Z_{t}}$ for all $t\in\left[ 0,T\right] .$ Thus the estimate in Eq. (\ref{e.1.3}) follows by applying Theorem \ref{thm.1.7} with $Y_{t}=W_{t}^{Z}.$ \end{proof}
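For a simple consistency check of Proposition \ref{pro.1.9}, suppose $Z_{t}=tX$ for a fixed complete time independent $X\in\Gamma\left( TM\right) .$ Since $\operatorname{Ad}_{e^{sZ_{t}}}\dot{Z}_{t}=\operatorname{Ad}_{e^{stX}}X=X$ (the flow of $X$ carries $X$ to itself), we find $W_{t}^{Z}=\int_{0}^{1}X~ds=X.$ Thus if $X_{s}\equiv X$ as well, the integrand $\left\vert W_{s}^{Z}-X_{s}\right\vert $ in Eq. (\ref{e.1.3}) vanishes, in agreement with the elementary identity $\mu_{t,0}^{X}=e^{tX}=e^{Z_{t}}.$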
Because of Proposition \ref{pro.1.9}, in order to find good approximate logarithms for the flow, $\mu^{X},$ we should choose $Z_{t}\in\Gamma\left( TM\right) $ so that $Z_{0}=0$ and $\left\vert W_{s}^{Z}-X_{s}\right\vert _{e^{Z_{s}}\left( m\right) }$ is small. Ideally we would like to choose $Z$ so that $W_{s}^{Z}=X_{s}$ but this is not possible in general. However, formally solving the equation $W_{s}^{Z}=X_{s}$ for $Z$ would lead to the expansion in Eq. (\ref{e.1.2}). In order to get precise estimates we are now going to make more assumptions (in the spirit of control theory) on what we allow for our choice of $X_{t}.$ These additional assumptions and necessary notations will be explained in the next subsection.
\subsection{Free nilpotent Lie groups and dynamical systems\label{sec.1.2}}
\begin{definition} [Tensor Algebras]\label{def.1.10}Let $T\left( \mathbb{R}^{d}\right) :=\oplus_{k=0}^{\infty}\left[ \mathbb{R}^{d}\right] ^{\otimes k}$ be the tensor algebra over $\mathbb{R}^{d}$ so that a general element $\omega\in T\left( \mathbb{R}^{d}\right) $ is of the form \[ \omega=\sum_{k=0}^{\infty}\omega_{k}\text{ with }\omega_{k}\in\left( \mathbb{R}^{d}\right) ^{\otimes k}\text{ for }k\in\mathbb{N}_{0} \] where we assume $\omega_{k}=0$ for all but finitely many $k.$ Multiplication is the tensor product and associated to this multiplication is the Lie bracket, \begin{equation} \left[ A,B\right] _{\otimes}:=A\otimes B-B\otimes A\text{ for all }A,B\in T\left( \mathbb{R}^{d}\right) . \label{e.1.4} \end{equation}
\end{definition}
\begin{definition} [Free Lie Algebra]\label{not.1.11}The \textbf{free Lie algebra over }$\mathbb{R}^{d}$ will be taken to be the Lie-subalgebra, $F\left( \mathbb{R}^{d}\right) ,$ of $\left( T\left( \mathbb{R}^{d}\right) ,\left[ \cdot,\cdot\right] _{\otimes}\right) $ generated by $\mathbb{R}^{d}.$ \end{definition}
\begin{remark} \label{rem.1.12}If $\left( \mathfrak{g},\left[ \cdot,\cdot\right] \right) $ is a Lie algebra and $V\subset\mathfrak{g}$ is a subspace, then using Jacobi's identity one easily shows that the Lie sub-algebra, $\operatorname*{Lie}\left( V\right) ,$ of $\mathfrak{g}$ generated by $V$ may be described as; \[ \operatorname*{Lie}\left( V\right) =\operatorname*{span}\cup_{k=1}^{\infty }\left\{ \operatorname{ad}_{v_{1}}\dots\operatorname{ad}_{v_{k-1}}v_{k} :v_{1},\dots,v_{k}\in V\right\} , \] where $\operatorname{ad}_{A}B:=\left[ A,B\right] $ for all $A,B\in \mathfrak{g}.$ As a consequence of this remark it follows that $F\left( \mathbb{R}^{d}\right) $ is an $\mathbb{N}$-graded algebra with \[ F\left( \mathbb{R}^{d}\right) =\oplus_{k=1}^{\infty}F_{k}\left( \mathbb{R}^{d}\right) \text{ where }F_{k}\left( \mathbb{R}^{d}\right) =F\left( \mathbb{R}^{d}\right) \cap\left[ \mathbb{R}^{d}\right] ^{\otimes k}\subset F\left( \mathbb{R}^{d}\right) . \] According to this grading, if $A\in F\left( \mathbb{R}^{d}\right) $ we let $A_{k}\in F_{k}\left( \mathbb{R}^{d}\right) $ denote the projection of $A$ into $F_{k}\left( \mathbb{R}^{d}\right) .$ \end{remark}
See \cite{ReutenauerBook} for general background information on free Lie algebras. The spaces $T\left( \mathbb{R}^{d}\right) $ and $F\left( \mathbb{R}^{d}\right) $ are infinite dimensional. We are going to be most interested in the finite dimensional truncated versions of these algebras.
\begin{definition} [Truncated Tensor Algebras]\label{def.1.13}Given $\kappa\in\mathbb{N},$ let \[ T^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) :=\oplus _{k=0}^{\kappa}\left[ \mathbb{R}^{d}\right] ^{\otimes k}\subset T\left( \mathbb{R}^{d}\right) \] which is an algebra under the multiplication rule, \[ AB=\sum_{k=0}^{\kappa}\left( AB\right) _{k}=\sum_{k=0}^{\kappa}\sum _{j=0}^{k}A_{j}\otimes B_{k-j}~\text{ }\forall~A,B\in T^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \] and a Lie algebra under the bracket operation, $\left[ A,B\right] :=AB-BA$ for all $A,B\in T^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) .$ \end{definition}
\begin{notation} \label{not.1.14}Let $\pi_{\leq\kappa}:T\left( \mathbb{R}^{d}\right) \rightarrow T^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) $ and $\pi_{>\kappa}:=I_{T\left( \mathbb{R}^{d}\right) }-\pi_{\leq\kappa}:T\left( \mathbb{R}^{d}\right) \rightarrow\oplus_{k=\kappa+1}^{\infty}\left[ \mathbb{R}^{d}\right] ^{\otimes k}$ be the projections associated to the direct sum decomposition, \[ T\left( \mathbb{R}^{d}\right) =T^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \oplus\left( \oplus_{k=\kappa+1}^{\infty}\left[ \mathbb{R}^{d}\right] ^{\otimes k}\right) . \] Further let \begin{equation} \mathfrak{g}^{\left( \kappa\right) }=\oplus_{k=1}^{\kappa}\left[ \mathbb{R}^{d}\right] ^{\otimes k} \label{e.1.6} \end{equation} which is a two sided ideal as well as a Lie sub-algebra of $T^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) .$ \end{notation}
With this notation the multiplication and Lie bracket on $T^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) $ may be described as, \[ AB=\pi_{\leq\kappa}\left( A\otimes B\right) \text{ and }\left[ A,B\right] =\pi_{\leq\kappa}\left[ A,B\right] _{\otimes}. \]
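For instance, if $d=2,$ $\kappa=2,$ and $e_{1},e_{2}$ is the standard basis of $\mathbb{R}^{2},$ then in $T^{\left( 2\right) }\left( \mathbb{R}^{2}\right) ,$ \[ \left( 1+e_{1}\right) \left( 1+e_{2}\right) =1+e_{1}+e_{2}+e_{1}\otimes e_{2}\text{ and }\left[ e_{1},e_{2}\right] =e_{1}\otimes e_{2}-e_{2}\otimes e_{1}, \] while any product of three or more of the $e_{i}$'s is truncated to $0.$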
\begin{notation} [Induced Inner product]\label{not.1.15}The usual dot product on $\mathbb{R} ^{d}$ induces an inner product, $\left\langle \cdot,\cdot\right\rangle ,$ on $T^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) $ uniquely determined by requiring $T^{\left( \kappa\right) }\left( \mathbb{R} ^{d}\right) :=\oplus_{k=0}^{\kappa}\left[ \mathbb{R}^{d}\right] ^{\otimes k}$ to be an orthogonal direct sum decomposition, $\left\langle 1,1\right\rangle =1$ for $1\in\left[ \mathbb{R}^{d}\right] ^{\otimes0},$ and \[ \left\langle v_{1}v_{2}\dots v_{k},w_{1}w_{2}\dots w_{k}\right\rangle =\left\langle v_{1},w_{1}\right\rangle \left\langle v_{2},w_{2}\right\rangle \dots\left\langle v_{k},w_{k}\right\rangle \] for any $v_{j},w_{j}\in\mathbb{R}^{d}$ and $1\leq k\leq\kappa.$ We let $\left\vert A\right\vert :=\sqrt{\left\langle A,A\right\rangle }$ denote the associated Hilbertian norm of $A\in T^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) .$ \end{notation}
Often, it turns out to be more convenient (see Proposition \ref{pro.3.24} below) to measure the size of $A\in\mathfrak{g}^{\left( \kappa\right) }$ using the following \textquotedblleft homogeneous norms.\textquotedblright\
\begin{definition} [Homogeneous norms]\label{def.1.16}For $A\in\mathfrak{g}^{\left( \kappa\right) }\subset T^{\left( \kappa\right) }\left( \mathbb{R} ^{d}\right) ,$ let \[ N\left( A\right) :=\max_{1\leq k\leq\kappa}\left\vert A_{k}\right\vert ^{1/k} \] and for $f\in C\left( \left[ 0,t\right] ,\mathfrak{g}^{\left( \kappa\right) }\right) $ let \[ N_{t}^{\ast}\left( f\right) :=\max_{1\leq k\leq\kappa}\left( \left\vert f_{k}\right\vert _{t}^{\ast}\right) ^{1/k}=\max_{1\leq k\leq\kappa}\left( \int_{0} ^{t}\left\vert f_{k}\left( \tau\right) \right\vert d\tau\right) ^{1/k} \] be the \textbf{homogeneous }$L^{1}$\textbf{-norm of }$f$\textbf{. [}Note that $N\left( A\right) $ is the best constant such that $\left\vert A_{k}\right\vert \leq N\left( A\right) ^{k}$ for $1\leq k\leq\kappa.]$ \end{definition}
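For example (purely for illustration), if $\kappa=2,$ $d=2,$ and $A=e_{1}+3e_{1}\otimes e_{2}\in\mathfrak{g}^{\left( 2\right) },$ then \[ N\left( A\right) =\max\left\{ \left\vert e_{1}\right\vert ,\left\vert 3e_{1}\otimes e_{2}\right\vert ^{1/2}\right\} =\max\left\{ 1,\sqrt{3}\right\} =\sqrt{3}, \] and indeed $\left\vert A_{1}\right\vert \leq N\left( A\right) $ and $\left\vert A_{2}\right\vert \leq N\left( A\right) ^{2}.$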
Let us observe that for $t\in\mathbb{R},$ \begin{equation} N\left( tA\right) =\max_{1\leq k\leq\kappa}\left[ \left\vert t\right\vert ^{1/k}\left\vert A_{k}\right\vert ^{1/k}\right] \leq\max_{1\leq k\leq\kappa }\left[ \left\vert t\right\vert ^{1/k}\right] \cdot N\left( A\right) \leq\left( 1\vee\left\vert t\right\vert \right) \cdot N\left( A\right) \label{e.1.7} \end{equation} and if $\delta_{t}:T^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \rightarrow T^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) $ is the dilation operator defined by $\delta_{t}\left( A\right) =\sum_{k=0}^{\kappa }t^{k}A_{k},$ then \begin{equation} N\left( \delta_{t}A\right) =\max_{1\leq k\leq\kappa}\left[ \left\vert t^{k}A_{k}\right\vert ^{1/k}\right] =\left\vert t\right\vert N\left( A\right) . \label{e.1.8} \end{equation}
\begin{definition} [Free Nilpotent Lie Algebra]\label{not.1.17}The \textbf{step }$\kappa$\textbf{ free nilpotent Lie algebra} on $\mathbb{R}^{d}$ may then be realized as the Lie sub-algebra, $F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) ,$ of $\mathfrak{g}^{\left( \kappa\right) }$ generated by $\mathbb{R} ^{d}\subset T^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) .$ \end{definition}
Again, a simple consequence of Remark \ref{rem.1.12} is that, as vector spaces, $F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) =\pi _{\leq\kappa}\left( F\left( \mathbb{R}^{d}\right) \right) $ and $F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) $ is graded as \[ F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) =\oplus _{k=1}^{\kappa}F_{k}^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \] where \[ F_{k}^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) :=F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \cap\left[ \mathbb{R} ^{d}\right] ^{\otimes k}\subset F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \text{ for }1\leq k\leq\kappa. \]
The set, \begin{equation} G^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) :=1+\mathfrak{g} ^{\left( \kappa\right) }\subset T^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) , \label{e.1.9} \end{equation} forms a group under the multiplication rule of $T^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) $ which is a Lie group with Lie algebra, $\operatorname*{Lie}\left( G^{\left( \kappa\right) }\right) =\mathfrak{g} ^{\left( \kappa\right) }.$ Moreover, the exponential map, \[ \mathfrak{g}^{\left( \kappa\right) }\ni\xi\rightarrow e^{\xi}=\sum _{k=0}^{\kappa}\frac{\xi^{k}}{k!}\in G^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) , \] is a diffeomorphism whose inverse is given by \begin{equation} \log\left( 1+\xi\right) =\sum_{k=1}^{\kappa}\frac{\left( -1\right) ^{k+1} }{k}\xi^{k}. \label{e.1.10} \end{equation} [See Section \ref{sec.3} for more details.] We will mostly only use the following subgroup of $G^{\left( \kappa\right) }\left( \mathbb{R} ^{d}\right) .$
\begin{definition} [Free Nilpotent Lie Groups]\label{not.1.18}For $\kappa\in\mathbb{N},$ let $G_{\text{geo}}^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \subset G^{\left( \kappa\right) }$ be the simply connected Lie subgroup of $G^{\left( \kappa\right) }=1+\oplus_{k=1}^{\kappa}\left[ \mathbb{R} ^{d}\right] ^{\otimes k}$ whose Lie algebra is $F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) .$ This subgroup is a step-$\kappa$ (free) nilpotent Lie group which we refer to as the \textbf{geometric sub-group }of $G^{\left( \kappa\right) }.$ \end{definition}
It is well known as a consequence of the Baker-Campbell-Hausdorff-Dynkin formula (see Proposition \ref{pro.3.12} of Section \ref{sec.3}) that the exponential map restricted to $F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) ,$ \[ F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \ni\xi\rightarrow e^{\xi}=\sum_{k=0}^{\kappa}\frac{\xi^{k}}{k!}\in G_{\text{geo}}^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) , \] is again a diffeomorphism.
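To see what this looks like in the simplest non-abelian case, take $\kappa=2.$ For $A,B\in F^{\left( 2\right) }\left( \mathbb{R}^{d}\right) $ a short computation in $T^{\left( 2\right) }\left( \mathbb{R}^{d}\right) $ gives $e^{A}e^{B}=1+\left( A_{1}+B_{1}\right) +\left( A_{2}+B_{2}+\frac{1}{2}A_{1}^{2}+\frac{1}{2}B_{1}^{2}+A_{1}B_{1}\right) ,$ and applying Eq. (\ref{e.1.10}), i.e. subtracting $\frac{1}{2}\left( A_{1}+B_{1}\right) ^{2},$ leaves \[ \log\left( e^{A}e^{B}\right) =A+B+\frac{1}{2}\left[ A_{1},B_{1}\right] =A+B+\frac{1}{2}\left[ A,B\right] \in F^{\left( 2\right) }\left( \mathbb{R}^{d}\right) , \] all higher Baker-Campbell-Hausdorff-Dynkin terms being truncated away.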
\begin{notation} \label{not.1.19}Let $\mathrm{LD}\left( C^{\infty}\left( M,\mathbb{R}\right) \right) $ denote the algebra of smooth linear differential operators acting on $C^{\infty}\left( M,\mathbb{R}\right) .$ \end{notation}
As usual we view the smooth vector fields, $\Gamma\left( TM\right) ,$ on $M$ as a subspace of $\mathrm{LD}\left( C^{\infty}\left( M,\mathbb{R}\right) \right) .$
\begin{definition} [Dynamical systems]\label{def.1.20}A $d$\textbf{-dimensional dynamical system} on $M$ is a linear map, $\mathbb{R}^{d}\ni w\rightarrow V_{w}\in\Gamma\left( TM\right) .$ \end{definition}
A $d$-dimensional dynamical system on $M$ is completely determined by knowing $\left\{ V_{e_{j}}\right\} _{j=1}^{d}\subset\Gamma\left( TM\right) $ where $\left\{ e_{j}\right\} _{j=1}^{d}$ is the standard basis for $\mathbb{R} ^{d}.$ The tensor algebra, $T\left( \mathbb{R}^{d}\right) ,$ of Definition \ref{def.1.10} satisfies the following universal property; if $\mathcal{A}$ is another associative algebra with identity and $V:\mathbb{R}^{d}\rightarrow\mathcal{A}$ is a linear map, then $V$ extends uniquely to an algebra homomorphism from $T\left( \mathbb{R}^{d}\right) $ to $\mathcal{A}$ which we still denote by $V.$ The extension is uniquely determined by $V_{1} =1_{\mathcal{A}}$ and $V_{v_{1}\otimes\dots\otimes v_{k}}=V_{v_{1}}\dots V_{v_{k}}$ for all $v_{i}\in\mathbb{R}^{d}$ and $k\in\mathbb{N}.$ The following example is of primary importance to this paper.
\begin{example} \label{ex.1.21}Every $d$-dimensional dynamical system on $M,$ $\mathbb{R} ^{d}\ni w\rightarrow V_{w}\in\Gamma\left( TM\right) \subset\mathrm{LD} \left( C^{\infty}\left( M,\mathbb{R}\right) \right) ,$ extends to an algebra homomorphism from $T\left( \mathbb{R}^{d}\right) $ to $\mathrm{LD} \left( C^{\infty}\left( M,\mathbb{R}\right) \right) .$ We will still denote this extension by $V.$ Because of Remark \ref{rem.1.12}, it is easy to see that $V\left( F\left( \mathbb{R}^{d}\right) \right) \subset
\Gamma\left( TM\right) $ and $V|_{F\left( \mathbb{R}^{d}\right) }:F\left( \mathbb{R}^{d}\right) \rightarrow\Gamma\left( TM\right) $ is a Lie algebra homomorphism. \end{example}
\begin{notation} [Extension of $V$ to $F^{\left( \kappa\right) }\left( \mathbb{R}
^{d}\right) $]\label{not.1.22}The restriction, $V|_{F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) },$ of $V$ to the subspace $F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) $ of $F\left( \mathbb{R} ^{d}\right) $ will be denoted by $V^{\left( \kappa\right) }:F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \rightarrow\Gamma\left( TM\right) .$ \end{notation}
\begin{remark} \label{rem.1.23}It is \textbf{not} generally true that $V^{\left(
\kappa\right) }:=V|_{F^{\left( \kappa\right) }\left( \mathbb{R} ^{d}\right) }:F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \rightarrow\Gamma\left( TM\right) $ is a Lie algebra homomorphism. In order for this to be true we must require that $\operatorname{ad}_{V_{a_{\kappa}} }\dots\operatorname{ad}_{V_{a_{1}}}V_{a_{0}}=0$ for all $\left\{ a_{j}\right\} _{j=0}^{\kappa}\subset\mathbb{R}^{d},$ i.e. $\left\{ V_{a}:a\in\mathbb{R}^{d}\right\} $ should generate a \textbf{step-}$\kappa $\textbf{ nilpotent Lie sub-algebra} of $\Gamma\left( TM\right) .$ \end{remark}
\begin{definition} [Dynamical System Norms]\label{def.1.24}If $V$ is a dynamical system and $\kappa\in\mathbb{N},$ we let \begin{align} \left\vert V^{\left( \kappa\right) }\right\vert _{M} & :=\sup\left\{ \left\vert V_{A}\right\vert _{M}:A\in F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \text{ with }\left\vert A\right\vert =1\right\} ,\label{e.1.11}\\ \left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M} & :=\sup\left\{ \left\vert \nabla V_{A}\right\vert _{M}:A\in F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \text{ with }\left\vert A\right\vert =1\right\} ,\label{e.1.12}\\ \left\vert \nabla^{2}V^{\left( \kappa\right) }\right\vert _{M} & :=\sup\left\{ \left\vert \nabla^{2}V_{A}\right\vert _{M}:A\in F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \text{ with }\left\vert A\right\vert =1\right\} ,\text{ and}\label{e.1.13}\\ H_{M}\left( V^{\left( \kappa\right) }\right) & :=\sup\left\{ H_{M}\left( V_{A}\right) :A\in F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \text{ with }\left\vert A\right\vert =1\right\} \label{e.1.14} \end{align} where we allow for the possibility that any of these expressions might be infinite. [Recall that $H_{M}\left( V_{A}\right) $ is defined in Notation \ref{not.1.5} and Example \ref{ex.1.6}.] \end{definition}
\subsection{Approximate logarithm theorems\label{sec.1.3}}
\begin{definition} [See Definition \ref{def.3.6}]\label{def.1.25}For $\xi\in C^{1}\left( \left[ 0,T\right] ,F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) ,$ let $g^{\xi}\in C^{1}\left( \left[ 0,T\right] ,G_{\text{geo}}^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) $ denote the solution to the ODE, \begin{equation} \dot{g}^{\xi}\left( t\right) =g^{\xi}\left( t\right) \dot{\xi}\left( t\right) \text{ with }g^{\xi}\left( 0\right) =1 \label{e.1.15} \end{equation} and \begin{equation} C^{\xi}\left( t\right) :=\log\left( g^{\xi}\left( t\right) \right) =\sum_{k=1}^{\kappa}\frac{\left( -1\right) ^{k+1}}{k}\left( g^{\xi}\left( t\right) -1\right) ^{k}\in F^{\left( \kappa\right) }\left( \mathbb{R} ^{d}\right) . \label{e.1.16} \end{equation}
\end{definition}
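To make Definition \ref{def.1.25} concrete in the simplest case, suppose $\kappa=2$ and $\xi\in C^{1}\left( \left[ 0,T\right] ,\mathbb{R}^{d}\right) $ with $\xi\left( 0\right) =0,$ viewed as taking values in $F^{\left( 2\right) }\left( \mathbb{R}^{d}\right) .$ Solving Eq. (\ref{e.1.15}) degree by degree gives $g^{\xi}\left( t\right) =1+\xi\left( t\right) +\int_{0}^{t}\xi\left( s\right) \otimes\dot{\xi}\left( s\right) ds$ and then Eq. (\ref{e.1.16}) yields \[ C^{\xi}\left( t\right) =\xi\left( t\right) +\int_{0}^{t}\xi\otimes\dot{\xi}~ds-\frac{1}{2}\xi\left( t\right) ^{\otimes2}=\xi\left( t\right) +\frac{1}{2}\int_{0}^{t}\left[ \xi\left( s\right) ,\dot{\xi}\left( s\right) \right] ds, \] where the last equality follows from $\xi\left( t\right) ^{\otimes2}=\int_{0}^{t}\left( \dot{\xi}\otimes\xi+\xi\otimes\dot{\xi}\right) ds.$ This recovers the first two terms of the classical Magnus expansion.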
\begin{notation} \label{not.1.26}For $f,g\in C^{1}\left( M,M\right) ,$ let \begin{align*} d_{M}\left( f,g\right) & :=\sup_{m\in M}d\left( f\left( m\right) ,g\left( m\right) \right) \text{ and }\\ d_{M}^{TM}\left( f_{\ast},g_{\ast}\right) & :=\sup_{v\in TM:\left\vert v\right\vert =1}d^{TM}\left( f_{\ast}v,g_{\ast}v\right) \end{align*} where again $d^{TM}$ is defined in Section \ref{sec.5} below. \end{notation}
\begin{definition} [$\kappa$-complete]\label{def.1.27}We say that a dynamical system, $\mathbb{R}^{d}\ni w\rightarrow V_{w}\in\Gamma\left( TM\right) ,$ is $\kappa$\textbf{-complete }if for any $\xi\in C^{1}\left( \left[ 0,T\right] ,F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) $ the time dependent vector-field, $\left[ 0,T\right] \ni t\rightarrow V_{\dot{\xi }\left( t\right) }\in\Gamma\left( TM\right) ,$ is complete as defined in Definition \ref{def.1.1}. \end{definition}
\begin{ass} \label{ass.1}Unless otherwise stated, the dynamical system $V:\mathbb{R} ^{d}\rightarrow\Gamma\left( TM\right) $ is assumed to be $\kappa$-complete. \end{ass}
The next two theorems are the main theorems of this paper. The first theorem is a combination of Theorem \ref{thm.4.12}, Eq. (\ref{e.4.18}), and Corollary \ref{cor.4.16}. To simplify the statements we first introduce the following notation.
\begin{notation} \label{not.1.28}For $\lambda\geq0$ and $m,n\in\mathbb{N}$ with $m<n,$ let \begin{align*} Q_{[m,n]}\left( \lambda\right) & :=\max\left\{ \lambda^{k}:k\in \mathbb{N}\cap\left[ m,n\right] \right\} =\max\left\{ \lambda^{m} ,\lambda^{n}\right\} \text{ and}\\ Q_{(m,n]}\left( \lambda\right) & =Q_{[m+1,n]}\left( \lambda\right) :=\max\left\{ \lambda^{k}:k\in\mathbb{N}\cap(m,n]\right\} =\max\left\{ \lambda^{m+1},\lambda^{n}\right\} . \end{align*}
\end{notation}
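Note, for example, that $Q_{(\kappa,\kappa+1]}\left( \lambda\right) =\lambda^{\kappa+1}$ since $\mathbb{N}\cap(\kappa,\kappa+1]=\left\{ \kappa+1\right\} ;$ this is the source of the $O\left( \lambda^{\kappa+1}\right) $ rate appearing in Remark \ref{rem.1.31} below. More generally, $Q_{[m,n]}\left( \lambda\right) =\lambda^{m}$ when $\lambda\leq1$ and $Q_{[m,n]}\left( \lambda\right) =\lambda^{n}$ when $\lambda\geq1,$ which explains why only the extreme exponents survive in Notation \ref{not.1.28}.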
\begin{notation} \label{not.1.29}Given two functions, $f\left( x\right) $ and $g\left( x\right) ,$ depending on some parameters indicated by $x,$ we write $f\left( x\right) \lesssim g\left( x\right) $ if there exists a constant, $C\left( \kappa\right) ,$ only possibly depending on $\kappa$ so that $f\left( x\right) \leq C\left( \kappa\right) g\left( x\right) $ for the allowed values of $x.$ Similarly we write $f\left( x\right) \asymp g\left( x\right) $ if both $f\left( x\right) \lesssim g\left( x\right) $ and $g\left( x\right) \lesssim f\left( x\right) $ hold. \end{notation}
\begin{theorem} \label{thm.1.30}There is a constant $c\left( \kappa\right) <\infty$ such that \begin{align*} d_{M} & \left( \mu_{T,0}^{V_{\dot{\xi}}},e^{V_{\log\left( g^{\xi}\left( T\right) \right) }}\right) \\ & \lesssim\left\vert V^{\left( \kappa\right) }\right\vert _{M}\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}e^{c\left( \kappa\right) \left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}Q_{\left[ 1,\kappa\right] }\left( N_{T}^{\ast}\left( \dot{\xi}\right) \right) }Q_{(\kappa,\kappa+1]}\left( N_{T}^{\ast}\left( \dot{\xi}\right) \right) \end{align*} for every $\xi\in C^{1}\left( \left[ 0,T\right] ,F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) .$ Moreover, if $A,B\in F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) ,$ then \[ d_{M}\left( e^{V_{B}},Id_{M}\right) \leq\left\vert V^{\left( \kappa\right) }\right\vert _{M}\left\vert B\right\vert \leq\left\vert V^{\left( \kappa\right) }\right\vert _{M}Q_{\left[ 1,\kappa\right] }\left( N\left( B\right) \right) \] and \begin{align*} d_{M} & \left( e^{V_{B}}\circ e^{V_{A}},e^{V_{\log\left( e^{A} e^{B}\right) }}\right) \\ & \lesssim\mathcal{K}_{0}N\left( A\right) N\left( B\right) Q_{\left[ \kappa-1,2\kappa-2\right] }\left( N\left( A\right) +N\left( B\right) \right) \end{align*} where \[ \mathcal{K}_{0}:=\left\vert V^{\left( \kappa\right) }\right\vert _{M}\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}e^{c\left( \kappa\right) \left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}Q_{\left[ 1,\kappa\right] }\left( N\left( A\right) +N\left( B\right) \right) }. \]
\end{theorem}
\begin{remark} [Dilating Theorem \ref{thm.1.30}]\label{rem.1.31}If we define the dilation homomorphism, $\delta_{\lambda}:T^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \rightarrow T^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) ,$ where $\delta_{\lambda}A=\sum_{k=0}^{\kappa} \lambda^{k}A_{k}$ for $\lambda>0$ and $A\in T^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) ,$ then $N_{T}^{\ast}\left( \delta_{\lambda}\dot{\xi}\right) =\lambda N_{T}^{\ast}\left( \dot{\xi}\right) $ and hence it follows from Theorem \ref{thm.1.30} that \[ d_{M}\left( \mu_{T,0}^{V_{\delta_{\lambda}\dot{\xi}}},e^{V_{\log\left( g^{\delta_{\lambda}\xi}\left( T\right) \right) }}\right) =O\left( \lambda^{\kappa+1}\right) \text{ as }\lambda\rightarrow0. \] It is also easy to verify, 1) $N\left( \delta_{\lambda}A\right) =\lambda N\left( A\right) $ for all $A\in F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) ,$ 2) $g^{\delta_{\lambda}\xi}=\delta_{\lambda}\left( g^{\xi}\right) $ (apply the algebra homomorphism $\delta_{\lambda}$ to the ODE (\ref{e.1.15})), \[ \text{3)~}\log\left( g^{\delta_{\lambda}\xi}\right) =\log\left( \delta_{\lambda}\left( g^{\xi}\right) \right) =\delta_{\lambda}\log\left( g^{\xi}\right) , \] and 4) $\delta_{\lambda}\dot{\xi}\left( t\right) =\lambda\dot{\xi}\left( t\right) $ in the special case where $\xi\left( t\right) \in\mathbb{R} ^{d}\subset F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) .$ \end{remark}
The next theorem (which is a combination of Theorem \ref{thm.8.4}, Eq. (\ref{e.4.19}), and Corollary \ref{cor.8.5}) is an analogue of Theorem \ref{thm.1.30} for the differentials of $\mu_{T,0}^{V_{\dot{\xi}}}$ and $e^{V_{\log\left( g^{\xi}\left( T\right) \right) }}.$
\begin{theorem} \label{thm.1.32}If $\xi\in C^{1}\left( \left[ 0,T\right] ,F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) ,$ then \[ d_{M}^{TM}\left( \mu_{T,0\ast}^{V_{\dot{\xi}}},e_{\ast}^{V_{\log\left( g^{\xi}\left( T\right) \right) }}\right) \leq\mathcal{K}\cdot Q_{(\kappa,2\kappa]}\left( N_{T}^{\ast}\left( \dot{\xi}\right) \right) , \] where \[ \mathcal{K=K}\left( T,\left\vert V^{\left( \kappa\right) }\right\vert _{M},\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M},\left\vert \nabla^{2}V^{\left( \kappa\right) }\right\vert _{M},\left\vert R\left( V_{\cdot},\bullet\right) \right\vert _{M},N_{T}^{\ast}\left( \dot{\xi }\right) \right) \] is a (fairly complicated) increasing function of each of its arguments. Moreover, if $A,B\in F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) ,$ then \[ d_{M}^{TM}\left( e_{\ast}^{V_{B}},Id_{TM}\right) \leq\left\vert V^{\left( \kappa\right) }\right\vert _{M}\left\vert B\right\vert \leq\left\vert V^{\left( \kappa\right) }\right\vert _{M}Q_{\left[ 1,\kappa\right] }\left( N\left( B\right) \right) \] and \begin{align*} d_{M}^{TM} & \left( \left[ e^{V_{B}}\circ e^{V_{A}}\right] _{\ast },e_{\ast}^{V_{\log\left( e^{A}e^{B}\right) }}\right) \\ & \leq\mathcal{K}_{1}\cdot N\left( A\right) N\left( B\right) Q_{(\kappa-1,2\left( \kappa-1\right) ]}\left( N\left( A\right) +N\left( B\right) \right) , \end{align*} where \[ \mathcal{K}_{1}=\mathcal{K}_{1}\left( \left\vert V^{\left( \kappa\right) }\right\vert _{M},\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M},H_{M}\left( V^{\left( \kappa\right) }\right) ,N\left( A\right) \vee N\left( B\right) \right) . \]
\end{theorem}
This paper separates into two parts. The first part consists of Sections \ref{sec.2}--\ref{sec.4}, which develop the results needed to prove Theorem \ref{thm.1.30} estimating the error between the flow $\mu_{T,0}^{V_{\dot{\xi}}}$ and $e^{V_{\log\left( g^{\xi}\left( T\right) \right) }}.$ The second part of the paper consists of Sections \ref{sec.5}--\ref{sec.8}, where the tools are developed to estimate the error between the differentials of $\mu _{T,0}^{V_{\dot{\xi}}}$ and $e^{V_{\log\left( g^{\xi}\left( T\right) \right) }}$ given in Theorem \ref{thm.1.32}. The computations in the second part are necessarily more complicated and this is where the curvature of $M$ enters the scene. Lastly, Appendix \ref{sec.9} gathers some basic Gronwall type estimates used in the body of this paper.
\subsection{Acknowledgments}
The author is very grateful to Masha Gordina for many illuminating conversations on this work, and for her hospitality and that of the mathematics department at the University of Connecticut, where this work was started while the author was on sabbatical in the Fall of 2017.
\section{Geometric notation and background\label{sec.2}}
\subsection{Riemannian distance\label{sec.2.1}}
Given $-\infty<a<b<\infty,$ a path, $\sigma\in C\left( \left[ a,b\right] \rightarrow M\right) ,$ is said to be \textbf{absolutely continuous} provided that for any chart $x$ on $M$ (with domain $\mathcal{D}\left( x\right) )$ and any closed subinterval, $J\subset\left[ a,b\right] ,$ of positive length such that $\sigma\left( J\right) \subset\mathcal{D}\left( x\right) ,$ the map $x\circ\sigma|_{J}:J\rightarrow\mathbb{R}^{d}$ is absolutely continuous.
\begin{notation} \label{not.2.1}For $-\infty<a<b<\infty,$ let $AC\left( \left[ a,b\right] \rightarrow M\right) $ denote the \textbf{absolutely continuous paths} from $\left[ a,b\right] $ to $M$. Moreover, if $p,q\in M,$ let \[ AC_{p,q}\left( \left[ a,b\right] \rightarrow M\right) :=\left\{ \sigma\in AC\left( \left[ a,b\right] \rightarrow M\right) :\sigma\left( a\right) =p\text{ and }\sigma\left( b\right) =q\right\} . \] The\textbf{ length}, $\ell_{M}\left( \sigma\right) ,$ of a path in $\sigma\in AC\left( \left[ a,b\right] \rightarrow M\right) $ is defined by \[ \ell_{M}\left( \sigma\right) :=\int_{a}^{b}\left\vert \dot{\sigma}\left( t\right) \right\vert dt \] and (as usual) the \textbf{distance }between $m,m^{\prime}\in M$ is defined by \[ d\left( m,m^{\prime}\right) :=\inf\left\{ \ell_{M}\left( \sigma\right) :\sigma\in AC_{m,m^{\prime}}\left( \left[ 0,1\right] \rightarrow M\right) \right\} . \] Given $v\in TM,$ let $\sigma_{v}\left( t\right) $ be the geodesic in $M$ such that $\dot{\sigma}_{v}\left( 0\right) =v,$ $\exp\left( v\right) =\sigma_{v}\left( 1\right) \in M$ for those $v\in TM$ such that $\sigma _{v}\left( 1\right) $ exists, and for $m\in M$ we let $\exp_{m}
:=\exp|_{T_{m}M}:T_{m}M\rightarrow M.$ \end{notation}
Throughout this paper we will use the following geometric notations.
\begin{notation} [Metric vector bundles and connections]\label{not.2.2}Let $\left( M,g\right) $ be a Riemannian manifold and $\pi:E\rightarrow M$ be a real vector bundle over $M$ (with fiber dimension $D)$ equipped with a fiber metric, $\left\langle \cdot,\cdot\right\rangle _{E}.$ We further assume that $E$ is equipped with a metric compatible covariant derivative, $\nabla=\nabla^{E}$. [Typically we are interested in the setting where $E=TM$ in which case we always take $\nabla=\nabla^{TM}$ to be the Levi-Civita covariant derivative on $TM.]$ Further,
\begin{enumerate} \item let $E_{m}:=\pi^{-1}\left( \left\{ m\right\} \right) $ be the fiber over $m$ which is isomorphic to $\mathbb{R}^{D},$
\item let $\pt_{t}^{\nabla}\left( \sigma\right) :E_{\sigma\left( a\right) }\rightarrow E_{\sigma\left( t\right) }$ denote parallel translation along a curve $\sigma\in C^{1}\left( \left[ a,b\right] \rightarrow M\right) $ or more generally along $\sigma\in AC\left( \left[ a,b\right] \rightarrow M\right) $ -- the space of $M$-valued absolutely continuous paths on $\left[ a,b\right] ,$ and
\item if $\xi\left( t\right) \in E_{\sigma\left( t\right) }$ for $t\in\left[ a,b\right] ,$ let \[ \nabla_{t}\xi\left( t\right) =\frac{\nabla\xi}{dt}\left( t\right) :=\pt_{t}\left( \sigma\right) \frac{d}{dt}\left[ \pt_{t}\left( \sigma\right) ^{-1}\xi\left( t\right) \right] . \]
\end{enumerate} \end{notation}
By assumption, for every $m\in M,$ there exists an open neighborhood, $W,$ of $m$ and a smooth function $W\times\mathbb{R}^{D}\ni\left( m,\alpha\right) \rightarrow u\left( m\right) \alpha\in E$ such that $u\left( m\right) :\mathbb{R}^{D}\rightarrow E_{m}$ is an isometric isomorphism of inner product spaces. We refer to $\left( u,W\right) $ as \textbf{a (local) orthogonal frame of }$E.$ We also let $SO\left( \mathbb{R}^{D}\right) $ be the group of $D\times D$ real orthogonal matrices with determinant equal to $1$ and let $so\left( \mathbb{R}^{D}\right) $ be its Lie algebra of $D\times D$ real skew-symmetric matrices.
\begin{remark} [Local model for $E$]\label{rem.2.3}As described just above, after choosing a local orthogonal frame, we may identify (locally) $E$ with the trivial bundle $W\times\mathbb{R}^{D}$ where $W$ is an open subset of $M.$ In this local model we have;
\begin{enumerate} \item $\pi\left( m,\alpha\right) =m$ for all $m\in W$ and $\alpha \in\mathbb{R}^{D}.$
\item $\left\langle \left( m,\alpha\right) ,\left( m,\beta\right) \right\rangle =\alpha\cdot\beta$ for all $m\in W$ and $\alpha,\beta \in\mathbb{R}^{D}.$
\item There exists an $so\left( \mathbb{R}^{D}\right) $-valued one-form, $\Gamma,$ such that if $S\left( m\right) =\left( m,\alpha\left( m\right) \right) $ is a section of $E$ and $v\in T_{m}W,$ then \[ \nabla_{v}S=\left( m,d\alpha\left( v_{m}\right) +\Gamma\left( v_{m}\right) \alpha\left( m\right) \right) . \]
\item If $\sigma\in C^{1}\left( \left[ a,b\right] \rightarrow W\right) ,$ then $\pt_{t}\left( \sigma\right) \left( \sigma\left( a\right) ,\alpha\right) =\left( \sigma\left( t\right) ,g\left( t\right) \alpha\right) $ where $g\left( t\right) \in SO\left( \mathbb{R} ^{D}\right) $ is the solution to the ordinary differential equation, \[ \dot{g}\left( t\right) +\Gamma\left( \dot{\sigma}\left( t\right) \right) g\left( t\right) =0\text{ with }g\left( a\right) =I_{\mathbb{R}^{D}}. \]
\item If $\xi\left( t\right) =\left( \sigma\left( t\right) ,\alpha\left( t\right) \right) $ is a $C^{1}$-path in $E,$ then \begin{equation} \frac{\nabla\xi}{dt}\left( t\right) =\left( \sigma\left( t\right) ,\dot{\alpha}\left( t\right) +\Gamma\left( \dot{\sigma}\left( t\right) \right) \alpha\left( t\right) \right) . \label{e.2.1} \end{equation}
\end{enumerate}
For completeness, here is the verification of Eq. (\ref{e.2.1}); \begin{align*} \frac{d}{dt}\left[ \pt_{t}\left( \sigma\right) ^{-1}\xi\left( t\right) \right] & =\frac{d}{dt}\left( \sigma\left( a\right) ,g\left( t\right) ^{-1}\alpha\left( t\right) \right) \\ & =\left( \sigma\left( a\right) ,g\left( t\right) ^{-1}\dot{\alpha }\left( t\right) -g\left( t\right) ^{-1}\dot{g}\left( t\right) g\left( t\right) ^{-1}\alpha\left( t\right) \right) \\ & =\left( \sigma\left( a\right) ,g\left( t\right) ^{-1}\dot{\alpha }\left( t\right) +g\left( t\right) ^{-1}\Gamma\left( \dot{\sigma}\left( t\right) \right) \alpha\left( t\right) \right) \\ & =\pt_{t}\left( \sigma\right) ^{-1}\left( \sigma\left( t\right) ,\dot{\alpha}\left( t\right) +\Gamma\left( \dot{\sigma}\left( t\right) \right) \alpha\left( t\right) \right) . \end{align*}
\end{remark}
The next elementary lemma illustrates how the structures in Notation \ref{not.2.2} fit together.
\begin{lemma} \label{lem.2.4}If $\xi:\left[ a,b\right] \rightarrow E$ is a $C^{1}$-curve and $\sigma:=\pi\circ\xi\in C^{1}\left( \left[ a,b\right] ,M\right) ,$ then \begin{equation} \left\vert \left\vert \xi\left( b\right) \right\vert -\left\vert \xi\left( a\right) \right\vert \right\vert \leq\left\vert \pt_{b}\left( \sigma\right) ^{-1}\xi\left( b\right) -\xi\left( a\right) \right\vert \leq\int_{a} ^{b}\left\vert \frac{\nabla}{dt}\xi\left( t\right) \right\vert dt. \label{e.2.2} \end{equation}
\end{lemma}
\begin{proof} By the metric compatibility of $\nabla,$ $\left\vert \xi\left( b\right) \right\vert =\left\vert \pt_{b}\left( \sigma\right) ^{-1}\xi\left( b\right) \right\vert $ and therefore \[ \left\vert \left\vert \xi\left( b\right) \right\vert -\left\vert \xi\left( a\right) \right\vert \right\vert =\left\vert \left\vert \pt_{b}\left( \sigma\right) ^{-1}\xi\left( b\right) \right\vert -\left\vert \xi\left( a\right) \right\vert \right\vert \leq\left\vert \pt_{b}\left( \sigma\right) ^{-1}\xi\left( b\right) -\xi\left( a\right) \right\vert \] which proves the first inequality in Eq. (\ref{e.2.2}). By the fundamental theorem of calculus and the definition of $\frac{\nabla}{dt},$ \begin{align*} \pt_{b}\left( \sigma\right) ^{-1}\xi\left( b\right) -\xi\left( a\right) & =\int_{a}^{b}\frac{d}{dt}\left[ \pt_{t}\left( \sigma\right) ^{-1} \xi\left( t\right) \right] dt\\ & =\int_{a}^{b}\pt_{t}\left( \sigma\right) ^{-1}\frac{\nabla}{dt}\xi\left( t\right) dt. \end{align*} The second inequality in Eq. (\ref{e.2.2}) now follows from this identity and the triangle inequality for vector valued integrals, \[ \left\vert \int_{a}^{b}\pt_{t}\left( \sigma\right) ^{-1}\frac{\nabla}{dt} \xi\left( t\right) dt\right\vert \leq\int_{a}^{b}\left\vert \pt_{t}\left( \sigma\right) ^{-1}\frac{\nabla}{dt}\xi\left( t\right) \right\vert dt=\int_{a}^{b}\left\vert \frac{\nabla}{dt}\xi\left( t\right) \right\vert dt. \]
\end{proof}
\begin{definition} \label{def.2.5}If $X\in\Gamma\left( TM\right) $ and $v_{m},w_{m}\in T_{m}M,$ let \[
\nabla_{v_{m}\otimes w_{m}}^{2}X:=\frac{d}{dt}|_{0}\left[ \pt_{t}\left( \sigma\right) ^{-1}\left( \nabla_{\pt_{t}\left( \sigma\right) w_{m} }X\right) \right] \] where $\sigma\left( t\right) \in M$ is chosen so that $\dot{\sigma}\left( 0\right) =v_{m}.$ In this notation the curvature tensor may be defined by \[ R\left( v_{m},w_{m}\right) \xi_{m}=\nabla_{v_{m}\otimes w_{m}}^{2} X-\nabla_{w_{m}\otimes v_{m}}^{2}X, \] where $X\in\Gamma\left( TM\right) $ is any vector field such that $X\left( m\right) =\xi_{m}\in T_{m}M.$ \end{definition}
\begin{remark} \label{rem.2.6}If $W,X\in\Gamma\left( TM\right) $ and $v_{m}=\dot{\sigma }\left( 0\right) \in T_{m}M,$ then \begin{align}
\nabla_{v_{m}}\nabla_{W}X & =\frac{d}{dt}|_{0}\pt_{t}\left( \sigma\right) ^{-1}\left( \nabla_{W}X\right) \left( \sigma\left( t\right) \right) \nonumber\\
& =\frac{d}{dt}|_{0}\left[ \pt_{t}\left( \sigma\right) ^{-1} \nabla_{W\left( \sigma\left( t\right) \right) }X\right] \nonumber\\
& =\frac{d}{dt}|_{0}\left[ \pt_{t}\left( \sigma\right) ^{-1} \nabla_{\pt_{t}\left( \sigma\right) \left[ \pt_{t}\left( \sigma\right) ^{-1}W\left( \sigma\left( t\right) \right) \right] }X\right] \nonumber\\
& =\frac{d}{dt}|_{0}\left[ \pt_{t}\left( \sigma\right) ^{-1} \nabla_{\pt_{t}\left( \sigma\right) W\left( m\right) }X\right] +\frac
{d}{dt}|_{0}\left[ \nabla_{\left[ \pt_{t}\left( \sigma\right) ^{-1}W\left( \sigma\left( t\right) \right) \right] }X\right] \nonumber\\ & =\nabla_{v_{m}\otimes W\left( m\right) }^{2}X+\nabla_{\nabla_{v_{m}}W}X. \label{e.2.3} \end{align} This shows two things; 1) that $\nabla_{v_{m}\otimes w_{m}}^{2}X$ is independent of the choice of curve, $\sigma\left( t\right) $ such that $\dot{\sigma}\left( 0\right) =v_{m}$ since \[ \nabla_{v_{m}\otimes W\left( m\right) }^{2}X=\nabla_{v_{m}}\nabla _{W}X-\nabla_{\nabla_{v_{m}}W}X, \] and 2) that with this definition of $\nabla^{2}X$ the natural product rule derived in Eq. (\ref{e.2.3}) holds. \end{remark}
\begin{definition} \label{def.2.7}For $f\in C^{1}\left( M,M\right) $ let $f_{\ast }:TM\rightarrow TM$ be the differential of $f,$ \begin{align*} \left\vert f_{\ast}\right\vert _{m} & :=\sup_{v\in T_{m}M:\left\vert v\right\vert =1}\left\vert f_{\ast}v\right\vert \text{ for each }m\in M,\text{ and}\\ \left\vert f_{\ast}\right\vert _{M} & :=\sup_{m\in M}\left\vert f_{\ast }\right\vert _{m}=\sup_{v\in TM:\left\vert v\right\vert =1}\left\vert f_{\ast }v\right\vert . \end{align*}
\end{definition}
\begin{definition} \label{def.2.8}We say $f\in C\left( M,M\right) $ is \textbf{Lipschitz} if there exists $K=K\left( f\right) <\infty$ such that \begin{equation} d\left( f\left( m\right) ,f\left( m^{\prime}\right) \right) \leq Kd\left( m,m^{\prime}\right) \text{ }\forall~m,m^{\prime}\in M. \label{e.2.4} \end{equation} The smallest $K\in\left[ 0,\infty\right] $ such that Eq. (\ref{e.2.4}) holds is denoted by $\operatorname{Lip}\left( f\right) ,$ i.e. \[ \operatorname{Lip}\left( f\right) :=\sup_{m\neq m^{\prime}}\frac{d\left( f\left( m\right) ,f\left( m^{\prime}\right) \right) }{d\left( m,m^{\prime}\right) }. \] We will write $\operatorname{Lip}\left( f\right) =\infty$ if $f$ is not Lipschitz. \end{definition}
\begin{lemma} \label{lem.2.9}If $f\in C^{1}\left( M,M\right) $ then $\operatorname{Lip} \left( f\right) =\left\vert f_{\ast}\right\vert _{M}$. \end{lemma}
\begin{proof} Let $m,m^{\prime}\in M$ and $\sigma\in AC\left( \left[ 0,1\right] ,M\right) $ such that $\sigma\left( 0\right) =m$ and $\sigma\left( 1\right) =m^{\prime}.$ Then $f\circ\sigma\in AC\left( \left[ 0,1\right] ,M\right) $ and $\frac{d}{dt}f\left( \sigma\left( t\right) \right) =f_{\ast}\dot{\sigma}\left( t\right) $ for a.e. $t$ and therefore, \begin{align*} d\left( f\left( m\right) ,f\left( m^{\prime}\right) \right) & \leq \ell\left( f\circ\sigma\right) =\int_{0}^{1}\left\vert f_{\ast}\dot{\sigma }\left( t\right) \right\vert dt\\ & \leq\int_{0}^{1}\left\vert f_{\ast}\right\vert _{M}\left\vert \dot{\sigma }\left( t\right) \right\vert dt=\left\vert f_{\ast}\right\vert _{M}\ell _{M}\left( \sigma\right) . \end{align*} Taking the infimum of this inequality over all $\sigma\in AC_{m,m^{\prime} }\left( \left[ 0,1\right] ,M\right) $ then shows \[ d\left( f\left( m\right) ,f\left( m^{\prime}\right) \right) \leq\left\vert f_{\ast}\right\vert _{M}d\left( m,m^{\prime}\right) \] which implies $\operatorname{Lip}\left( f\right) \leq\left\vert f_{\ast }\right\vert _{M}.$
For the opposite inequality let $m\in M,$ $v\in T_{m}M$ with $\left\vert v\right\vert =1,$ and let $\sigma_{v}\left( t\right) :=\exp_{m}\left( tv\right) $ for $t$ near $0.$ Further let $\gamma\left( t\right) $ be the smooth curve in $T_{f\left( m\right) }M$ satisfying $\gamma\left( 0\right) =0_{f\left( m\right) }$ and $f\left( \sigma_{v}\left( t\right) \right) =\exp_{f\left( m\right) }\left( \gamma\left( t\right) \right) .$ It then follows that \[ \dot{\gamma}\left( 0\right) =\left( \exp_{f\left( m\right) }\right) _{\ast}\dot{\gamma}\left( 0\right) =f_{\ast}\dot{\sigma}_{v}\left( 0\right) =f_{\ast}v\text{ } \] and for $t$ sufficiently close to $0\in\mathbb{R},$ that \[ \left\vert \gamma\left( t\right) \right\vert =d\left( f\left( m\right) ,f\left( \sigma_{v}\left( t\right) \right) \right) \leq\operatorname{Lip} \left( f\right) d\left( m,\sigma_{v}\left( t\right) \right) =\operatorname{Lip}\left( f\right) \left\vert v\right\vert \left\vert t\right\vert =\operatorname{Lip}\left( f\right) \left\vert t\right\vert . \] Since \[ \lim_{t\rightarrow0}\frac{1}{t}\gamma\left( t\right) =\lim_{t\rightarrow 0}\frac{1}{t}\left[ \gamma\left( t\right) -\gamma\left( 0\right) \right] =\dot{\gamma}\left( 0\right) =f_{\ast}v \] we may conclude that \[ \left\vert f_{\ast}v\right\vert =\lim_{t\rightarrow0}\left\vert \frac{1} {t}\gamma\left( t\right) \right\vert \leq\operatorname{Lip}\left( f\right) . \] As $v\in TM$ was arbitrary, it follows that $\left\vert f_{\ast}\right\vert _{M}\leq\operatorname{Lip}\left( f\right) .$ \end{proof}
\begin{lemma} \label{lem.2.10}If $X\in\Gamma\left( TM\right) $ satisfies, $\left\vert \nabla X\right\vert _{M}<\infty,$ then \begin{equation} \left\vert \left\vert X\left( p\right) \right\vert -\left\vert X\left( m\right) \right\vert \right\vert \leq\left\vert \nabla X\right\vert _{M}\cdot d\left( p,m\right) \text{ }\forall~m,p\in M, \label{e.2.5} \end{equation} i.e. $\operatorname{Lip}\left( \left\vert X\left( \cdot\right) \right\vert \right) \leq\left\vert \nabla X\right\vert _{M}.$ \end{lemma}
\begin{proof} Let $\sigma\in C^{1}\left( \left[ 0,1\right] ,M\right) $ satisfy $\sigma\left( 0\right) =m$ and $\sigma\left( 1\right) =p$ and define $\xi\left( t\right) =X\left( \sigma\left( t\right) \right) $ and note that \[ \left\vert \frac{\nabla}{dt}\xi\left( t\right) \right\vert =\left\vert \nabla_{\dot{\sigma}\left( t\right) }X\right\vert \leq\left\vert \nabla X\right\vert _{M}\left\vert \dot{\sigma}\left( t\right) \right\vert . \] Therefore by Lemma \ref{lem.2.4}, \begin{equation} \left\vert \left\vert X\left( p\right) \right\vert -\left\vert X\left( m\right) \right\vert \right\vert \leq\int_{0}^{1}\left\vert \frac{\nabla} {dt}\xi\left( t\right) \right\vert dt\leq\int_{0}^{1}\left\vert \nabla X\right\vert _{M}\left\vert \dot{\sigma}\left( t\right) \right\vert dt=\left\vert \nabla X\right\vert _{M}\cdot\ell\left( \sigma\right) . \label{e.2.6} \end{equation} Taking the infimum of the last term in this inequality over all paths joining $m$ to $p$ gives Eq. (\ref{e.2.5}). \end{proof}
\begin{theorem} [Distance estimates]\label{thm.2.11}Suppose that $\left[ 0,T\right] \ni t\rightarrow Y_{t}\in\Gamma\left( TM\right) $ is a smoothly varying time dependent vector field, $\left[ a,b\right] \subset\left[ 0,T\right] ,$ and $\sigma:\left[ a,b\right] \rightarrow M$ solves \[ \dot{\sigma}\left( t\right) =Y_{t}\left( \sigma\left( t\right) \right) \text{ for all }t\in\left[ a,b\right] \] where $\dot{\sigma}\left( a\right) $ and $\dot{\sigma}\left( b\right) $ are interpreted as appropriate one sided derivatives. Then for any $s,t\in\left[ a,b\right] $ and with $m:=\sigma\left( s\right) ,$ \begin{align*} d\left( \sigma\left( t\right) ,\sigma\left( s\right) \right) & \leq\left\vert Y\right\vert _{J\left( s,t\right) }^{\ast}\leq\left\vert Y\right\vert _{T}^{\ast}\text{ and}\\ d\left( \sigma\left( t\right) ,\sigma\left( s\right) \right) & \leq e^{\left\vert \nabla Y\right\vert _{J\left( s,t\right) }^{\ast}}\left\vert Y_{\cdot}\left( m\right) \right\vert _{J\left( s,t\right) }^{\ast}\leq e^{\left\vert \nabla Y\right\vert _{T}^{\ast}}\cdot\left\vert Y_{\cdot}\left( m\right) \right\vert _{T}^{\ast}.\text{ } \end{align*}
\end{theorem}
\begin{proof} Without loss of generality we may assume that $s\leq t.$ Since $d\left(
\sigma\left( t\right) ,\sigma\left( s\right) \right) $ is no more than the length of $\sigma|_{\left[ s,t\right] }$ we immediately find, \[ d\left( \sigma\left( t\right) ,\sigma\left( s\right) \right) \leq \int_{s}^{t}\left\vert \dot{\sigma}\left( \tau\right) \right\vert d\tau =\int_{s}^{t}\left\vert Y_{\tau}\left( \sigma\left( \tau\right) \right) \right\vert d\tau\leq\left\vert Y\right\vert _{J\left( s,t\right) }^{\ast} \leq\left\vert Y\right\vert _{T}^{\ast} \] which gives the first inequality. To prove the second inequality we use the estimate in Eq. (\ref{e.2.6}) with $X=Y_{t}$ to find, \begin{align}
\left\vert \dot{\sigma}\left( t\right) \right\vert & =\left\vert Y_{t}\left( \sigma\left( t\right) \right) \right\vert \leq\left\vert Y_{t}\left( \sigma\left( s\right) \right) \right\vert +\left\vert \nabla Y_{t}\right\vert _{M}\ell\left( \sigma|_{\left[ s,t\right] }\right) \nonumber\\ & =\left\vert Y_{t}\left( \sigma\left( s\right) \right) \right\vert +\left\vert \nabla Y_{t}\right\vert _{M}\int_{s}^{t}\left\vert \dot{\sigma }\left( r\right) \right\vert dr. \label{e.2.7} \end{align} If we define \[ \psi\left( \tau\right) :=\int_{s}^{s+\tau}\left\vert \dot{\sigma}\left( r\right) \right\vert dr\text{ for }0\leq\tau\leq b-s, \] then, with $m:=\sigma\left( s\right) ,$ the inequality in Eq. (\ref{e.2.7}) may be rewritten as, \[ \dot{\psi}\left( \tau\right) =\left\vert \dot{\sigma}\left( s+\tau\right) \right\vert \leq\left\vert Y_{s+\tau}\left( m\right) \right\vert +\left\vert \nabla Y_{s+\tau}\right\vert _{M}\psi\left( \tau\right) \text{ with } \psi\left( 0\right) =0. \] By Gronwall's inequality (see Proposition \ref{pro.9.1}) and a simple change of variables we find, \begin{align*} \int_{s}^{s+\tau}\left\vert \dot{\sigma}\left( r\right) \right\vert dr=\psi\left( \tau\right) \leq & \int_{0}^{\tau}e^{\int_{r}^{\tau }\left\vert \nabla Y_{s+u}\right\vert _{M}du}\cdot\left\vert Y_{s+r}\left( m\right) \right\vert dr\\ & =\int_{0}^{\tau}e^{\int_{s+r}^{s+\tau}\left\vert \nabla Y_{u}\right\vert _{M}du}\cdot\left\vert Y_{s+r}\left( m\right) \right\vert dr. \end{align*} Choosing $\tau$ so that $s+\tau=t$ and making another translational change of variables yields \begin{align*} d\left( \sigma\left( t\right) ,\sigma\left( s\right) \right) & \leq\int_{s}^{t}\left\vert \dot{\sigma}\left( r\right) \right\vert dr\leq\int_{0}^{t-s}e^{\int_{s+r}^{t}\left\vert \nabla Y_{u}\right\vert _{M}du}\cdot\left\vert Y_{s+r}\left( m\right) \right\vert dr\\ & =\int_{s}^{t}e^{\int_{\tau}^{t}\left\vert \nabla Y_{u}\right\vert _{M}du}\cdot\left\vert Y_{\tau}\left( m\right) \right\vert d\tau\leq e^{\left\vert \nabla Y\right\vert _{J\left( s,t\right) }^{\ast}}\left\vert Y_{\cdot}\left( m\right) \right\vert _{J\left( s,t\right) }^{\ast} \end{align*} which gives the second inequality. \end{proof}
\begin{corollary} \label{cor.2.12}If $\left( M,g\right) $ is a complete Riemannian manifold and either $\left\vert Y\right\vert _{T}^{\ast}<\infty$ or $\left\vert \nabla Y\right\vert _{T}^{\ast}<\infty,$ then $Y$ is complete. \end{corollary}
\begin{proof} Suppose that $s\in\left[ 0,T\right] $ and $m\in M$ are given and that $\sigma:\left( a,b\right) \rightarrow M$ is a maximal solution to the ODE, \[ \dot{\sigma}\left( t\right) =Y_{t}\left( \sigma\left( t\right) \right) \text{ with }\sigma\left( s\right) =m. \] In order to handle both cases at once, let $R:=\left\vert Y\right\vert _{T}^{\ast}$ or $R=e^{\left\vert \nabla Y\right\vert _{T}^{\ast}}\left\vert Y_{\cdot}\left( m\right) \right\vert _{T}^{\ast}$ so that according to Theorem \ref{thm.2.11}, $\sigma\left( t\right) \in K:=\overline{B\left( m,R\right) }$ for all $t\in\left( a,b\right) .$ Since $R<\infty$ and $M$ is complete we know that $K$ is compact and hence \[ \left\vert \sigma^{\prime}\left( t\right) \right\vert =\left\vert Y_{t}\left( \sigma\left( t\right) \right) \right\vert \leq C_{K} :=\max_{0\leq\tau\leq T~\&~p\in K}\left\vert Y_{\tau}\left( p\right) \right\vert <\infty. \] Thus it follows that \[ d\left( \sigma\left( t\right) ,\sigma\left( s\right) \right) \leq C_{K}\left\vert t-s\right\vert \text{ for }s,t\in\left( a,b\right) . \] From this we conclude that $\,\lim_{t\uparrow b}\sigma\left( t\right) $ exists as $\left\{ \sigma\left( t\right) \right\} _{t\uparrow b}$ is Cauchy and $\left( M,g\right) $ is complete and similarly, $\lim _{t\downarrow a}\sigma\left( t\right) $ exists and we may extend $\sigma$ to a continuous function on $\left[ a,b\right] .$
We now claim that the one sided derivatives of $\sigma\left( t\right) $ at $t=a$ and $t=b$ exist and are given by $Y_{a}\left( \sigma\left( a\right) \right) $ and $Y_{b}\left( \sigma\left( b\right) \right) $ respectively. Indeed, if $\lim_{t\uparrow b}\sigma\left( t\right) =p=:\sigma\left( b\right) $ and $f\in C^{\infty}\left( M\right) ,$ then for $a<t<b$ \begin{align*} f\left( \sigma\left( b\right) \right) -f\left( \sigma\left( t\right) \right) & =\lim_{\tau\uparrow b}f\left( \sigma\left( \tau\right) \right) -f\left( \sigma\left( t\right) \right) \\ & =\lim_{\tau\uparrow b}\int_{t}^{\tau}df\left( \sigma^{\prime}\left( r\right) \right) dr\\ & =\lim_{\tau\uparrow b}\int_{t}^{\tau}\left( Y_{r}f\right) \left( \sigma\left( r\right) \right) dr=\int_{t}^{b}\left( Y_{r}f\right) \left( \sigma\left( r\right) \right) dr \end{align*} and hence \[ \lim_{t\uparrow b}\frac{f\left( \sigma\left( b\right) \right) -f\left( \sigma\left( t\right) \right) }{b-t}=\lim_{t\uparrow b}\frac{1}{b-t} \int_{t}^{b}\left( Y_{r}f\right) \left( \sigma\left( r\right) \right) dr=\left( Y_{b}f\right) \left( \sigma\left( b\right) \right) . \] Since this holds for all $f\in C^{\infty}\left( M\right) ,$ it follows that $\sigma$ has a left derivative at $b$ given by $Y_{b}\left( \sigma\left( b\right) \right) .$ Similarly, one shows the right derivative of $\sigma\left( t\right) $ exists at $t=a$ and is given by $Y_{a}\left( \sigma\left( a\right) \right) .$
To complete the proof, for the sake of contradiction, suppose that $b<T.$ By local existence of ODEs we may find $\gamma:\left( b-\varepsilon ,b+\varepsilon\right) \rightarrow M$ such that \[ \dot{\gamma}\left( t\right) =Y_{t}\left( \gamma\left( t\right) \right) \text{ with }\gamma\left( b\right) =p=\sigma\left( b\right) . \] The path \[ \tilde{\sigma}\left( t\right) :=\left\{ \begin{array} [c]{ccc} \sigma\left( t\right) & \text{if} & 0\leq t\leq b\\ \gamma\left( t\right) & \text{if} & b\leq t<b+\varepsilon \end{array} \right. \] then satisfies the ODE on a longer time interval which violates the maximality of the solution and so we in fact must have $b=T.$ Similarly, one shows that $a$ must be $0$ as well and hence $Y$ is complete. \end{proof}
\begin{corollary} [$\left\vert \nabla X\right\vert _{M}<\infty$ growth implications] \label{cor.2.13}If $X\in\Gamma\left( TM\right) $ is a complete time independent vector field, then for all $m\in M,$ \begin{align} d\left( e^{X}\left( m\right) ,m\right) & \leq\left\vert X\right\vert _{M},\text{ and }\label{e.2.8}\\ d\left( e^{X}\left( m\right) ,m\right) & \leq\left\vert X\left( m\right) \right\vert \cdot e^{\left\vert \nabla X\right\vert _{M}}. \label{e.2.9} \end{align}
\end{corollary}
\begin{proof} This follows immediately from Theorem \ref{thm.2.11} with $Y_{t}=X$ for all $t$ and $\sigma\left( t\right) =e^{tX}\left( m\right) .$ The inequalities in the theorem are applied with $t=1$ and $s=0.$ \end{proof}
\subsection{Flows\label{sec.2.2}}
The next theorem recalls some basic properties of flows associated to complete time dependent vector fields.
\begin{theorem} \label{thm.2.14}Suppose that $J=\left[ 0,T\right] \ni t\rightarrow Y_{t} \in\Gamma\left( TM\right) $ is a smoothly varying complete vector field on $M$ and for fixed $s\in J$ and $m\in M,$ $J\ni t\rightarrow\mu_{t,s}\left( m\right) $ is the unique solution to the ODE, \[ \frac{d}{dt}\mu_{t,s}\left( m\right) =Y_{t}\left( \mu_{t,s}\left( m\right) \right) \text{ with }\mu_{s,s}\left( m\right) =m. \] Then;
\begin{enumerate} \item $J\times J\times M\ni\left( t,s,m\right) \rightarrow\mu_{t,s}\left( m\right) \in M$ is smooth.
\item For all $r,s,t\in J,$ \begin{equation} \mu_{t,s}\circ\mu_{s,r}=\mu_{t,r}. \label{e.2.10} \end{equation}
\item For all $s,t\in J,$ $\mu_{t,s}\in\mathrm{Diff}\left( M\right) ,$ $\mu_{t,s}^{-1}=\mu_{s,t},$ the map $\left( s,t,m\right) \rightarrow \mu_{t,s}^{-1}\left( m\right) $ is smooth and \begin{equation} \frac{d}{dt}\mu_{t,s}^{-1}=\dot{\mu}_{s,t}=-\left( \mu_{s,t}\right) _{\ast }Y_{t}=-\left( \mu_{t,s}^{-1}\right) _{\ast}Y_{t}. \label{e.2.11} \end{equation}
\item For all $s,t\in J,$ \begin{equation} \mu_{t,s}=\mu_{t,0}\circ\mu_{s,0}^{-1}. \label{e.2.12} \end{equation}
\end{enumerate} \end{theorem}
\begin{proof} We take each item in turn.
\begin{enumerate} \item The smoothness of $\left( t,s,m\right) \rightarrow\mu_{t,s}\left( m\right) $ is a basic consequence of the fact that solutions to ODEs depend smoothly on parameters and initial conditions. For example, if $\sigma\left( \tau\right) =\mu_{T\left( \tau\right) ,s}\left( m\right) $ with $T\left( \tau\right) =\tau+s,$ then \[ \frac{d}{d\tau}\left( \begin{array} [c]{c} \sigma\left( \tau\right) \\ T\left( \tau\right) \end{array} \right) =\left( \begin{array} [c]{c} Y_{T\left( \tau\right) }\left( \sigma\left( \tau\right) \right) \\ 1 \end{array} \right) \text{ with }\left( \begin{array} [c]{c} \sigma\left( 0\right) \\ T\left( 0\right) \end{array} \right) =\left( \begin{array} [c]{c} m\\ s \end{array} \right) \] and hence $\sigma\left( \tau,m,s\right) =\mu_{\tau+s,s}\left( m\right) $ depends smoothly on $\left( \tau,m,s\right) ,$ from which it follows that $\mu$ depends smoothly on all of its variables.
\item To prove Eq. (\ref{e.2.10}) simply notice that $t\rightarrow\mu _{t,s}\circ\mu_{s,r}$ and $t\rightarrow\mu_{t,r}$ both satisfy the differential equation, \[ \frac{d}{dt}\nu_{t}=Y_{t}\circ\nu_{t}\text{ with }\nu_{s}=\mu_{s,r}, \] and hence they agree by the uniqueness of solutions to this ODE.
\item Taking $r=t$ in Eq. (\ref{e.2.10}) gives, \begin{equation} \mu_{t,s}\circ\mu_{s,t}=\mu_{t,t}=Id_{M}\text{ for all }s,t\in J. \label{e.2.13} \end{equation} Interchanging $s$ and $t,$ the above equation may also be written as \begin{equation} \mu_{s,t}\circ\mu_{t,s}=Id_{M}\text{ for all }s,t\in J. \label{e.2.14} \end{equation} From these last two equations we see that $\mu_{t,s}\in\mathrm{Diff}\left( M\right) $ for all $s,t\in J$ and moreover that $\mu_{t,s}^{-1}=\mu_{s,t}$ which also shows the map $\left( s,t,m\right) \rightarrow\mu_{t,s} ^{-1}\left( m\right) $ is smooth. To prove Eq. (\ref{e.2.11}) we differentiate Eq. (\ref{e.2.13}) with respect to $t$ to find, \begin{align*} 0 & =\dot{\mu}_{t,s}\circ\mu_{s,t}+\left( \mu_{t,s}\right) _{\ast}\dot {\mu}_{s,t}=Y_{t}\circ\mu_{t,s}\circ\mu_{s,t}+\left( \mu_{t,s}\right) _{\ast}\dot{\mu}_{s,t}\\ & =Y_{t}+\left( \mu_{t,s}\right) _{\ast}\dot{\mu}_{s,t} \end{align*} and hence \[ \dot{\mu}_{s,t}=-\left( \mu_{t,s}\right) _{\ast}^{-1}Y_{t}=-\left( \mu_{s,t}\right) _{\ast}Y_{t}. \]
\item This one is easily deduced from what has already been proved: \[ \mu_{t,s}=\mu_{t,0}\circ\mu_{0,s}=\mu_{t,0}\circ\mu_{s,0}^{-1}. \]
\end{enumerate} \end{proof}
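For a quick illustration of these properties (included only for orientation), take $M=\mathbb{R}$ and $Y_{t}\left( x\right) =a\left( t\right) x$ for some smooth $a:\mathbb{R}\rightarrow\mathbb{R}.$ In this case $\mu_{t,s}\left( x\right) =e^{\int_{s}^{t}a\left( \tau\right) d\tau}x$ and Eq. (\ref{e.2.10}) amounts to the additivity of the integral, \[ \mu_{t,s}\circ\mu_{s,r}\left( x\right) =e^{\int_{s}^{t}a\left( \tau\right) d\tau}e^{\int_{r}^{s}a\left( \tau\right) d\tau}x=e^{\int_{r}^{t}a\left( \tau\right) d\tau}x=\mu_{t,r}\left( x\right) , \] while $\mu_{t,s}^{-1}=\mu_{s,t}$ reduces to $e^{\int_{s}^{t}a\left( \tau\right) d\tau}\cdot e^{\int_{t}^{s}a\left( \tau\right) d\tau}=1.$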
\subsection{$\mathrm{Diff}\left( M\right) $-Adjoint Action\label{sec.2.3}}
\begin{definition} [Adjoint actions and Lie derivatives]\label{def.2.16}If $f\in\mathrm{Diff} \left( M\right) $ and $Y\in\Gamma\left( TM\right) ,$ let $\operatorname{Ad}_{f}Y=f_{\ast}Y\circ f^{-1}\in\Gamma\left( TM\right) ,$ i.e. $\operatorname{Ad}_{f}Y$ is the vector field defined by \begin{equation} \left( \operatorname{Ad}_{f}Y\right) \left( m\right) =f_{\ast}Y\left( f^{-1}\left( m\right) \right) \in T_{m}M\text{ }\forall~m\in M. \label{e.2.17} \end{equation} If $X,Y\in\Gamma\left( TM\right) ,$ then the \textbf{Lie derivative }of $Y$ with respect to $X$ is \begin{equation}
L_{X}Y:=\frac{d}{dt}|_{0}\operatorname{Ad}_{e^{-tX}}Y=\frac{d}{dt}|_{0} e_{\ast}^{-tX}Y\circ e^{tX}. \label{e.2.18} \end{equation} We further let \begin{equation}
\operatorname{ad}_{X}:=\frac{d}{dt}|_{0}\operatorname{Ad}_{e^{tX}}=-L_{X}. \label{e.2.19} \end{equation}
\end{definition}
\begin{remark} \label{rem.2.17}The following identities are well known and easy to prove.
\begin{enumerate} \item If $f,g\in\mathrm{Diff}\left( M\right) $ then $\operatorname{Ad} _{f\circ g}=\operatorname{Ad}_{f}\operatorname{Ad}_{g}.$
\item The Lie derivative, $L_{X}Y,$ is again a vector field on $M$ which may also be computed using \[ L_{X}Y=\left[ X,Y\right] =XY-YX. \]
\item If $X,Y\in\Gamma\left( TM\right) $ and $f\in\mathrm{Diff}\left( M\right) ,$ then \begin{equation} \operatorname{Ad}_{f}\left[ X,Y\right] =\left[ \operatorname{Ad} _{f}X,\operatorname{Ad}_{f}Y\right] . \label{e.2.20} \end{equation}
\end{enumerate}
For example, to verify Eq. (\ref{e.2.20}), observe that $\operatorname{Ad} _{f}Y$ is the unique vector field on $M$ such that \begin{equation} f_{\ast}Y=\left( \operatorname{Ad}_{f}Y\right) \circ f, \label{e.2.21} \end{equation} i.e. such that $Y$ and $\operatorname{Ad}_{f}Y$ are \textquotedblleft $f$-related.\textquotedblright\ Since commutators of $f$-related vector fields are again $f$-related, it follows that $\left[ X,Y\right] $ and $\left[ \operatorname{Ad}_{f}X,\operatorname{Ad}_{f}Y\right] $ are $f$-related, i.e. \[ f_{\ast}\left[ X,Y\right] =\left[ \operatorname{Ad}_{f}X,\operatorname{Ad} _{f}Y\right] \circ f\implies\operatorname{Ad}_{f}\left[ X,Y\right] =f_{\ast}\left[ X,Y\right] \circ f^{-1}=\left[ \operatorname{Ad} _{f}X,\operatorname{Ad}_{f}Y\right] . \]
\end{remark}
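As a sanity check of item 2 (chosen purely for illustration), let $M=\mathbb{R},$ $X=\partial_{x},$ and $Y=x\partial_{x}.$ Then $e^{-tX}\left( x\right) =x-t$ and, since translations have identity differential, \[ \left( \operatorname{Ad}_{e^{-tX}}Y\right) \left( x\right) =\left( x+t\right) \partial_{x}, \] so that \[ L_{X}Y=\frac{d}{dt}|_{0}\operatorname{Ad}_{e^{-tX}}Y=\partial_{x}=\left[ \partial_{x},x\partial_{x}\right] =\left[ X,Y\right] . \]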
\begin{proposition} [Adjoint flow equations]\label{pro.2.18}Suppose $Y\in\Gamma\left( TM\right) $ and $\nu_{t}\in\mathrm{Diff}\left( M\right) $ is smoothly varying in $t.$ If we define \[ W_{t}:=\dot{\nu}_{t}\circ\nu_{t}^{-1}\in\Gamma\left( TM\right) \text{ and }\tilde{W}_{t}=\left( \nu_{t}^{-1}\right) _{\ast}\dot{\nu}_{t}\in \Gamma\left( TM\right) , \] then the \textbf{adjoint flows} of $Y$, $\operatorname{Ad}_{\nu_{t}}Y$ and $\operatorname{Ad}_{\nu_{t}^{-1}}Y,$ satisfy \begin{equation} \frac{d}{dt}\operatorname{Ad}_{\nu_{t}}Y=\left[ \operatorname{Ad}_{\nu_{t} }Y,W_{t}\right] =\operatorname{Ad}_{\nu_{t}}\left[ Y,\tilde{W}_{t}\right] \label{e.2.22} \end{equation} and \begin{equation} \frac{d}{dt}\operatorname{Ad}_{\nu_{t}^{-1}}Y=\left[ \tilde{W}_{t} ,\operatorname{Ad}_{\nu_{t}^{-1}}Y\right] =\operatorname{Ad}_{\nu_{t}^{-1} }\left[ W_{t},Y\right] . \label{e.2.23} \end{equation}
\end{proposition}
\begin{proof} Let $Y_{t}:=\operatorname{Ad}_{\nu_{t}}Y$ for $t\in\mathbb{R}$ (as in Eq. (\ref{e.2.21})) so that, \[ Y_{t}\circ\nu_{t}=\left( \operatorname{Ad}_{\nu_{t}}Y\right) \circ\nu _{t}=\nu_{t\ast}Y. \] Hence, if $\varphi\in C^{\infty}\left( M,\mathbb{R}\right) ,$ then \[ \left( Y_{t}\varphi\right) \circ\nu_{t}=\left( \nu_{t\ast}Y\right) \varphi=Y\left( \varphi\circ\nu_{t}\right) \] and differentiating this equation in $t$ gives \[ \left( \dot{Y}_{t}\varphi\right) \circ\nu_{t}+\left( W_{t}Y_{t} \varphi\right) \circ\nu_{t}=Y\left( \left( W_{t}\varphi\right) \circ \nu_{t}\right) =\left( \nu_{t\ast}Y\right) \left( W_{t}\varphi\right) =\left( Y_{t}W_{t}\varphi\right) \circ\nu_{t}. \] The last equation is equivalent to the first equality in Eq. (\ref{e.2.22}). Since $\operatorname{Ad}_{\nu_{t}}\tilde{W}_{t}=W_{t},$ \[ \left[ Y_{t},W_{t}\right] =\left[ \operatorname{Ad}_{\nu_{t}} Y,\operatorname{Ad}_{\nu_{t}}\tilde{W}_{t}\right] =\operatorname{Ad}_{\nu _{t}}\left[ Y,\tilde{W}_{t}\right] \] which gives the second equality in Eq. (\ref{e.2.22}).
As, by Theorem \ref{thm.2.14}, \[ \frac{d}{dt}\nu_{t}^{-1}=-\nu_{t\ast}^{-1}W_{t}=-\nu_{t\ast}^{-1}W_{t}\circ \nu_{t}\circ\nu_{t}^{-1}=-\tilde{W}_{t}\circ\nu_{t}^{-1}, \] it follows from Eq. (\ref{e.2.22}) that \begin{align*} \frac{d}{dt}\left( \operatorname{Ad}_{\nu_{t}^{-1}}Y\right) & =\left[ \operatorname{Ad}_{\nu_{t}^{-1}}Y,-\tilde{W}_{t}\right] =\left[ \tilde {W}_{t},\operatorname{Ad}_{\nu_{t}^{-1}}Y\right] \text{ and}\\ \frac{d}{dt}\left( \operatorname{Ad}_{\nu_{t}^{-1}}Y\right) & =\operatorname{Ad}_{\nu_{t}^{-1}}\left[ Y,-W_{t}\right] =\operatorname{Ad} _{\nu_{t}^{-1}}\left[ W_{t},Y\right] , \end{align*} which proves both equalities in Eq. (\ref{e.2.23}). \end{proof}
\begin{corollary} \label{cor.2.19}If $Y,X\in\Gamma\left( TM\right) $ with $X$ being complete, then \[ \frac{d}{dt}\operatorname{Ad}_{e^{tX}}Y=\operatorname{Ad}_{e^{tX} }\operatorname{ad}_{X}Y=\operatorname{ad}_{X}\operatorname{Ad}_{e^{tX}}Y \] where $\operatorname{ad}_{X}=-L_{X}.$ \end{corollary}
\begin{proof} The result follows directly from Eq. (\ref{e.2.22}) by taking $\nu_{t}=e^{tX}$ and noting that $W_{t}=X=\tilde{W}_{t}$ in this case as $\operatorname{Ad} _{e^{tX}}X=X$ for all $t\in\mathbb{R}.$ \end{proof}
\subsection{Vector field differentiation of flows\label{sec.2.4}}
\begin{definition} [Differentiating $\mu^{X}$ in $X$]\label{def.2.20}Let $X_{t},Y_{t}\in \Gamma\left( TM\right) $ be smoothly varying time dependent vector fields on $M.$ We say $\mu^{X}$ \textbf{is differentiable relative to} $Y$ if there exists $\left\{ X_{t}^{\varepsilon}\right\} _{\varepsilon,t}\subset \Gamma\left( TM\right) $ such that $\left( t,\varepsilon,m\right) \rightarrow X_{t}^{\varepsilon}\left( m\right) \in TM$ is smooth, $X_{\cdot}^{\varepsilon}$ is complete for $\varepsilon$ near $0,$ $X_{t}
^{0}=X_{t},$ and $\frac{d}{d\varepsilon}|_{0}X_{t}^{\varepsilon}=Y_{t}.$ If all of this holds we let \begin{equation}
\partial_{Y}\mu_{t,s}^{X}:=\frac{d}{d\varepsilon}|_{0}\mu_{t,s} ^{X^{\varepsilon}}. \label{e.2.24} \end{equation}
\end{definition}
\begin{theorem} \label{thm.2.21}If $X_{t},Y_{t}\in\Gamma\left( TM\right) $ are as in Definition \ref{def.2.20} so that $\partial_{Y}\mu_{t,s}^{X}=\frac
{d}{d\varepsilon}|_{0}\mu_{t,s}^{X^{\varepsilon}}$ exists, then \begin{align} \partial_{Y}\mu_{t,s}^{X} & =\int_{s}^{t}\mu_{t,\tau\ast}^{X}\left[ Y_{\tau}\circ\mu_{\tau,s}^{X}\right] d\tau\label{e.2.25}\\ & =\left( \int_{s}^{t}\operatorname{Ad}_{\mu_{t,\tau}^{X}}Y_{\tau} d\tau\right) \circ\mu_{t,s}^{X}\label{e.2.26}\\ & =\left( \mu_{t,s}^{X}\right) _{\ast}\int_{s}^{t}\operatorname{Ad} _{\mu_{s,\tau}^{X}}Y_{\tau}d\tau. \label{e.2.27} \end{align}
\end{theorem}
\begin{proof} Let $V_{t,s}:=\left( \mu_{s,t}^{X}\right) _{\ast}\partial_{Y}\mu_{t,s} ^{X}\in\Gamma\left( TM\right) $ so that \begin{equation}
\partial_{Y}\mu_{t,s}^{X}=\frac{d}{d\varepsilon}|_{0}\mu_{t,s}^{X^{\varepsilon }}=\left( \mu_{t,s}^{X}\right) _{\ast}V_{t,s} \label{e.2.28} \end{equation} Notice that Eq. (\ref{e.2.28}) is equivalent to, for all $f\in C^{\infty }\left( M\right) $, \begin{align}
\frac{d}{d\varepsilon}|_{0}\left[ f\circ\mu_{t,s}^{X^{\varepsilon}}\right]
& =df\left( \frac{d}{d\varepsilon}|_{0}\mu_{t,s}^{X^{\varepsilon}}\right) =df\left( \partial_{Y}\mu_{t,s}^{X}\right) \nonumber\\ & =df\left( \left( \mu_{t,s}^{X}\right) _{\ast}V_{t,s}\right) =V_{t,s}\left[ f\circ\mu_{t,s}^{X}\right] . \label{e.2.29} \end{align} So, on one hand, \begin{align}
\frac{d}{d\varepsilon}|_{0}\frac{d}{dt}\left[ f\circ\mu_{t,s}^{X^{\varepsilon
}}\right] & =\frac{d}{dt}\frac{d}{d\varepsilon}|_{0}\left[ f\circ\mu _{t,s}^{X^{\varepsilon}}\right] =\frac{d}{dt}V_{t,s}\left[ f\circ\mu _{t,s}^{X}\right] \nonumber\\ & =\dot{V}_{t,s}\left[ f\circ\mu_{t,s}^{X}\right] +V_{t,s}\left[ X_{t}f\circ\mu_{t,s}^{X}\right] . \label{e.2.30} \end{align} On the other hand, \[ \frac{d}{dt}\left[ f\circ\mu_{t,s}^{X^{\varepsilon}}\right] =\left( X_{t}^{\varepsilon}f\right) \circ\mu_{t,s}^{X^{\varepsilon}} \] and differentiating this equation in $\varepsilon,$ while using Eq. (\ref{e.2.29}) with $f$ replaced by $X_{t}f,$ implies, \begin{align}
\frac{d}{d\varepsilon}|_{0}\frac{d}{dt}\left[ f\circ\mu_{t,s}^{X^{\varepsilon
}}\right] & =Y_{t}f\circ\mu_{t,s}^{X}+\frac{d}{d\varepsilon}|_{0}\left[ \left( X_{t}f\right) \circ\mu_{t,s}^{X^{\varepsilon}}\right] \nonumber\\ & =Y_{t}f\circ\mu_{t,s}^{X}+V_{t,s}\left[ X_{t}f\circ\mu_{t,s}^{X}\right] . \label{e.2.31} \end{align} Comparing Eqs. (\ref{e.2.30}) and (\ref{e.2.31}) shows, \[ \left( \mu_{t,s\ast}^{X}\dot{V}_{t,s}\right) f=\dot{V}_{t,s}\left[ f\circ\mu_{t,s}^{X}\right] =Y_{t}f\circ\mu_{t,s}^{X}=\left( Y_{t}\circ \mu_{t,s}^{X}\right) f\text{ ~}\forall~f\in C^{\infty}\left( M\right) \] which implies, \begin{equation} \dot{V}_{t,s}=\mu_{s,t\ast}^{X}Y_{t}\circ\mu_{t,s}^{X}. \label{e.2.32} \end{equation} Since $\mu_{s,s}^{X}=Id_{M}$ we know that $\partial_{Y}\mu_{s,s}^{X}=0$ and hence $V_{s,s}=0$ and so integrating Eq. (\ref{e.2.32}) implies, \[ V_{t,s}=\int_{s}^{t}\mu_{s,\tau\ast}^{X}Y_{\tau}\circ\mu_{\tau,s}^{X}d\tau =\int_{s}^{t}\operatorname{Ad}_{\mu_{s,\tau}^{X}}Y_{\tau}d\tau. \] This equality along with Eq. (\ref{e.2.28}) proves Eq. (\ref{e.2.27}). The proofs of Eqs. (\ref{e.2.25}) and (\ref{e.2.26}) now easily follow since \begin{align*} \left( \mu_{t,s}^{X}\right) _{\ast}V_{t,s} & =\int_{s}^{t}\left( \mu_{t,s}^{X}\right) _{\ast}\mu_{s,\tau\ast}^{X}Y_{\tau}\circ\mu_{\tau,s} ^{X}d\tau\\ & =\int_{s}^{t}\mu_{t,\tau\ast}^{X}Y_{\tau}\circ\mu_{\tau,s}^{X}d\tau\\ & =\int_{s}^{t}\mu_{t,\tau\ast}^{X}Y_{\tau}\circ\mu_{\tau,t}^{X}\circ \mu_{t,\tau}^{X}\circ\mu_{\tau,s}^{X}d\tau=\left( \int_{s}^{t} \operatorname{Ad}_{\mu_{t,\tau}^{X}}Y_{\tau}d\tau\right) \circ\mu_{t,s}^{X}. \end{align*}
\end{proof}
The following theorem is an important special case of Theorem \ref{thm.2.21}.
\begin{theorem} [Differential of $e^{tX}$ in $X$]\label{thm.2.22}Suppose that $M$ is a smooth manifold, $\left\{ X^{\varepsilon}\right\} _{\varepsilon\in\mathbb{R}}\subset\Gamma\left( TM\right) $ is a smoothly varying one parameter family of complete vector fields on $M,$ and let $X:=X^{0}$ and $Y:=\frac{d}{d\varepsilon}|_{0}X^{\varepsilon}.$ Then \begin{align}
\partial_{Y}e^{tX} & =\frac{d}{d\varepsilon}|_{0}e^{tX^{\varepsilon} }=e_{\ast}^{tX}\int_{0}^{t}e_{\ast}^{-\tau X}Y\circ e^{\tau X}d\tau \label{e.2.33}\\ & =\int_{0}^{t}e_{\ast}^{\left( t-\tau\right) X}Y\circ e^{\tau X} d\tau\label{e.2.34}\\ & =\left[ \int_{0}^{t}\operatorname{Ad}_{e^{\tau X}}Yd\tau\right] \circ e^{tX}. \label{e.2.35} \end{align}
\end{theorem}
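When $M=\mathbb{R}^{n}$ and $X^{\varepsilon}\left( m\right) =\left( A+\varepsilon B\right) m$ for $n\times n$ matrices $A$ and $B$ (a linear special case recorded only for orientation), we have $e^{\tau X}\left( m\right) =e^{\tau A}m$ and $e_{\ast}^{\left( t-\tau\right) X}=e^{\left( t-\tau\right) A},$ and Eq. (\ref{e.2.34}) reduces to the classical Duhamel formula, \[ \frac{d}{d\varepsilon}|_{0}e^{t\left( A+\varepsilon B\right) }=\int_{0} ^{t}e^{\left( t-\tau\right) A}Be^{\tau A}d\tau. \]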
\begin{notation} \label{not.2.23}To each smooth path, $t\rightarrow Z_{t}\in\Gamma\left( TM\right) ,$ of complete vector fields, let \begin{equation} W_{t}^{Z}:=\int_{0}^{1}e_{\ast}^{sZ_{t}}\dot{Z}_{t}\circ e^{-sZ_{t}} ds=\int_{0}^{1}\operatorname{Ad}_{e^{sZ_{t}}}\dot{Z}_{t}~ds \label{e.2.39} \end{equation}
\end{notation}
\begin{corollary} \label{cor.2.24}If $t\rightarrow Z_{t}\in\Gamma\left( TM\right) $ is a smooth path of complete vector fields, then \begin{equation} \frac{d}{dt}e^{Z_{t}}=W_{t}^{Z}\circ e^{Z_{t}}. \label{e.2.40} \end{equation}
\end{corollary}
\begin{proof} Theorem \ref{thm.2.22} applied with $t=1$ to the one parameter family $X^{s}:=Z_{t+s}$ (so that $X^{0}=Z_{t}$ and $\frac{d}{ds}|_{0}X^{s}=\dot{Z}_{t}$) gives \begin{equation} \frac{d}{dt}e^{Z_{t}}=\left[ \int_{0}^{1}\operatorname{Ad}_{e^{\tau X^{0}} }\left( \frac{d}{ds}|_{0}X^{s}\right) d\tau\right] \circ e^{X^{0}}=\left[ \int_{0}^{1} \operatorname{Ad}_{e^{\tau Z_{t}}}\dot{Z}_{t}d\tau\right] \circ e^{Z_{t} }.\nonumber \end{equation}
\end{proof}
\subsection{Jacobian formulas and estimates for flows\label{sec.2.5}}
\begin{notation} \label{not.2.25}Let $\mathbb{R}\ni t\rightarrow W_{t}\in\Gamma\left( TM\right) $ be a smoothly varying time dependent vector field and suppose that $\mathbb{R}\times M\ni\left( t,m\right) \rightarrow\nu_{t}\left( m\right) \in M$ is in $C^{\infty}\left( \mathbb{R}\times M,M\right) $ and satisfies the ordinary differential equation, \begin{equation} \dot{\nu}_{t}=W_{t}\circ\nu_{t}. \label{e.2.43} \end{equation}
\end{notation}
Notice that if $W_{\left( \cdot\right) }$ is complete, then $\nu_{t} =\mu_{t,s}^{W}\circ\nu_{s}$ for any $s\in\mathbb{R}.$ The general goal of this section is to find estimates on $\nu_{t},$ $\nu_{t\ast},$ and $\nabla \nu_{t\ast}$ (see Definition \ref{def.5.26} below) expressed in terms of the geometry of $M$ and $W_{t}.$ The next key proposition records the ordinary differential equation satisfied by $\nu_{t\ast}.$
\begin{proposition} \label{pro.2.26}If $W_{t}\in\Gamma\left( TM\right) $ and $\nu_{t}\in C^{\infty}\left( M,M\right) $ are as in Notation \ref{not.2.25}, then \begin{equation} \frac{\nabla}{dt}\nu_{t\ast}v=\nabla_{\nu_{t\ast}v}W_{t}\text{ }\forall~v\in TM. \label{e.2.44} \end{equation}
\end{proposition}
\begin{proof} If $\sigma\left( s\right) $ is a curve in $M$ so that $\sigma^{\prime
}\left( 0\right) =\frac{d}{ds}|_{0}\sigma\left( s\right) =v,$ then \begin{align*}
\frac{\nabla}{dt}\nu_{t\ast}v & =\frac{\nabla}{dt}\frac{d}{ds}|_{0}\nu _{t}\left( \sigma\left( s\right) \right) =\frac{\nabla}{ds}|_{0}\frac {d}{dt}\nu_{t}\left( \sigma\left( s\right) \right) \\
& =\frac{\nabla}{ds}|_{0}W_{t}\left( \nu_{t}\left( \sigma\left( s\right) \right) \right) =\nabla_{\nu_{t\ast}v}W_{t}, \end{align*} wherein the interchange of $\frac{\nabla}{dt}$ and $\frac{\nabla}{ds}$ in the first line is justified by the symmetry lemma, i.e. by the fact that $\nabla$ is torsion free.
\end{proof}
\begin{corollary} \label{cor.2.27}If $W_{t}\in\Gamma\left( TM\right) $ and $\nu_{t}\in C^{\infty}\left( M,M\right) $ are as in Notation \ref{not.2.25}, then \begin{equation} \left\vert \nu_{t\ast}\right\vert _{m}\leq\left\vert \nu_{s\ast}\right\vert _{m}\cdot e^{\int_{J\left( s,t\right) }\left\vert \nabla W_{\tau}\right\vert _{\nu_{\tau}\left( m\right) }d\tau}\leq\left\vert \nu_{s\ast}\right\vert _{m}\cdot e^{\left\vert \nabla W_{\cdot}\right\vert _{J}^{\ast}} \label{e.2.45} \end{equation} and in particular, \begin{equation} \operatorname{Lip}\left( \nu_{t}\right) =\left\vert \nu_{t\ast}\right\vert _{M}\leq\left\vert \nu_{s\ast}\right\vert _{M}\cdot e^{\left\vert \nabla W_{\cdot}\right\vert _{J\left( s,t\right) }^{\ast}}. \label{e.2.46} \end{equation} We also have the following time derivative estimates, \begin{equation} \left\vert \frac{\nabla}{dt}\nu_{t\ast}\right\vert _{m}\leq\left\vert \nu_{s\ast}\right\vert _{m}\left\vert \nabla W_{t}\right\vert _{\nu_{t}\left( m\right) }\cdot e^{\int_{J\left( s,t\right) }\left\vert \nabla W_{\tau }\right\vert _{\nu_{\tau}\left( m\right) }d\tau} \label{e.2.47} \end{equation} and \begin{equation} \left\vert \frac{\nabla}{dt}\nu_{t\ast}\right\vert _{M}\leq\left\vert \nu_{s\ast}\right\vert _{M}\left\vert \nabla W_{t}\right\vert _{M}\cdot e^{\left\vert \nabla W_{\cdot}\right\vert _{J}^{\ast}}. \label{e.2.48} \end{equation}
\end{corollary}
\begin{proof} If we define $H_{t}v:=\nabla_{v}W_{t}$ for all $v\in TM,$ then Proposition \ref{pro.2.26} states, for any $v\in TM,$ that \begin{equation} \frac{\nabla}{dt}\nu_{t\ast}v=H_{t}\nu_{t\ast}v. \label{e.2.49} \end{equation} Therefore by the geometric Bellman-Gronwall's inequality in Corollary \ref{cor.9.3} (with $G\equiv0)$, \[ \left\vert \nu_{t\ast}v\right\vert \leq e^{\int_{J\left( s,t\right) }\left\Vert H_{\tau}\right\Vert _{op}d\tau}\left\vert \nu_{s\ast}v\right\vert =e^{\int_{J\left( s,t\right) }\left\vert \nabla W_{\tau}\right\vert _{\nu_{\tau}\left( m\right) }d\tau}\left\vert \nu_{s\ast}v\right\vert \] which proves Eqs. (\ref{e.2.45}) and (\ref{e.2.46}). By Eq. (\ref{e.2.49}), \begin{align*} \left\vert \frac{\nabla}{dt}\nu_{t\ast}v\right\vert & \leq\left\Vert H_{t}\right\Vert _{op}\cdot\left\vert \nu_{t\ast}v\right\vert =\left\vert \nabla W_{t}\right\vert _{\nu_{t}\left( m\right) }\cdot\left\vert \nu _{t\ast}v\right\vert \\ & \leq\left\vert \nabla W_{t}\right\vert _{\nu_{t}\left( m\right) }\left\vert \nu_{s\ast}v\right\vert \cdot e^{\int_{J\left( s,t\right) }\left\vert \nabla W_{\tau}\right\vert _{\nu_{\tau}\left( m\right) }d\tau }\leq\left\vert \nabla W_{t}\right\vert _{\nu_{t}\left( m\right) }\left\vert \nu_{s\ast}v\right\vert \cdot e^{\left\vert \nabla W_{\cdot}\right\vert _{J\left( s,t\right) }^{\ast}}. \end{align*} Taking the supremum of this inequality over $v\in T_{m}M$ with $\left\vert v\right\vert =1$ gives the estimate in Eq. (\ref{e.2.47}) which then easily implies Eq. (\ref{e.2.48}). \end{proof}
The following corollary records the results in Proposition \ref{pro.2.26} and Corollary \ref{cor.2.27} when $W_{t}=X\in\Gamma\left( TM\right) $ is a complete vector field and $\nu_{t}=e^{tX}.$
\begin{corollary} \label{cor.2.28}If $X\in\Gamma\left( TM\right) $ is a complete vector field and $t\in\mathbb{R}$, then \begin{equation} \frac{\nabla}{dt}e_{\ast}^{tX}v_{m}=\nabla_{e_{\ast}^{tX}v_{m}}X\text{ with }e_{\ast}^{0X}v_{m}=v_{m}, \label{e.2.52} \end{equation} \begin{align*} \left\vert e_{\ast}^{tX}\right\vert _{m} & \leq e^{\int_{J\left( 0,t\right) }\left\vert \nabla X\right\vert _{e^{\tau X}\left( m\right) }d\tau}\leq e^{\left\vert t\right\vert \left\vert \nabla X\right\vert _{M}},\\ \left\vert \frac{\nabla}{dt}e_{\ast}^{tX}\right\vert _{m} & \leq\left\vert \nabla X\right\vert _{e^{tX}\left( m\right) }\cdot e^{\int_{J\left( 0,t\right) }\left\vert \nabla X\right\vert _{e^{\tau X}\left( m\right) }d\tau},\\ \operatorname{Lip}\left( e^{tX}\right) & =\left\vert e_{\ast} ^{tX}\right\vert _{M}\leq e^{\left\vert t\right\vert \left\vert \nabla X\right\vert _{M}},\text{ and}\\ \left\vert \frac{\nabla}{dt}e_{\ast}^{tX}\right\vert _{M} & \leq\left\vert \nabla X\right\vert _{M}\cdot e^{\left\vert t\right\vert \left\vert \nabla X\right\vert _{M}}. \end{align*}
\end{corollary}
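For example (again purely for illustration), if $M=\mathbb{R}^{n}$ with its Euclidean metric and $X\left( m\right) =Am$ for an $n\times n$ matrix $A,$ then $e_{\ast}^{tX}=e^{tA},$ $\left\vert \nabla X\right\vert _{M}=\left\Vert A\right\Vert _{op},$ and the Lipschitz bound above reduces to the elementary estimate, \[ \operatorname{Lip}\left( e^{tX}\right) =\left\Vert e^{tA}\right\Vert _{op}\leq e^{\left\vert t\right\vert \left\Vert A\right\Vert _{op}}. \]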
\begin{corollary} \label{cor.2.29}If $\mathbb{R}\ni t\rightarrow W_{t}\in\Gamma\left( TM\right) $ is a complete time dependent vector field and $Z\in\Gamma\left( TM\right) ,$ then \begin{align} \left\vert Ad_{\mu_{t,s}^{W}}Z\right\vert _{m} & \leq e^{\int_{J\left( s,t\right) }\left\vert \nabla W_{\tau}\right\vert _{\mu_{\tau,t}^{W}\left( m\right) }d\tau}\cdot\left\vert Z\right\vert _{\mu_{s,t}^{W}\left( m\right) }\nonumber\\ & \leq e^{\int_{J\left( s,t\right) }\left\vert \nabla W_{\tau}\right\vert _{M}d\tau}\cdot\left\vert Z\right\vert _{M}. \label{e.2.53} \end{align} As a special case, if $X\in\Gamma\left( TM\right) $ is complete, then \begin{equation} \left\vert Ad_{e^{X}}Z\right\vert _{m}\leq e^{\int_{0}^{1}\left\vert \nabla X\right\vert _{e^{-\tau X}\left( m\right) }d\tau}\cdot\left\vert Z\right\vert _{e^{-X}\left( m\right) }\leq e^{\left\vert \nabla X\right\vert _{M}}\cdot\left\vert Z\right\vert _{M}. \label{e.2.54} \end{equation}
\end{corollary}
\begin{proof} Let $\nu_{t}:=\mu_{t,s}^{W},$ then $\nu_{t}^{-1}=\mu_{s,t}^{W},$ $\nu _{s}=Id_{M},$ and \begin{align*} \left\vert \left( Ad_{\mu_{t,s}^{W}}Z\right) \left( m\right) \right\vert = & \left\vert \left( \nu_{t}\right) _{\ast}Z\left( \nu_{t}^{-1}\left( m\right) \right) \right\vert \leq\left\vert \left( \nu_{t}\right) _{\ast }\right\vert _{\nu_{t}^{-1}\left( m\right) }\cdot\left\vert Z\right\vert _{\nu_{t}^{-1}\left( m\right) }\\ \leq & \left\vert \nu_{s\ast}\right\vert _{\nu_{t}^{-1}\left( m\right) }\cdot e^{\int_{J\left( s,t\right) }\left\vert \nabla W_{\tau}\right\vert _{\nu_{\tau}\left( \nu_{t}^{-1}\left( m\right) \right) }d\tau} \cdot\left\vert Z\right\vert _{\nu_{t}^{-1}\left( m\right) }\\ & =e^{\int_{J\left( s,t\right) }\left\vert \nabla W_{\tau}\right\vert _{\mu_{\tau,t}^{W}\left( m\right) }d\tau}\cdot\left\vert Z\right\vert _{\mu_{s,t}^{W}\left( m\right) }. \end{align*} For the second assertion we take Eq. (\ref{e.2.53}) with $s=0,$ $t=1,$ and $W_{t}=X$ for all $t$ to find, \[ \left\vert Ad_{e^{X}}Z\right\vert _{m}\leq e^{\int_{0}^{1}\left\vert \nabla X\right\vert _{e^{\left( \tau-1\right) X}\left( m\right) }d\tau} \cdot\left\vert Z\right\vert _{e^{-X}\left( m\right) }=e^{\int_{0} ^{1}\left\vert \nabla X\right\vert _{e^{-\tau X}\left( m\right) }d\tau} \cdot\left\vert Z\right\vert _{e^{-X}\left( m\right) } \]
\end{proof}
\subsection{Distance estimates for flows\label{sec.2.6}}
This subsection is devoted to estimating the distance between two flows, $\mu^{X}$ and $\mu^{Y}.$ A key observation in the proofs to follow is, given $t\in\left[ 0,T\right] ,$ that \begin{equation} \left[ 0,t\right] \ni s\rightarrow\Theta_{s}\left( m\right) :=\mu _{t,s}^{X}\circ\mu_{s,0}^{Y}\left( m\right) \label{e.2.55} \end{equation} is a natural path in $M$ which interpolates between $\Theta_{0}\left( m\right) =\mu_{t,0}^{X}\left( m\right) $ at $s=0$ and $\Theta_{t}\left( m\right) =\mu_{t,0}^{Y}\left( m\right) $ at $s=t.$
\begin{theorem} \label{thm.2.30}Let $J=\left[ 0,T\right] \ni t\rightarrow X_{t},Y_{t} \in\Gamma\left( TM\right) $ be two smooth complete time dependent vector fields on $M$ and $\mu^{X}$ and $\mu^{Y}$ be their corresponding flows. Then for $t>0$ (for notational simplicity) \begin{equation} d\left( \mu_{t,0}^{X}\left( m\right) ,\mu_{t,0}^{Y}\left( m\right) \right) \leq\int_{0}^{t}e^{\int_{s}^{t}\left\vert \nabla X_{\sigma }\right\vert _{\mu_{\sigma,s}^{X}\left( m\right) }d\sigma}\cdot\left\vert Y_{s}-X_{s}\right\vert _{\mu_{s,0}^{Y}\left( m\right) }~ds \label{e.2.56} \end{equation} and in particular, \begin{equation} d_{M}\left( \mu_{t,0}^{X},\mu_{t,0}^{Y}\right) \leq e^{\left\vert \nabla X\right\vert _{t}^{\ast}}\cdot\left\vert Y-X\right\vert _{t}^{\ast}. \label{e.2.57} \end{equation}
\end{theorem}
\begin{proof} Fix $t\in\left[ 0,T\right] .$ If $\Theta_{s}\left( m\right) $ is as in Eq. (\ref{e.2.55}), then \begin{equation} d\left( \mu_{t,0}^{X}\left( m\right) ,\mu_{t,0}^{Y}\left( m\right) \right) \leq\int_{0}^{t}\left\vert \Theta_{s}^{\prime}\left( m\right) \right\vert ds. \label{e.2.58} \end{equation} Making use of Theorem \ref{thm.2.14} we find, \begin{align} \Theta_{s}^{\prime}\left( m\right) & =\left( \frac{d}{ds}\mu_{t,s} ^{X}\right) \circ\mu_{s,0}^{Y}\left( m\right) +\left( \mu_{t,s} ^{X}\right) _{\ast}\left( \frac{d}{ds}\mu_{s,0}^{Y}\left( m\right) \right) \nonumber\\ & =\left( \mu_{t,s}^{X}\right) _{\ast}\left[ -X_{s}+Y_{s}\right] \circ \mu_{s,0}^{Y}\left( m\right) . \label{e.2.59} \end{align} The Jacobian estimate in Corollary \ref{cor.2.27} with $\nu_{t}=\mu_{t,s}^{X}$ states that, \begin{equation} \left\vert \mu_{t,s\ast}^{X}\right\vert _{m}\leq e^{\int_{s}^{t}\left\vert \nabla X_{\sigma}\right\vert _{\mu_{\sigma,s}^{X}\left( m\right) }d\sigma }\leq e^{\int_{s}^{t}\left\vert \nabla X_{\sigma}\right\vert _{M}d\sigma}. \label{e.2.60} \end{equation} By this Jacobian estimate and Eq. (\ref{e.2.59}), we find that \begin{align} \left\vert \Theta_{s}^{\prime}\left( m\right) \right\vert & \leq\left\vert \left( \mu_{t,s}^{X}\right) _{\ast}\right\vert _{\mu_{s,0}^{Y}\left( m\right) }\left\vert Y_{s}-X_{s}\right\vert _{\mu_{s,0}^{Y}\left( m\right) }\nonumber\\ & \leq e^{\int_{s}^{t}\left\vert \nabla X_{\sigma}\right\vert _{\mu _{\sigma,s}^{X}\left( m\right) }d\sigma}\cdot\left\vert Y_{s}-X_{s} \right\vert _{\mu_{s,0}^{Y}\left( m\right) }\leq e^{\left\vert \nabla X\right\vert _{t}^{\ast}}\left\vert Y_{s}-X_{s}\right\vert _{M} \label{e.2.61} \end{align} which then substituted back into Eq. (\ref{e.2.58}) completes the proof. \end{proof}
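To see Eq. (\ref{e.2.56}) in the simplest setting (a check included only for illustration), let $M=\mathbb{R}^{n},$ $X_{t}\left( m\right) =Am,$ and $Y_{t}\left( m\right) =Bm$ for $n\times n$ matrices $A$ and $B.$ Integrating the identity $\frac{d}{ds}\left[ e^{\left( t-s\right) A}e^{sB}\right] =e^{\left( t-s\right) A}\left( B-A\right) e^{sB}$ from $s=0$ to $s=t$ gives \[ e^{tB}m-e^{tA}m=\int_{0}^{t}e^{\left( t-s\right) A}\left( B-A\right) e^{sB}m~ds \] and hence $\left\vert e^{tA}m-e^{tB}m\right\vert \leq\int_{0}^{t}e^{\left( t-s\right) \left\Vert A\right\Vert _{op}}\left\vert \left( B-A\right) e^{sB}m\right\vert ds,$ which is precisely Eq. (\ref{e.2.56}) in this case.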
\begin{corollary} \label{cor.2.31}Let $J=\left[ 0,T\right] \ni t\rightarrow Y_{t}\in \Gamma\left( TM\right) $ be a smooth complete time dependent vector field on $M$ and $\mu^{Y}$ be the corresponding flow. Then for $t>0$ \begin{equation} d\left( m,\mu_{t,0}^{Y}\left( m\right) \right) \leq\int_{0}^{t}\left\vert Y_{s}\right\vert _{\mu_{s,0}^{Y}\left( m\right) }~ds\leq\left\vert Y\right\vert _{t}^{\ast} \label{e.2.62} \end{equation} and \begin{equation} d\left( \mu_{t,0}^{Y}\left( m\right) ,m\right) \leq\int_{0}^{t}e^{\int _{s}^{t}\left\vert \nabla Y_{\sigma}\right\vert _{\mu_{\sigma,s}^{Y}\left( m\right) }d\sigma}\cdot\left\vert Y_{s}\right\vert _{m}~ds\leq e^{\left\vert \nabla Y\right\vert _{t}^{\ast}}\int_{0}^{t}\left\vert Y_{s}\left( m\right) \right\vert ds. \label{e.2.63} \end{equation}
\end{corollary}
\begin{proof} This corollary easily follows from Theorem \ref{thm.2.11}. Alternatively, taking $X_{\cdot}\equiv0$ in Theorem \ref{thm.2.30} gives Eq. (\ref{e.2.62}) while taking $Y_{\cdot}\equiv0$ shows \[ d\left( \mu_{t,0}^{X}\left( m\right) ,m\right) \leq\int_{0}^{t}e^{\int _{s}^{t}\left\vert \nabla X_{\sigma}\right\vert _{\mu_{\sigma,s}^{X}\left( m\right) }d\sigma}\cdot\left\vert X_{s}\right\vert _{m}~ds\leq e^{\left\vert \nabla X\right\vert _{t}^{\ast}}\int_{0}^{t}\left\vert X_{s}\left( m\right) \right\vert ds. \] Equation (\ref{e.2.63}) now follows by relabeling $X$ to $Y.$ \end{proof}
\section{Nilpotent Lie Algebras (Group) Results\label{sec.3}}
Suppose that $\mathcal{A}$ is a non-commutative associative algebra with unit, $1,$ over $\mathbb{R}$ such that: 1) $\dim_{\mathbb{R}}\mathcal{A}<\infty,$ 2) $\mathcal{A}=\mathbb{R}\cdot1\oplus\mathfrak{g}$ where $\mathfrak{g}$ is a sub-algebra of $\mathcal{A}$ without unit, and 3) there exists $\kappa \in\mathbb{N}$ such that $\xi_{1}\dots\xi_{\kappa+1}=0$ whenever $\xi _{1},\dots,\xi_{\kappa+1}\in\mathfrak{g}.$ We make $\mathcal{A}$ into a Lie algebra using the commutator, $\left[ \xi,\eta\right] :=\xi\eta-\eta\xi$ for all $\xi,\eta\in\mathcal{A},$ as the Lie bracket. Note that $\mathfrak{g}$ is a Lie-subalgebra of $\mathcal{A}$ and as usual we let $\operatorname{ad}_{\xi }:\mathcal{A}\rightarrow\mathcal{A}$ be the linear operator defined by $\operatorname{ad}_{\xi}\eta=\left[ \xi,\eta\right] .$ See Section \ref{sec.3.2} below for the key example of this setup that is used in the bulk of this paper.
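A concrete finite dimensional example to keep in mind (offered purely for illustration) is $\mathcal{A}=\mathbb{R}\cdot I\oplus\mathfrak{n}$ where $\mathfrak{n}$ is the algebra of strictly upper triangular $\left( \kappa+1\right) \times\left( \kappa+1\right) $ real matrices, for which the product of any $\kappa+1$ elements of $\mathfrak{n}$ automatically vanishes. For instance, when $\kappa=2,$ \[ \xi=\left( \begin{array} [c]{ccc} 0 & a & b\\ 0 & 0 & c\\ 0 & 0 & 0 \end{array} \right) \implies\xi^{2}=\left( \begin{array} [c]{ccc} 0 & 0 & ac\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{array} \right) \text{ and }\xi^{3}=0. \]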
\subsection{Calculus and functional calculus on $\mathcal{A}$\label{sec.3.1}}
\begin{definition} \label{def.3.1}Let $\mathcal{H}_{0}$ denote the germs of functions which are analytic in a neighborhood of $0\in\mathbb{C}$ and for $f\left( z\right) =\sum_{k=0}^{\infty}a_{k}z^{k}\in\mathcal{H}_{0}$ and $\xi\in\mathfrak{g},$ let \[ f\left( \xi\right) :=\sum_{k=0}^{\infty}a_{k}\xi^{k}=\sum_{k=0}^{\kappa }a_{k}\xi^{k}\in\mathcal{A} \] and \[ f\left( \operatorname{ad}_{\xi}\right) :=\sum_{k=0}^{\infty}a_{k} \operatorname{ad}_{\xi}^{k}=\sum_{k=0}^{\kappa-1}a_{k}\operatorname{ad}_{\xi }^{k}:\mathcal{A}\rightarrow\mathcal{A}. \]
\end{definition}
In most of the results below, we describe properties of $f\left( \xi\right) $ for $f\in\mathcal{H}_{0}$ with the understanding that similar results hold equally well for $f\left( \operatorname{ad}_{\xi}\right) .$
\begin{proposition} \label{pro.3.2}For each fixed $\xi\in\mathfrak{g},$ the map \begin{equation} \mathcal{H}_{0}\ni f\rightarrow f\left( \xi\right) \in\mathcal{A} \label{e.3.1} \end{equation} is an algebra homomorphism and for each fixed $f\in\mathcal{H}_{0},$ the map \[ \mathfrak{g}\ni\xi\rightarrow f\left( \xi\right) \in\mathcal{A} \] is smooth, i.e. it is infinitely differentiable. Moreover, if $J:=\left( a,b\right) \ni t\rightarrow\xi\left( t\right) \in\mathfrak{g}$ is differentiable with $\left[ \xi\left( t\right) ,\xi\left( s\right) \right] =0$ for $s,t\in J,$ then \begin{equation} \frac{d}{dt}f\left( \xi\left( t\right) \right) =f^{\prime}\left( \xi\left( t\right) \right) \dot{\xi}\left( t\right) =\dot{\xi}\left( t\right) f^{\prime}\left( \xi\left( t\right) \right) \text{~} \forall~\text{ }t\in J. \label{e.3.2} \end{equation}
\end{proposition}
\begin{proof} The standard fact that the map in Eq. (\ref{e.3.1}) is an algebra homomorphism is easily seen to be a direct consequence of the multiplication rules for power series. The smoothness of $f:\mathfrak{g}\rightarrow\mathcal{A}$ is a consequence of the fact that $f\left( \xi\right) $ is a finite linear combination of the smooth multi-linear maps, $\mathfrak{g\ni\xi}\rightarrow \xi^{k}\in\mathcal{A},$ for each $k\in\left\{ 0,1,2,\dots,\kappa\right\} .$ For arbitrary $\xi,\eta\in\mathfrak{g}$ we have \[ \partial_{\eta}\xi^{k}=\sum_{j=0}^{k-1}\xi^{j}\eta\xi^{k-1-j} \] which simplifies to \[ \partial_{\eta}\xi^{k}=k\eta\xi^{k-1}=k\xi^{k-1}\eta\text{ when }\left[ \xi,\eta\right] =0. \]
With these observations the proof of Eq. (\ref{e.3.2}) is a consequence of the following simple computation, \begin{align*} \frac{d}{dt}f\left( \xi\left( t\right) \right) & =\sum_{k=0}^{\kappa }a_{k}\frac{d}{dt}\xi\left( t\right) ^{k}=\sum_{k=0}^{\kappa}a_{k}k\dot{\xi }\left( t\right) \xi\left( t\right) ^{k-1}\\ & =\sum_{k=0}^{\infty}a_{k}k\dot{\xi}\left( t\right) \xi\left( t\right) ^{k-1}=f^{\prime}\left( \xi\left( t\right) \right) \dot{\xi}\left( t\right) =\dot{\xi}\left( t\right) f^{\prime}\left( \xi\left( t\right) \right) . \end{align*}
\end{proof}
For our purposes, the functions, $e^{z},$ $\left( 1+z\right) ^{-1} ,\log\left( 1+z\right) ,$ \begin{align} \psi\left( z\right) & :=\frac{e^{z}-1}{z}=\sum_{k=1}^{\infty}\frac {z^{k-1}}{k!}=\sum_{k=0}^{\infty}\frac{z^{k}}{\left( k+1\right) !},\label{e.3.3}\\ \psi_{-}\left( z\right) & :=\frac{1}{\psi\left( -z\right) }=\frac {z}{1-e^{-z}}=\frac{e^{z}\cdot z}{e^{z}-1},\text{ and}\label{e.3.4}\\ \mathcal{L}\left( z\right) & =\psi_{-}\left( \log\left( 1+z\right) \right) :=\frac{\left( 1+z\right) \log\left( 1+z\right) }{z} \label{e.3.5}\\ & =1+\sum_{j=2}^{\infty}\frac{\left( -1\right) ^{j}}{j\cdot\left( j-1\right) }z^{j-1} \label{e.3.6} \end{align} are the most important functions in $\mathcal{H}_{0}.$
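For orientation we record the low order expansions (easily checked from the definitions above), \[ \psi\left( z\right) =1+\frac{z}{2}+\frac{z^{2}}{6}+\dots,\text{\quad} \psi_{-}\left( z\right) =1+\frac{z}{2}+\frac{z^{2}}{12}+\dots,\text{\quad and\quad}\mathcal{L}\left( z\right) =1+\frac{z}{2}-\frac{z^{2}}{6}+\dots, \] which are often all that is needed when $\kappa$ is small.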
\begin{lemma} \label{lem.3.3}The subset, \[ G:=1+\mathfrak{g}=\left\{ 1+\xi:\xi\in\mathfrak{g}\right\} , \] equipped with the algebra multiplication law forms a group where the inverse operation is given by \[ \left( 1+\xi\right) ^{-1}=\frac{1}{1+\xi}=\sum_{k=0}^{\kappa}\left( -\xi\right) ^{k}, \] the geometric series terminating because of the nilpotency assumption on $\mathfrak{g}.$
\end{lemma}
The following corollary follows directly from Proposition \ref{pro.3.2}.
\begin{corollary} \label{cor.3.4}The three maps, $\mathfrak{g\ni\xi}\rightarrow e^{\xi}\in G,$ $\mathfrak{g}\ni\xi\rightarrow\left( 1+\xi\right) ^{-1}\in G,$ and $\mathfrak{g}\ni\xi\rightarrow\log\left( 1+\xi\right) \in\mathfrak{g}$ are smooth and these maps satisfy the following natural identities.
\begin{enumerate} \item For all $\xi\in\mathfrak{g},$ \begin{align*} \frac{d}{dt}e^{t\xi} & =\xi e^{t\xi},~\text{and}\\ \frac{d}{dt}\log\left( 1+t\xi\right) & =\xi\frac{1}{1+t\xi}=\frac{\xi }{1+t\xi}. \end{align*} More generally, if $t\rightarrow\xi\left( t\right) \in\mathfrak{g}$ is differentiable near $t_{0}\in\mathbb{R}$ and $\left[ \xi\left( t\right) ,\xi\left( s\right) \right] =0$ for $s$ and $t$ near $t_{0},$ then \begin{align*} \frac{d}{dt}e^{\xi\left( t\right) } & =\dot{\xi}\left( t\right) e^{\xi\left( t\right) }\text{ and }\\ \frac{d}{dt}\log\left( 1+\xi\left( t\right) \right) & =\frac{\dot{\xi }\left( t\right) }{1+\xi\left( t\right) }=\dot{\xi}\left( t\right) \left( 1+\xi\left( t\right) \right) ^{-1}. \end{align*}
\item For all $s,t\in\mathbb{R},$ $e^{t\xi}e^{s\xi}=e^{\left( t+s\right) \xi}$ and $e^{-t\xi}=\left[ e^{t\xi}\right] ^{-1}.$ \end{enumerate} \end{corollary}
\begin{proposition} \label{pro.3.5}The map, \[ \mathfrak{g\ni\xi}\rightarrow e^{\xi}\in G, \] is a diffeomorphism and the map, \[ G\ni g=1+\xi\rightarrow\log\left( g\right) =\log\left( 1+\xi\right) \in\mathfrak{g,} \] is its inverse map. \end{proposition}
\begin{proof} By Corollary \ref{cor.3.4}, \[ \frac{d}{dt}\log\left( e^{t\xi}\right) =\left[ e^{t\xi}\right] ^{-1} \frac{d}{dt}e^{t\xi}=e^{-t\xi}e^{t\xi}\xi=\xi, \] from which it follows that $\log\left( e^{t\xi}\right) =\log\left( 1\right) +t\xi=t\xi.$ Taking $t=1$ in this identity shows $\log\left( e^{\xi}\right) =\xi.$
Similarly, by Corollary \ref{cor.3.4}, if we let $g\left( t\right) :=e^{\log\left( 1+t\xi\right) }\in G,$ then \[ \dot{g}\left( t\right) =\frac{d}{dt}e^{\log\left( 1+t\xi\right) } =e^{\log\left( 1+t\xi\right) }\cdot\frac{d}{dt}\log\left( 1+t\xi\right) =g\left( t\right) \frac{\xi}{1+t\xi}\text{ with }g\left( 0\right) =1. \] Since $t\rightarrow\left( 1+t\xi\right) $ satisfies the same equation as $g\left( t\right) ,$ by uniqueness of solutions we conclude that $g\left( t\right) =1+t\xi$ for all $t\in\mathbb{R}$ and in particular taking $t=1$ shows \[ e^{\log\left( 1+\xi\right) }=g\left( 1\right) =1+\xi. \]
\end{proof}
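For a concrete check of Proposition \ref{pro.3.5} (with $\kappa=2$ purely for illustration), note that $e^{\xi}=1+\xi+\frac{1}{2}\xi^{2}$ and $\log\left( 1+\eta\right) =\eta-\frac{1}{2}\eta^{2}$ in this case, so that, using $\xi^{3}=0,$ \[ \log\left( e^{\xi}\right) =\left( \xi+\tfrac{1}{2}\xi^{2}\right) -\tfrac{1}{2}\left( \xi+\tfrac{1}{2}\xi^{2}\right) ^{2}=\xi+\tfrac{1}{2} \xi^{2}-\tfrac{1}{2}\xi^{2}=\xi. \]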
For $k\in G,$ let $L_{k}\in\mathrm{Diff}\left( G\right) $ be defined by $L_{k}g=kg$ for all $g\in G.$ For $\xi\in\mathfrak{g},$ let $G\ni g\rightarrow\tilde{\xi}\left( g\right) :=L_{g\ast}\xi\in T_{g}G$ be the left invariant vector field on $G$ associated to $\xi\in\mathfrak{g.}$ If $f:G=1+\mathfrak{g}\cong\mathfrak{g}\rightarrow\mathbb{R}$ is a smooth function, then \[
\left( \tilde{\xi}f\right) \left( g\right) =\frac{d}{dt}|_{0}f\left( ge^{t\xi}\right) =\left( \partial_{g\xi}f\right) \left( g\right) =f^{\prime}\left( g\right) g\xi. \] Thus if $\xi,\eta\in\mathfrak{g},$ \[ \left( \tilde{\eta}\tilde{\xi}f\right) \left( g\right) =f^{\prime\prime }\left( g\right) \left[ g\eta\otimes g\xi\right] +f^{\prime}\left( g\right) g\eta\xi \] and since $f^{\prime\prime}\left( g\right) $ is symmetric, \[ \left( \left[ \tilde{\eta},\tilde{\xi}\right] f\right) \left( g\right) =f^{\prime}\left( g\right) g\left( \eta\xi-\xi\eta\right) =\left( \left( \eta\xi-\xi\eta\right) ^{\sim}f\right) \left( g\right) . \] Therefore the standard left invariant vector-field Lie algebra associated to $G$ has bracket, \[ \left[ \eta,\xi\right] =\left[ \tilde{\eta},\tilde{\xi}\right] \left( 1\right) =\eta\xi-\xi\eta \] which is the same as the Lie algebra associated to the algebra multiplication law.
\begin{definition} \label{def.3.6}For $\xi\in C^{1}\left( \left[ 0,T\right] ,\mathfrak{g} \right) ,$ let $g^{\xi}\left( t\right) \in G$ denote the unique solution to the linear differential equation, \begin{equation} \dot{g}^{\xi}\left( t\right) =g^{\xi}\left( t\right) \dot{\xi}\left( t\right) =\widetilde{\dot{\xi}\left( t\right) }\left( g^{\xi}\left( t\right) \right) \text{ with }g^{\xi}\left( 0\right) =1\in G \label{e.3.7} \end{equation} and further let \begin{equation} C^{\xi}\left( t\right) =\log\left( g^{\xi}\left( t\right) \right) . \label{e.3.8} \end{equation}
\end{definition}
\begin{remark} \label{rem.3.7}If $\xi\in C^{1}\left( \left[ 0,T\right] ,\mathfrak{g} \right) ,$ then $t\rightarrow\widetilde{\dot{\xi}\left( t\right) }\in \Gamma\left( TG\right) $ is a $C^{0}$-varying vector field on $G$ with associated flow, $\mu_{t,s}^{\widetilde{\dot{\xi}}},$ which satisfies $L_{k}\circ\mu_{t,s}^{\widetilde{\dot{\xi}}}=\mu_{t,s}^{\widetilde{\dot{\xi}} }\circ L_{k}.$ Applying this equation to $1\in G$ shows \[ \mu_{t,s}^{\widetilde{\dot{\xi}}}\left( k\right) =k\cdot\mu_{t,s} ^{\widetilde{\dot{\xi}}}\left( 1\right) \in G\text{ for all }k\in G \] where $\mu_{t,s}^{\widetilde{\dot{\xi}}}\left( 1\right) \in G$ satisfies the ODE, \[ \frac{d}{dt}\mu_{t,s}^{\widetilde{\dot{\xi}}}\left( 1\right) =\widetilde{\dot{\xi}\left( t\right) }\circ\mu_{t,s}^{\widetilde{\dot{\xi}} }\left( 1\right) =\mu_{t,s}^{\widetilde{\dot{\xi}}}\left( 1\right) \dot{\xi}\left( t\right) \text{ with }\mu_{s,s}^{\widetilde{\dot{\xi}} }\left( 1\right) =1. \] As $g^{\xi}\left( s\right) ^{-1}g^{\xi}\left( t\right) $ satisfies this same differential equation, it follows that \[ \mu_{t,s}^{\widetilde{\dot{\xi}}}\left( k\right) =kg^{\xi}\left( s\right) ^{-1}g^{\xi}\left( t\right) =R_{g^{\xi}\left( s\right) ^{-1}g^{\xi}\left( t\right) }k\text{ }\forall~s,t\in\left[ 0,T\right] . \] In particular if $\xi\in\mathfrak{g}$ is constant, then $e^{t\tilde{\xi} }=R_{e^{t\xi}},$ i.e. \[ e^{t\tilde{\xi}}\left( k\right) =ke^{t\xi}\text{ for all }k\in G. \]
\end{remark}
\begin{proposition} \label{pro.3.8}For $\xi,\eta\in\mathfrak{g},$ \[ \partial_{\xi}e^{\eta}=e^{\eta}\int_{0}^{1}\operatorname{Ad}_{e^{-t\eta}}\xi dt=\left[ \int_{0}^{1}\operatorname{Ad}_{e^{t\eta}}\xi dt\right] e^{\eta}. \] Consequently if $C\left( t\right) \in\mathfrak{g}$ is a smooth curve then \begin{align*} \frac{d}{dt}e^{C\left( t\right) } & =\left[ \int_{0}^{1}\operatorname{Ad} _{e^{sC\left( t\right) }}\dot{C}\left( t\right) ds\right] e^{C\left( t\right) }\\ & =e^{C\left( t\right) }\left[ \int_{0}^{1}\operatorname{Ad} _{e^{-sC\left( t\right) }}\dot{C}\left( t\right) ds\right] \end{align*}
\end{proposition}
\begin{proof} \textbf{First proof. }Differentiating the identity, \[ \frac{d}{dt}e^{t\left( \eta+s\xi\right) }=\left( \eta+s\xi\right) e^{t\left( \eta+s\xi\right) }, \] in $s$ shows \begin{align*}
\frac{d}{dt}\frac{d}{ds}|_{0}e^{t\left( \eta+s\xi\right) } & =\frac{d}
{ds}|_{0}\frac{d}{dt}e^{t\left( \eta+s\xi\right) }=\frac{d}{ds}|_{0}\left[ \left( \eta+s\xi\right) e^{t\left( \eta+s\xi\right) }\right] \\
& =\xi e^{t\eta}+\eta\frac{d}{ds}|_{0}e^{t\left( \eta+s\xi\right) }\text{
with }\frac{d}{ds}|_{0}e^{0\left( \eta+s\xi\right) }=0. \end{align*} Solving this equation by Duhamel's principle gives, \[
\frac{d}{ds}|_{0}e^{\left( \eta+s\xi\right) }=\int_{0}^{1}e^{\left( 1-t\right) \eta}\xi e^{t\eta}dt, \] i.e. \[ \partial_{\xi}e^{\eta}=e^{\eta}\int_{0}^{1}\operatorname{Ad}_{e^{-t\eta}}\xi dt=\left[ \int_{0}^{1}\operatorname{Ad}_{e^{t\eta}}\xi dt\right] e^{\eta}. \]
\textbf{Second proof. } This proof relies on the fact that the statement of this proposition is in fact a special case of Theorem \ref{thm.2.22}. Indeed using this theorem along with Remark \ref{rem.3.7} shows, \begin{align*} \partial_{\eta}e^{t\xi} & =\partial_{\tilde{\eta}}e^{t\tilde{\xi}}\left( 1\right) =\left[ \int_{0}^{t}\operatorname{Ad}_{e^{\tau\tilde{\xi}}} \tilde{\eta}d\tau\right] \circ e^{t\tilde{\xi}}\left( 1\right) \\ & =\left[ \int_{0}^{t}\operatorname{Ad}_{e^{\tau\tilde{\xi}}}\tilde{\eta }d\tau\right] \left( e^{t\xi}\right) \end{align*} where \begin{align*} \left( \operatorname{Ad}_{e^{\tau\tilde{\xi}}}\tilde{\eta}\right) \left( k\right) & =\left( e_{\ast}^{\tau\tilde{\xi}}\tilde{\eta}\circ e^{-\tau\tilde{\xi}}\right) \left( k\right) =e_{\ast}^{\tau\tilde{\xi} }\left( L_{e^{-\tau\tilde{\xi}}\left( k\right) }\right) _{\ast}\eta\\ & =e_{\ast}^{\tau\tilde{\xi}}\left( L_{ke^{-\tau\xi}}\right) _{\ast} \eta=\left( R_{e^{\tau\xi}}\right) _{\ast}\left( L_{ke^{-\tau\xi}}\right) _{\ast}\eta\\ & =ke^{-\tau\xi}\eta e^{\tau\xi}. \end{align*} Since \[ \left[ \int_{0}^{t}\operatorname{Ad}_{e^{\tau\tilde{\xi}}}\tilde{\eta} d\tau\right] \left( e^{t\xi}\right) =e^{t\xi}\int_{0}^{t}e^{-\tau\xi}\eta e^{\tau\xi}d\tau=\int_{0}^{t}e^{\left( t-\tau\right) \xi}\eta e^{\tau\xi }d\tau, \] the result is again proved. \end{proof}
\begin{lemma} \label{lem.3.9}For $\eta\in\mathfrak{g}$ and $t\in\mathbb{R},$ $\operatorname{Ad}_{e^{t\eta}}=e^{t\operatorname{ad}_{\eta}}$ and \begin{equation} \int_{0}^{1}\operatorname{Ad}_{e^{t\eta}}dt=\psi\left( \operatorname{ad} _{\eta}\right) \label{e.3.9} \end{equation} where $\psi$ is as in Eq. (\ref{e.3.3}). \end{lemma}
\begin{proof} Let $\xi\in\mathfrak{g}.$ Since \begin{align*} \frac{d}{dt}\left[ \operatorname{Ad}_{e^{t\eta}}\xi\right] & =\frac{d} {dt}\left[ e^{t\eta}\xi e^{-t\eta}\right] =\eta e^{t\eta}\xi e^{-t\eta }-e^{t\eta}\xi e^{-t\eta}\eta\\ & =\operatorname{ad}_{\eta}\operatorname{Ad}_{e^{t\eta}}\xi \end{align*} and $e^{t\operatorname{ad}_{\eta}}\xi$ solves the same equation with the same initial condition of $\xi$ at $t=0,$ we conclude that $\operatorname{Ad} _{e^{t\eta}}\xi=e^{t\operatorname{ad}_{\eta}}\xi.$ As this is true for all $\xi\in\mathfrak{g,}$ it follows that $\operatorname{Ad}_{e^{t\eta} }=e^{t\operatorname{ad}_{\eta}}.$ The last equality is now proved by integrating the series expansion for $e^{t\operatorname{ad}_{\eta}};$ \begin{align*} \int_{0}^{1}\operatorname{Ad}_{e^{t\eta}}dt & =\int_{0}^{1} e^{t\operatorname{ad}_{\eta}}dt=\int_{0}^{1}\sum_{n=0}^{\infty}\frac{t^{n} }{n!}\operatorname{ad}_{\eta}^{n}dt\\ & =\sum_{n=0}^{\infty}\int_{0}^{1}\frac{t^{n}}{n!}\operatorname{ad}_{\eta }^{n}dt=\sum_{n=0}^{\infty}\frac{1}{\left( n+1\right) !}\operatorname{ad} _{\eta}^{n}=\psi\left( \operatorname{ad}_{\eta}\right) . \end{align*}
\end{proof}
\begin{corollary} \label{cor.3.10}Let $g\left( t\right) $ be a smooth curve in $G,$ \[ \dot{\xi}\left( t\right) :=L_{g\left( t\right) ^{-1}\ast}\dot{g}\left( t\right) =g\left( t\right) ^{-1}\dot{g}\left( t\right) \in\mathfrak{g} \text{ and }C\left( t\right) :=\log\left( g\left( t\right) \right) \in\mathfrak{g.} \] Then $C\left( t\right) $ is the unique solution to the ODE, \begin{equation} \psi\left( -\operatorname{ad}_{C\left( t\right) }\right) \dot{C}\left( t\right) =\int_{0}^{1}\operatorname{Ad}_{e^{-sC\left( t\right) }}\dot {C}\left( t\right) ds=\dot{\xi}\left( t\right) \text{ with }C\left( 0\right) =\log\left( g\left( 0\right) \right) \label{e.3.10} \end{equation} or equivalently (in more standard form) $C\left( t\right) $ satisfies, \begin{equation} \dot{C}\left( t\right) =\psi_{-}\left( \operatorname{ad}_{C\left( t\right) }\right) \dot{\xi}\left( t\right) =\text{\textquotedblleft} \frac{\operatorname{ad}_{C\left( t\right) }}{I-e^{-\operatorname{ad} _{C\left( t\right) }}}\dot{\xi}\left( t\right) \text{\textquotedblright \ with }C\left( 0\right) =\log\left( g\left( 0\right) \right) , \label{e.3.11} \end{equation} where $\psi_{-}\left( z\right) =1/\psi\left( -z\right) $ as in Eq. (\ref{e.3.4}). \end{corollary}
\begin{proof} Since $g\left( t\right) =e^{C\left( t\right) },$ it follows from Proposition \ref{pro.3.8} and Lemma \ref{lem.3.9} that \begin{align*} g\left( t\right) \dot{\xi}\left( t\right) & =\dot{g}\,\left( t\right) =\frac{d}{dt}e^{C\left( t\right) }=e^{C\left( t\right) }\int_{0} ^{1}\operatorname{Ad}_{e^{-sC\left( t\right) }}\dot{C}\left( t\right) ds\\ & =g\left( t\right) \int_{0}^{1}\operatorname{Ad}_{e^{-sC\left( t\right) }}\dot{C}\left( t\right) ds=g\left( t\right) \psi\left( -\operatorname{ad}_{C\left( t\right) }\right) \dot{C}\left( t\right) . \end{align*} Multiplying this identity on the left by $g\left( t\right) ^{-1}$ gives Eq. (\ref{e.3.10}) while Eq. (\ref{e.3.11}) then follows by multiplying Eq. (\ref{e.3.10}) on the left by $\psi_{-}\left( \operatorname{ad}_{C\left( t\right) }\right) .$ \end{proof}
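To make Eq. (\ref{e.3.11}) concrete, suppose (for illustration only) that $\kappa=2,$ $g\left( 0\right) =1,$ and $\xi\left( 0\right) =0.$ Then $\operatorname{ad}_{C\left( t\right) }^{2}\equiv0$ on $\mathfrak{g}$ so that $\psi_{-}\left( \operatorname{ad}_{C\left( t\right) }\right) =I+\frac{1}{2}\operatorname{ad}_{C\left( t\right) }$ and Eq. (\ref{e.3.11}) becomes $\dot{C}\left( t\right) =\dot{\xi}\left( t\right) +\frac{1}{2}\left[ C\left( t\right) ,\dot{\xi}\left( t\right) \right] $ whose solution is \[ C\left( t\right) =\xi\left( t\right) +\frac{1}{2}\int_{0}^{t}\left[ \xi\left( s\right) ,\dot{\xi}\left( s\right) \right] ds, \] as one verifies by differentiating and using that all brackets of brackets vanish when $\kappa=2.$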
\begin{definition} \label{def.3.11}Let $\Gamma:\mathfrak{g}\times\mathfrak{g\rightarrow g}$ be the function defined by \[ \Gamma\left( \xi,\eta\right) :=\log\left( e^{\xi}e^{\eta}\right) \in\mathfrak{g}\text{ for all }\xi,\eta\in\mathfrak{g}. \]
\end{definition}
The next proposition deals with Lie sub-algebras of $\mathfrak{g}$ and simply connected Lie subgroups of $G.$
\begin{proposition} \label{pro.3.12}Let $\mathfrak{g}_{0}$ be a Lie subalgebra of $\mathfrak{g}$ and $G_{0}\subset G$ be the unique connected Lie subgroup of $G$ which has $\mathfrak{g}_{0}$ as its Lie algebra. If $g\left( t\right) \in G_{0}$ is a smooth curve connecting $1$ to $g\in G_{0}$ and $\dot{\xi}\left( t\right) :=g\left( t\right) ^{-1}\dot{g}\left( t\right) \in\mathfrak{g}_{0},$ then $C\left( t\right) :=\log\left( g\left( t\right) \right) \in \mathfrak{g}_{0}.$ \end{proposition}
\begin{proof} We know that $C\left( t\right) $ may be characterized as the solution to the ODE \[ \dot{C}\left( t\right) =F_{\xi}\left( t,C\left( t\right) \right) \text{ with }C\left( 0\right) =0, \] where \[ F_{\xi}\left( t,\eta\right) :=\frac{\operatorname{ad}_{\eta}} {I-e^{-\operatorname{ad}_{\eta}}}\dot{\xi}\left( t\right) =\frac{1}{\psi\left( -\operatorname{ad}_{\eta}\right) }\dot{\xi}\left( t\right) . \] As $F_{\xi}\left( t,\cdot\right) :\mathfrak{g}_{0}\rightarrow\mathfrak{g} _{0},$ it follows that $C\left( t\right) \in\mathfrak{g}_{0}$ as required. \end{proof}
\begin{corollary} \label{cor.3.13}If we continue the assumptions and notation in Proposition \ref{pro.3.12}, then $\log\left( G_{0}\right) =\mathfrak{g}_{0}$ and
$\log|_{G_{0}}:G_{0}\rightarrow\mathfrak{g}_{0}$ is a diffeomorphism with inverse given by \[ \mathfrak{g}_{0}\ni A\rightarrow e^{A}\in G_{0}. \]
\end{corollary}
\begin{proof} These assertions follow directly using $e^{A}\in G_{0}$ for $A\in \mathfrak{g}_{0}$ along with Proposition \ref{pro.3.5} and Proposition \ref{pro.3.12}. \end{proof}
For later purposes it is useful to record an \textquotedblleft explicit\textquotedblright\ formula for $g^{\xi}\left( t\right) $ as defined in Definition \ref{def.3.6}.
\begin{proposition} \label{pro.3.14}The path, $g^{\xi}\left( t\right) \in G,$ as in Definition \ref{def.3.6} may be expressed as \begin{equation} g^{\xi}\left( t\right) =1+\sum_{k=1}^{\kappa}\int_{0\leq s_{1}\leq s_{2} \leq\dots\leq s_{k}\leq t}\dot{\xi}\left( s_{1}\right) \dots\dot{\xi}\left( s_{k}\right) ds_{1}\dots ds_{k}. \label{e.3.12} \end{equation}
\end{proposition}
\begin{proof} From Definition \ref{def.3.6} and the fundamental theorem of calculus, \[ g^{\xi}\left( t\right) =1+\int_{0}^{t}g^{\xi}\left( \tau\right) \dot{\xi }\left( \tau\right) d\tau. \] Feeding this equation back into itself then shows, \begin{align} g^{\xi}\left( t\right) & =1+\int_{0}^{t}\left[ 1+\int_{0}^{\tau}g^{\xi }\left( s\right) \dot{\xi}\left( s\right) ds\right] \dot{\xi}\left( \tau\right) d\tau\nonumber\\ & =1+\int_{0}^{t}\dot{\xi}\left( \tau\right) d\tau+\int_{0}^{t}d\tau \int_{0}^{\tau}dsg^{\xi}\left( s\right) \dot{\xi}\left( s\right) \dot{\xi }\left( \tau\right) .\nonumber \end{align} Continuing this way inductively shows for any $m\in\mathbb{N}$ that, \begin{equation} g^{\xi}\left( t\right) =1+\sum_{k=1}^{m-1}\int_{0\leq s_{1}\leq s_{2} \leq\dots\leq s_{k}\leq t}\dot{\xi}\left( s_{1}\right) \dots\dot{\xi}\left( s_{k}\right) d\mathbf{s+}R_{m}\left( t\right) \label{e.3.13} \end{equation} where $\sum_{k=1}^{m-1}\left[ \dots\right] \equiv0$ when $m=1$ and \[ R_{m}\left( t\right) :=\int_{0\leq s_{1}\leq s_{2}\leq\dots\leq s_{m}\leq t}g^{\xi}\left( s_{1}\right) \dot{\xi}\left( s_{1}\right) \dots\dot{\xi }\left( s_{m}\right) d\mathbf{s} \] where $d\mathbf{s}$ is short hand for $ds_{1}\dots ds_{m}$ in the above formula. Since \[ \dot{\xi}\left( s_{1}\right) \dots\dot{\xi}\left( s_{\kappa+1}\right) =0\in\mathcal{A}, \] it follows that $R_{\kappa+1}\left( t\right) =0$ and so Eq. (\ref{e.3.13}) with $m=\kappa+1$ gives Eq. (\ref{e.3.12}). \end{proof}
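As a consistency check on Eq. (\ref{e.3.12}) (not needed in what follows), suppose that $\dot{\xi}\left( t\right) \equiv\xi\in\mathfrak{g}$ is constant. Then each simplex integral evaluates to $t^{k}\xi^{k}/k!$ and Eq. (\ref{e.3.12}) collapses to \[ g^{\xi}\left( t\right) =\sum_{k=0}^{\kappa}\frac{t^{k}}{k!}\xi^{k}=e^{t\xi }, \] in agreement with the defining ODE of Definition \ref{def.3.6}.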
\begin{corollary} \label{cor.3.15}If $g^{\xi}\left( t\right) \in G$ is as in Definition \ref{def.3.6}, then \begin{equation} \operatorname{Ad}_{g^{\xi}\left( t\right) }=I+\sum_{k=1}^{\kappa}\int_{0\leq s_{1}\leq s_{2}\leq\dots\leq s_{k}\leq t}\operatorname{ad}_{\dot{\xi}\left( s_{1}\right) }\dots\operatorname{ad}_{\dot{\xi}\left( s_{k}\right) } ds_{1}\dots ds_{k}. \label{e.3.14} \end{equation}
\end{corollary}
\begin{proof} Since \[ \frac{d}{dt}\operatorname{Ad}_{g^{\xi}\left( t\right) }=\operatorname{Ad} _{g^{\xi}\left( t\right) }\operatorname{ad}_{\dot{\xi}\left( t\right) }\text{ with }\operatorname{Ad}_{g^{\xi}\left( 0\right) }=Id_{\mathfrak{g} }, \] the proof of Eq. (\ref{e.3.14}) is exactly the same as the proof of Eq. (\ref{e.3.12}) provided the reader changes $g^{\xi}$ to $\operatorname{Ad} _{g^{\xi}}$ and $\dot{\xi}$ to $\operatorname{ad}_{\dot{\xi}}$ everywhere. \end{proof}
\begin{notation} \label{not.3.16}For $j\in\mathbb{N},$ a bounded measurable function $a:[0,\infty)^{j}\rightarrow\mathbb{R},$ $t\in\left[ 0,T\right] ,$ and $\xi\in L^{1}\left( \left[ 0,T\right] ,\mathfrak{g}\right) ,$ let \[ \hat{a}_{t}\left( \xi\right) =\int_{\left[ 0,t\right] ^{j}}a\left( s_{1},\dots,s_{j}\right) \xi\left( s_{1}\right) \dots\xi\left( s_{j}\right) ds_{1}\dots ds_{j}\in\mathfrak{g} \] and let $\hat{a}_{t}\left( \operatorname{ad}_{\xi}\right) :\mathfrak{g} \rightarrow\mathfrak{g}$ be the linear transformation defined by \[ \hat{a}_{t}\left( \operatorname{ad}_{\xi}\right) :=\int_{\left[ 0,t\right] ^{j}}a\left( s_{1},\dots,s_{j}\right) \operatorname{ad}_{\xi\left( s_{1}\right) }\dots\operatorname{ad}_{\xi\left( s_{j}\right) }ds_{1}\dots ds_{j}. \] Note that $\hat{a}_{t}\left( \xi\right) =0$ if $j>\kappa$ and $\hat{a} _{t}\left( \operatorname{ad}_{\xi}\right) \equiv0$ if $j\geq\kappa.$ \end{notation}
The proof of the following lemma is elementary and is left to the reader.
\begin{lemma} \label{lem.3.17}If $a:[0,\infty)^{j}\rightarrow\mathbb{R}$ and $b:[0,\infty )^{k}\rightarrow\mathbb{R}$ are bounded and measurable functions, $t\in\left[ 0,T\right] ,$ and $\xi\in L^{1}\left( \left[ 0,T\right] ,\mathfrak{g} \right) ,$ then \begin{align*} \hat{a}_{t}\left( \xi\right) \hat{b}_{t}\left( \xi\right) & =\widehat{\left[ a\otimes b\right] }_{t}\left( \xi\right) \text{ and }\\ \hat{a}_{t}\left( \operatorname{ad}_{\xi}\right) \hat{b}_{t}\left( \operatorname{ad}_{\xi}\right) & =\widehat{\left[ a\otimes b\right] } _{t}\left( \operatorname{ad}_{\xi}\right) \end{align*} where $a\otimes b:[0,\infty)^{j+k}\rightarrow\mathbb{R}$ is the bounded measurable function defined by \[ a\otimes b\left( s_{1},\dots,s_{j},t_{1},\dots,t_{k}\right) =a\left( s_{1},\dots,s_{j}\right) b\left( t_{1},\dots,t_{k}\right) . \]
\end{lemma}
\begin{proposition} \label{pro.3.18}If $g\left( t\right) =g^{\xi}\left( t\right) \in G$ and $C\left( t\right) =C^{\xi}\left( t\right) =\log\left( g\left( t\right) \right) \in\mathfrak{g}$ are as in Definition \ref{def.3.6} and $f\in\mathcal{H}_{0},\ $then for each $j\in\mathbb{N\cap}\left[ 1,\kappa-1\right] ,$ there exist bounded measurable functions, $\mathbf{f}^{j}:[0,\infty)^{j}\rightarrow\mathbb{R},$ such that \begin{equation} f\left( \operatorname{ad}_{C\left( t\right) }\right) =f\left( 0\right) Id_{\mathfrak{g}}+\sum_{j=1}^{\kappa-1}\widehat{\mathbf{f}^{j}}_{t}\left( \operatorname{ad}_{\dot{\xi}}\right) . \label{e.3.15} \end{equation} Moreover, each function $\mathbf{f}^{j}$ depends linearly on $\left( f\left( 0\right) ,\dots,f^{\left( \kappa-1\right) }\left( 0\right) \right) .$\footnote{The fact that the $\mathbf{f}^{j}$ depend linearly on the first $\kappa-1$ derivatives of $f$ is easily understood from the identity, $f\left( \operatorname{ad}_{C\left( t\right) }\right) =\sum _{j=0}^{\kappa-1}\left( f^{\left( j\right) }\left( 0\right) /j!\right) \operatorname{ad}_{C\left( t\right) }^{j}.$} \end{proposition}
\begin{proof} Let $u\left( w\right) :=f\left( \log\left( 1+w\right) \right) $ and observe, by a simple exercise in differentiation, that there exist $\alpha_{n,k}\in\mathbb{Z}$ such that $u^{\left( n\right) }\left( 0\right) =\sum_{k=0}^{n}\alpha_{n,k}f^{\left( k\right) }\left( 0\right) .$ [For example, one has $u\left( 0\right) =f\left( 0\right) ,$ $u^{\prime}\left( 0\right) =f^{\prime}\left( 0\right) ,$ $u^{\prime\prime }\left( 0\right) =f^{\prime\prime}\left( 0\right) -f^{\prime}\left( 0\right) ,$ and $u^{\left( 3\right) }\left( 0\right) =f^{\left( 3\right) }\left( 0\right) -3f^{\prime\prime}\left( 0\right) +2f^{\prime }\left( 0\right) .]$
Since $g\left( t\right) =e^{C\left( t\right) },$ it follows that $\operatorname{Ad}_{g\left( t\right) }=\operatorname{Ad}_{e^{C\left( t\right) }}=e^{\operatorname{ad}_{C\left( t\right) }}$ and therefore \[ \operatorname{ad}_{C\left( t\right) }=\log\left( \operatorname{Ad} _{g\left( t\right) }\right) =\log\left( Id_{\mathfrak{g}}+\left[ \operatorname{Ad}_{g\left( t\right) }-Id_{\mathfrak{g}}\right] \right) \] and hence, \begin{align*} f\left( \operatorname{ad}_{C\left( t\right) }\right) & =f\circ \log\left( Id_{\mathfrak{g}}+\left[ \operatorname{Ad}_{g\left( t\right) }-Id_{\mathfrak{g}}\right] \right) \\ & =u\left( \operatorname{Ad}_{g\left( t\right) }-Id_{\mathfrak{g}}\right) \\ & =\sum_{j=0}^{\infty}\frac{u^{\left( j\right) }\left( 0\right) } {j!}\left( \operatorname{Ad}_{g\left( t\right) }-I\right) ^{j}\\ & =f\left( 0\right) Id_{\mathfrak{g}}+\sum_{j=1}^{\kappa-1}\frac{u^{\left( j\right) }\left( 0\right) }{j!}\left( \operatorname{Ad}_{g\left( t\right) }-Id_{\mathfrak{g}}\right) ^{j}. \end{align*} By Corollary \ref{cor.3.15}, \[ \operatorname{Ad}_{g\left( t\right) }-Id_{\mathfrak{g}}=\sum_{k=1} ^{\kappa-1}\hat{b}_{t}^{k}\left( \operatorname{ad}_{\dot{\xi}}\right) \] where \[ b^{k}\left( s_{1},\dots,s_{k}\right) :=1_{0\leq s_{1}\leq s_{2}\leq\dots\leq s_{k}} \] and so it follows that \[ f\left( \operatorname{ad}_{C\left( t\right) }\right) =f\left( 0\right) I+\sum_{j=1}^{\kappa-1}\frac{u^{\left( j\right) }\left( 0\right) }{j!} \sum_{k_{1},\dots,k_{j}=1}^{\kappa-1}\hat{b}_{t}^{k_{1}}\left( \operatorname{ad}_{\dot{\xi}}\right) \dots\hat{b}_{t}^{k_{j}}\left( \operatorname{ad}_{\dot{\xi}}\right) . \] By repeated use of Lemma \ref{lem.3.17}, the last identity may be written in the form described in Eq. (\ref{e.3.15}).
\end{proof}
\begin{corollary} \label{cor.3.19}If $g\left( t\right) =g^{\xi}\left( t\right) \in G$ and $C\left( t\right) =C^{\xi}\left( t\right) =\log\left( g\left( t\right) \right) \in\mathfrak{g}$ are as in Definition \ref{def.3.6}, then there exist bounded measurable functions, $\Delta^{j}:[0,\infty)^{j-1} \rightarrow\mathbb{R}$ for $j\in\mathbb{N\cap}\left[ 2,\kappa\right] ,$ such that \begin{equation} \psi_{-}\left( \operatorname{ad}_{C\left( t\right) }\right) =Id_{\mathfrak{g}}+\sum_{j=2}^{\kappa}\hat{\Delta}_{t}^{j}\left( \operatorname{ad}_{\dot{\xi}}\right) \label{e.3.16} \end{equation} and \begin{equation} \dot{C}^{\xi}\left( t\right) =\dot{\xi}\left( t\right) +\sum_{j=2} ^{\kappa}\hat{\Delta}_{t}^{j}\left( \operatorname{ad}_{\dot{\xi}}\right) \dot{\xi }\left( t\right) . \label{e.3.17} \end{equation}
\end{corollary}
\begin{proof} Applying Proposition \ref{pro.3.18} with $f=\psi_{-}$ and $\lambda=1$ gives Eq. (\ref{e.3.16}). Equation (\ref{e.3.17}) then follows from Eq. (\ref{e.3.16}) and Eq. (\ref{e.3.11}). \end{proof}
\begin{remark} \label{rem.3.20}It is possible, see for example \cite{Strichartz1987a}, to work out explicit formulas for the functions $\Delta^{j}$ in Corollary \ref{cor.3.19} and this would lead to a proof of Eq. (\ref{e.1.2}). For our purposes, these explicit formulas are not needed. \end{remark}
\begin{corollary} \label{cor.3.21}If $g\left( t\right) =g^{\xi}\left( t\right) \in G$ and $C\left( t\right) =C^{\xi}\left( t\right) =\log\left( g\left( t\right) \right) \in\mathfrak{g}$ are as in Definition \ref{def.3.6}, there exist bounded measurable functions, $c^{j}:[0,\infty)^{j}\rightarrow\mathbb{R}$ for $j\in\mathbb{N}\cap\left[ 2,\kappa\right] ,$ such that \[ C^{\xi}\left( t\right) =C\left( 0\right) +\xi\left( t\right) +\sum _{j=2}^{\kappa}\hat{c}_{t}^{j}\left( \dot{\xi}\right) . \]
\end{corollary}
\begin{proof} Integrating Eq. (\ref{e.3.17}) shows, \begin{equation} C^{\xi}\left( t\right) =C\left( 0\right) +\xi\left( t\right) +\sum _{j=2}^{\kappa}\int_{0}^{t}\hat{\Delta}_{\tau}^{j}\left( \operatorname{ad} _{\dot{\xi}}\right) \dot{\xi}\left( \tau\right) d\tau\label{e.3.18} \end{equation} where \begin{align*} \int_{0}^{t} & \hat{\Delta}_{\tau}^{j}\left( \operatorname{ad}_{\dot{\xi} }\right) \dot{\xi}\left( \tau\right) d\tau\\ & =\int_{0}^{t}d\tau\int_{\left[ 0,\tau\right] ^{j-1}}ds_{1}\dots ds_{j-1}\Delta^{j}\left( s_{1},\dots,s_{j-1}\right) \operatorname{ad} _{\dot{\xi}\left( s_{1}\right) }\dots\operatorname{ad}_{\dot{\xi}\left( s_{j-1}\right) }\dot{\xi}\left( \tau\right) \\ & =\int_{\left[ 0,t\right] ^{j}}\tilde{\Delta}^{j}\left( s_{1},\dots ,s_{j}\right) \operatorname{ad}_{\dot{\xi}\left( s_{1}\right) } \dots\operatorname{ad}_{\dot{\xi}\left( s_{j-1}\right) }\dot{\xi}\left( s_{j}\right) ds_{1}\dots ds_{j} \end{align*} and \[ \tilde{\Delta}^{j}\left( s_{1},\dots,s_{j}\right) :=\Delta^{j}\left( s_{1},\dots,s_{j-1}\right) 1_{\max\left\{ s_{1},\dots,s_{j-1}\right\} \leq s_{j}}. \] By expanding out all of the commutators and permuting the variables of integration in each of the resulting terms we may rewrite the previous expression in the form $\hat{c}_{t}^{j}\left( \dot{\xi}\right) $ for some bounded measurable function, $c^{j}:[0,\infty)^{j}\rightarrow\mathbb{R}.$
\textbf{Alternatively}: simply apply $\log$ to Eq. (\ref{e.3.12}) and then repeatedly use Lemma \ref{lem.3.17} to arrive at the stated assertion. \end{proof}
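\begin{example} As a consistency check on Corollary \ref{cor.3.21} (which is not needed in the sequel), suppose that $\kappa=2$ and $C\left( 0\right) =0=\xi\left( 0\right) .$ Since $\operatorname{ad}_{X}\operatorname{ad}_{Y}\equiv0$ when $\kappa=2,$ we have $\psi_{-}\left( \operatorname{ad}_{C\left( t\right) }\right) =Id_{\mathfrak{g}}+\frac{1}{2}\operatorname{ad}_{C\left( t\right) }$ and $\left[ C\left( t\right) ,\dot{\xi}\left( t\right) \right] =\left[ \xi\left( t\right) ,\dot{\xi}\left( t\right) \right] ,$ so that Eq. (\ref{e.3.17}) reduces to \[ \dot{C}^{\xi}\left( t\right) =\dot{\xi}\left( t\right) +\frac{1}{2}\left[ \xi\left( t\right) ,\dot{\xi}\left( t\right) \right] =\dot{\xi}\left( t\right) +\hat{\Delta}_{t}^{2}\left( \operatorname{ad}_{\dot{\xi}}\right) \dot{\xi}\left( t\right) \text{ with }\Delta^{2}\equiv\frac{1}{2}. \] Integrating and then expanding the commutator as in the above proof gives \[ C^{\xi}\left( t\right) =\xi\left( t\right) +\frac{1}{2}\int_{0}^{t}\left[ \xi\left( \tau\right) ,\dot{\xi}\left( \tau\right) \right] d\tau =\xi\left( t\right) +\hat{c}_{t}^{2}\left( \dot{\xi}\right) \text{ with }c^{2}\left( s_{1},s_{2}\right) =\frac{1}{2}\operatorname{sgn}\left( s_{2}-s_{1}\right) , \] which is the familiar second order Magnus expansion. \end{example}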
\subsection{Truncated tensor algebra estimates\label{sec.3.2}}
We now apply the above results with $\mathfrak{g}=\mathfrak{g}^{\left( \kappa\right) }\subset\mathcal{A}=T^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) $ as in Notation \ref{not.1.14}. In what follows we will make use of the simple estimates in the following remark without further mention.
\begin{remark} \label{rem.3.22}For any $m,n\in\mathbb{N\cap}\left[ 1,2\kappa\right] $ with $m<n,$ it is easy to show, for $\mu,\lambda\geq0,$ that \begin{align} Q_{(m,n]}\left( \lambda\right) & \asymp\sum_{k=m+1}^{n}\lambda ^{k},\nonumber\\ Q_{\left[ m,n\right] }\left( \lambda\right) & \asymp\sum_{k=m} ^{n}\lambda^{k},\text{ and}\nonumber\\ Q_{(m,n]}\left( \lambda+\mu\right) & \asymp Q_{(m,n]}\left( \lambda\right) +Q_{(m,n]}\left( \mu\right) . \label{e.3.19} \end{align} For example, the first estimate follows from the more precise estimate, \[ Q_{(m,n]}\left( \lambda\right) \leq\sum_{k=m+1}^{n}\lambda^{k}\leq\left( n-m\right) Q_{(m,n]}\left( \lambda\right) . \]
\end{remark}
Recalling from Definition \ref{def.1.16} that $N\left( A\right) :=\max_{1\leq k\leq\kappa}\left\vert A_{k}\right\vert ^{1/k}$ for $A\in\mathfrak{g}^{\left( \kappa\right) },$ we find \begin{equation} \left\vert A\right\vert \leq\sum_{k=1}^{\kappa}\left\vert A_{k}\right\vert \leq\sum_{k=1}^{\kappa}N\left( A\right) ^{k}\leq\kappa Q_{\left[ 1,\kappa\right] }\left( N\left( A\right) \right) . \label{e.3.20} \end{equation} Similarly if $f\in C\left( \left[ 0,t\right] ,\mathfrak{g}^{\left( \kappa\right) }\right) ,$ then \begin{equation} \left\vert f\right\vert _{t}^{\ast}\leq\sum_{k=1}^{\kappa}\left\vert f_{k}\right\vert _{t}^{\ast}\leq\sum_{k=1}^{\kappa}N_{t}^{\ast}\left( f\right) ^{k}\leq\kappa Q_{\left[ 1,\kappa\right] }\left( N_{t}^{\ast }\left( f\right) \right) . \label{e.3.21} \end{equation} Let us also recall that if $a=\left( a_{j}\right) _{j=1}^{N}$ is a sequence ($N=\infty$ allowed), then \[ \left\Vert a\right\Vert _{p}:=\left( \sum_{j=1}^{N}\left\vert a_{j} \right\vert ^{p}\right) ^{1/p} \] is a decreasing function of $p\in\lbrack1,\infty).$ In particular, using $\left\Vert a\right\Vert _{p}\leq\left\Vert a\right\Vert _{1}$ with $a_{j}$ replaced by $a_{j}^{1/p},$ it follows (as is easily proved directly) that \begin{equation} \left( \sum_{j=1}^{N}a_{j}\right) ^{1/p}\leq\sum_{j=1}^{N}a_{j}^{1/p}\text{ when }a_{j}\geq0\text{ and }p\geq1. \label{e.3.22} \end{equation}
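For example, taking $p=2$ and $N=2$ in Eq. (\ref{e.3.22}) recovers the elementary inequality $\sqrt{a_{1}+a_{2}}\leq\sqrt{a_{1}}+\sqrt{a_{2}},$ as may be verified directly by squaring both sides.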
\begin{lemma} \label{lem.3.23}If $\left\{ A\left( j\right) \right\} _{j=1}^{r} \subset\mathfrak{g}^{\left( \kappa\right) },$ then \begin{equation} N\left( \sum_{j=1}^{r}A\left( j\right) \right) \leq\sum_{j=1}^{r}N\left( A\left( j\right) \right) . \label{e.3.23} \end{equation} If $A,B\in\mathfrak{g}^{\left( \kappa\right) }$ and $2\leq k\leq2\kappa,$ then \begin{equation} \left\vert \left[ A\otimes B\right] _{k}\right\vert \leq N\left( A\right) N\left( B\right) \cdot\left( N\left( A\right) +N\left( B\right) \right) ^{k-2}. \label{e.3.24} \end{equation}
\end{lemma}
\begin{proof} For $1\leq k\leq\kappa,$ \[ \left\vert \left[ \sum_{j=1}^{r}A\left( j\right) \right] _{k}\right\vert ^{1/k}\leq\left( \sum_{j=1}^{r}\left\vert A\left( j\right) _{k}\right\vert \right) ^{1/k}\leq\sum_{j=1}^{r}\left\vert A\left( j\right) _{k}\right\vert ^{1/k}\leq\sum_{j=1}^{r}N\left( A\left( j\right) \right) \] wherein we have used Eq. (\ref{e.3.22}) with $p=k$ for the second inequality. Since this is true for all $1\leq k\leq\kappa,$ Eq. (\ref{e.3.23}) is proved. The proof of the second inequality follows by the simple estimates; \begin{align*} \left\vert \left[ A\otimes B\right] _{k}\right\vert & =\left\vert \sum_{m,n=1}^{\kappa}1_{m+n=k}\cdot A_{m}\otimes B_{n}\right\vert \leq \sum_{m,n=1}^{\kappa}1_{m+n=k}\cdot\left\vert A_{m}\otimes B_{n}\right\vert \\ & \leq\sum_{m,n=1}^{\kappa}1_{m+n=k}\cdot\left\vert A_{m}\right\vert \left\vert B_{n}\right\vert \leq\sum_{m,n=1}^{\kappa}1_{m+n=k}\cdot N\left( A\right) ^{m}N\left( B\right) ^{n}\\ & =N\left( A\right) N\left( B\right) \cdot\sum_{m,n=0}^{\kappa -1}1_{m+n=k-2}\cdot N\left( A\right) ^{m}N\left( B\right) ^{n}\\ & \leq N\left( A\right) N\left( B\right) \left( N\left( A\right) +N\left( B\right) \right) ^{k-2}, \end{align*} wherein we have used that all of the coefficients in the binomial formula are greater than or equal to $1$ for the last inequality. \end{proof}
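In more detail, the last inequality in the above proof is the elementary estimate, \[ \sum_{m,n=0}^{\kappa-1}1_{m+n=k-2}\cdot x^{m}y^{n}\leq\sum_{m=0}^{k-2}\binom{k-2}{m}x^{m}y^{k-2-m}=\left( x+y\right) ^{k-2}\text{ for }x,y\geq0, \] applied with $x=N\left( A\right) $ and $y=N\left( B\right) .$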
Recall from Notation \ref{not.3.16} with $\mathfrak{g}=\mathfrak{g}^{\left( \kappa\right) }$ that if $1\leq\ell\leq\kappa,$ $\Delta:\left[ 0,T\right] ^{\ell}\rightarrow\mathbb{R}$ is a bounded measurable function, and $\xi\in L^{1}\left( \left[ 0,T\right] ,\mathfrak{g}^{\left( \kappa\right) }\right) ,$ then we let \begin{equation} \hat{\Delta}_{t}\left( \xi\right) :=\int_{\left[ 0,t\right] ^{\ell}} \Delta\left( s_{1},\dots,s_{\ell}\right) \xi\left( s_{1}\right) \dots \xi\left( s_{\ell}\right) d\mathbf{s}\in\mathcal{A}\text{ }\forall~t\in\left[ 0,T\right] , \label{e.3.25} \end{equation} where the products are taken in $\mathcal{A}=T^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) $ and $d\mathbf{s}:=ds_{1}\dots ds_{\ell}.$
\begin{proposition} \label{pro.3.24}Suppose that $1\leq\ell\leq\kappa,$ $\Delta:\left[ 0,T\right] ^{\ell}\rightarrow\mathbb{R}$ is bounded and measurable, $\xi\in L^{1}\left( \left[ 0,T\right] ,\mathfrak{g}^{\left( \kappa\right) }\right) ,$ and \[ \hat{\Delta}_{t}\left( \xi\right) =\sum_{k=\ell}^{\kappa}\left[ \hat{\Delta }_{t}\left( \xi\right) \right] _{k}\in\oplus_{k=\ell}^{\kappa}\left[ \mathbb{R}^{d}\right] ^{\otimes k} \] are as above. Then \begin{equation} \left\vert \left[ \hat{\Delta}_{t}\left( \xi\right) \right] _{k} \right\vert \leq\#\left( \Lambda_{k,\ell}\right) \cdot\left\Vert \Delta\right\Vert _{\infty}\cdot N_{t}^{\ast}\left( \xi\right) ^{k} \label{e.3.26} \end{equation} and \begin{equation} N\left( \hat{\Delta}_{t}\left( \xi\right) \right) \leq C\left( \kappa\right) \max\left( \left\Vert \Delta\right\Vert _{\infty}^{1/\ell },\left\Vert \Delta\right\Vert _{\infty}^{1/\kappa}\right) \cdot N_{t}^{\ast }\left( \xi\right) \text{ for all }0\leq t\leq T \label{e.3.27} \end{equation} where $\left\Vert \Delta\right\Vert _{\infty}$ is the essential supremum of $\Delta$ on $\left[ 0,T\right] ^{\ell},$ \[ \Lambda_{k,\ell}:=\left\{ \left( j_{1},\dots,j_{\ell}\right) \in \mathbb{N}^{\ell}:\sum_{i=1}^{\ell}j_{i}=k\right\} , \] and \[ C\left( \kappa\right) :=\max_{1\leq\ell\leq\kappa}\max_{\ell\leq k\leq \kappa}\left[ \#\left( \Lambda_{k,\ell}\right) \right] ^{1/k}. \]
\end{proposition}
\begin{proof} For $k\in\left[ \ell,\kappa\right] \cap\mathbb{N},$ \begin{align*} \left\vert \left[ \hat{\Delta}_{t}\left( \xi\right) \right] _{k} \right\vert & =\left\vert \sum_{\left( j_{1},\dots j_{\ell}\right) \in\Lambda_{k,\ell}}\int_{\left[ 0,t\right] ^{\ell}}\Delta\left( s_{1},\dots,s_{\ell}\right) \xi_{j_{1}}\left( s_{1}\right) \dots \xi_{j_{\ell}}\left( s_{\ell}\right) d\mathbf{s}\right\vert \\ & \leq\left\Vert \Delta\right\Vert _{\infty}\sum_{\left( j_{1},\dots j_{\ell}\right) \in\Lambda_{k,\ell}}\int_{\left[ 0,t\right] ^{\ell} }\left\vert \xi_{j_{1}}\left( s_{1}\right) \dots\xi_{j_{\ell}}\left( s_{\ell}\right) \right\vert d\mathbf{s}\\ & =\left\Vert \Delta\right\Vert _{\infty}\sum_{\left( j_{1},\dots j_{\ell }\right) \in\Lambda_{k,\ell}}\prod_{i=1}^{\ell}\left\vert \xi_{j_{i} }\right\vert _{t}^{\ast}\leq\left\Vert \Delta\right\Vert _{\infty} \sum_{\left( j_{1},\dots j_{\ell}\right) \in\Lambda_{k,\ell}}\prod _{i=1}^{\ell}N_{t}^{\ast}\left( \xi\right) ^{j_{i}}\\ & \leq\left\Vert \Delta\right\Vert _{\infty}\cdot\#\left( \Lambda_{k,\ell }\right) \cdot N_{t}^{\ast}\left( \xi\right) ^{k} \end{align*} which proves Eq. (\ref{e.3.26}). Equation (\ref{e.3.27}) is an easy consequence of Eq. (\ref{e.3.26}) and the observation that \[ \left[ \#\left( \Lambda_{k,\ell}\right) \right] ^{1/k}\cdot\left\Vert \Delta\right\Vert _{\infty}^{1/k}\leq C\left( \kappa\right) \max\left( \left\Vert \Delta\right\Vert _{\infty}^{1/\ell},\left\Vert \Delta\right\Vert _{\infty}^{1/\kappa}\right) . \]
\end{proof}
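\begin{remark} By a standard stars and bars count, $\Lambda_{k,\ell}$ is the set of compositions of $k$ into $\ell$ positive parts and therefore $\#\left( \Lambda_{k,\ell}\right) =\binom{k-1}{\ell-1}\leq2^{k-1}.$ In particular, $C\left( \kappa\right) \leq\max_{1\leq k\leq\kappa}2^{\left( k-1\right) /k}<2,$ although this explicit bound will not be needed below. \end{remark}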
\begin{proposition} \label{pro.3.25}Suppose that $1\leq\ell\leq\kappa,$ $\Delta:\left[ 0,T\right] ^{\ell}\rightarrow\mathbb{R}$ is bounded and measurable, and $\xi\in L^{1}\left( \left[ 0,T\right] ,\mathfrak{g}^{\left( \kappa\right) }\right) .$ Then for all $k\in(\ell,\kappa]\cap\mathbb{N},$ \[ \int_{0}^{T}\left\vert \left[ \hat{\Delta}_{t}\left( \operatorname{ad}_{\xi }\right) \xi\left( t\right) \right] _{k}\right\vert dt\lesssim\left\Vert \Delta\right\Vert _{\infty}N_{T}^{\ast}\left( \xi\right) ^{k}. \]
\end{proposition}
\begin{proof} For $k\in(\ell,\kappa]\cap\mathbb{N},$ let \[ \Lambda_{k,\ell}:=\left\{ \left( j_{0},j_{1},\dots,j_{\ell}\right) \in\mathbb{N}^{\ell+1}:\sum_{i=0}^{\ell}j_{i}=k\right\} . \] We then have \begin{align*} & \left\vert \left[ \hat{\Delta}_{t}\left( \operatorname{ad}_{\xi}\right) \xi\left( t\right) \right] _{k}\right\vert \\ & =\left\vert \sum_{\left( j_{0},j_{1},\dots,j_{\ell}\right) \in\Lambda_{k,\ell} }\int_{\left[ 0,t\right] ^{\ell}}\Delta\left( s_{1},\dots,s_{\ell}\right) \operatorname{ad}_{\xi_{j_{1}}\left( s_{1}\right) }\dots\operatorname{ad} _{\xi_{j_{\ell}}\left( s_{\ell}\right) }\xi_{j_{0}}\left( t\right) d\mathbf{s}\right\vert \\ & \leq2^{\ell}\left\Vert \Delta\right\Vert _{\infty}\sum_{\left( j_{0} ,j_{1},\dots,j_{\ell}\right) \in\Lambda_{k,\ell}}\int_{\left[ 0,t\right] ^{\ell}}\left\vert \xi_{j_{1}}\left( s_{1}\right) \right\vert \dots \left\vert \xi_{j_{\ell}}\left( s_{\ell}\right) \right\vert \left\vert \xi_{j_{0}}\left( t\right) \right\vert d\mathbf{s}\\ & \leq2^{\ell}\left\Vert \Delta\right\Vert _{\infty}\sum_{\left( j_{0} ,j_{1},\dots,j_{\ell}\right) \in\Lambda_{k,\ell}}\int_{\left[ 0,T\right] ^{\ell}}\left\vert \xi_{j_{1}}\left( s_{1}\right) \right\vert \dots \left\vert \xi_{j_{\ell}}\left( s_{\ell}\right) \right\vert \left\vert \xi_{j_{0}}\left( t\right) \right\vert d\mathbf{s}, \end{align*} wherein the second line we repeatedly used $\left\vert \operatorname{ad}_{A}B\right\vert =\left\vert \left[ A,B\right] \right\vert \leq2\left\vert A\right\vert \left\vert B\right\vert .$ Integrating this estimate on $t\in\left[ 0,T\right] $ shows, \begin{align*} \int_{0}^{T} & \left\vert \left[ \hat{\Delta}_{t}\left( \operatorname{ad} _{\xi}\right) \xi\left( t\right) \right] _{k}\right\vert dt\\ & \leq2^{\ell}\left\Vert \Delta\right\Vert _{\infty}\sum_{\left( j_{0} ,j_{1},\dots,j_{\ell}\right) \in\Lambda_{k,\ell}}\int_{0}^{T}dt\int_{\left[ 0,T\right] ^{\ell}}\left\vert \xi_{j_{1}}\left( s_{1}\right) \right\vert \dots\left\vert \xi_{j_{\ell}}\left( s_{\ell}\right) \right\vert \left\vert \xi_{j_{0}}\left( t\right) \right\vert d\mathbf{s}\\ & =2^{\ell}\left\Vert \Delta\right\Vert _{\infty}\sum_{\left( j_{0} ,j_{1},\dots,j_{\ell}\right) \in\Lambda_{k,\ell}}\prod_{i=0}^{\ell}\left\vert \xi_{j_{i}}\right\vert _{T}^{\ast}\\ & \leq2^{\ell}\left\Vert \Delta\right\Vert _{\infty}\sum_{\left( j_{0} ,j_{1},\dots,j_{\ell}\right) \in\Lambda_{k,\ell}}N_{T}^{\ast}\left( \xi\right) ^{k}=2^{\ell}\#\left( \Lambda_{k,\ell}\right) \left\Vert \Delta\right\Vert _{\infty}\cdot N_{T}^{\ast}\left( \xi\right) ^{k}. \end{align*}
\end{proof}
We end this section with a few key estimates that we will need in the remainder of the paper. In each of the next three results we assume that $\xi\in C^{1}\left( \left[ 0,T\right] ,F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) $ and $C\left( t\right) =C^{\xi}\left( t\right) =\log\left( g^{\xi}\left( t\right) \right) \in F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) $ are as in Definition \ref{def.1.25}.
\begin{proposition} \label{pro.3.26}To each $f\in\mathcal{H}_{0}$ there exists $K\left( f\right) >0$ depending linearly on $\left( \left\vert f\left( 0\right) \right\vert ,\dots,\left\vert f^{\left( \kappa-1\right) }\left( 0\right) \right\vert \right) $ and independent of $\xi$ such that \begin{equation} \int_{0}^{T}\left\vert \left( f\left( \operatorname{ad}_{C\left( t\right) }\right) \dot{\xi}\left( t\right) \right) _{n}\right\vert dt\leq K\left( f\right) N_{T}^{\ast}\left( \dot{\xi}\right) ^{n}\text{ for all } n\in\left[ 1,\kappa\right] \cap\mathbb{N}. \label{e.3.28} \end{equation}
\end{proposition}
\begin{proof} Recall that Proposition \ref{pro.3.18} asserts that \[ f\left( \operatorname{ad}_{C\left( t\right) }\right) \dot{\xi}\left( t\right) =f\left( 0\right) \dot{\xi}\left( t\right) +\sum_{j=1} ^{\kappa-1}\widehat{\mathbf{f}^{j}}_{t}\left( \operatorname{ad}_{\dot{\xi} }\right) \dot{\xi}\left( t\right) \] where the functions $\mathbf{f}^{j}$ depend linearly on $\left( f\left( 0\right) ,\dots,f^{\left( \kappa-1\right) }\left( 0\right) \right) .$ So repeated application of Proposition \ref{pro.3.25} with $\xi$ replaced by $\dot{\xi}$ shows, \begin{align*} \int_{0}^{T} & \left\vert \left( f\left( \operatorname{ad}_{C\left( t\right) }\right) \dot{\xi}\left( t\right) \right) _{n}\right\vert dt\\ & \leq\left\vert f\left( 0\right) \right\vert \int_{0}^{T}\left\vert \dot{\xi}_{n}\left( t\right) \right\vert dt+\sum_{j=1}^{\kappa-1}\int _{0}^{T}\left\vert \left[ \widehat{\mathbf{f}^{j}}_{t}\left( \operatorname{ad}_{\dot{\xi}}\right) \dot{\xi}\left( t\right) \right] _{n}\right\vert dt\\ & \lesssim\left\vert f\left( 0\right) \right\vert N_{T}^{\ast}\left( \dot{\xi}\right) ^{n}+\sum_{j=1}^{\kappa-1}\left\Vert \mathbf{f} ^{j}\right\Vert _{\infty}N_{T}^{\ast}\left( \dot{\xi}\right) ^{n}\leq K\left( f\right) N_{T}^{\ast}\left( \dot{\xi}\right) ^{n} \end{align*} where $K\left( f\right) >0$ may be chosen to depend linearly on $\left( \left\vert f\left( 0\right) \right\vert ,\dots,\left\vert f^{\left( \kappa-1\right) }\left( 0\right) \right\vert \right) .$ \end{proof}
\begin{corollary} \label{cor.3.27}If $\xi\in C^{1}\left( \left[ 0,T\right] ,F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) $ and $C^{\xi}\left( t\right) =\log\left( g^{\xi}\left( t\right) \right) \in F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) $ are as above, then \begin{equation} N_{T}^{\ast}\left( \dot{C}^{\xi}\right) \lesssim N_{T}^{\ast}\left( \dot{\xi}\right) , \label{e.3.29} \end{equation} \begin{equation} \left\vert C^{\xi}\left( \cdot\right) _{n}\right\vert _{\infty,T}\lesssim N_{T}^{\ast}\left( \dot{\xi}\right) ^{n}\text{ }\forall~n\in\left[ 1,\kappa\right] \cap\mathbb{N},\text{ and} \label{e.3.30} \end{equation} \begin{equation} \left\vert C^{\xi}\left( \cdot\right) \right\vert _{\infty,T}\lesssim Q_{\left[ 1,\kappa\right] }\left( N_{T}^{\ast}\left( \dot{\xi}\right) \right) . \label{e.3.31} \end{equation}
\end{corollary}
\begin{proof} By Corollary \ref{cor.3.10}, $\dot{C}^{\xi}\left( t\right) =f\left( \operatorname{ad}_{C\left( t\right) }\right) \dot{\xi}\left( t\right) $ where $f\left( z\right) =1/\psi\left( -z\right) $ and so by Proposition \ref{pro.3.26}, \[ \left\vert \dot{C}_{n}^{\xi}\right\vert _{T}^{\ast}=\int_{0}^{T}\left\vert \left( f\left( \operatorname{ad}_{C\left( t\right) }\right) \dot{\xi }\left( t\right) \right) _{n}\right\vert dt\leq K\left( f\right) N_{T}^{\ast}\left( \dot{\xi}\right) ^{n}\text{ for }1\leq n\leq\kappa. \] This proves Eq. (\ref{e.3.29}) and also Eqs. (\ref{e.3.30}) and (\ref{e.3.31}) since, as $C^{\xi}\left( 0\right) =0,$ \[ \left\vert C^{\xi}\left( \cdot\right) _{n}\right\vert _{\infty,T} \leq\left\vert \dot{C}_{n}^{\xi}\right\vert _{T}^{\ast}\leq K\left( f\right) N_{T}^{\ast}\left( \dot{\xi}\right) ^{n} \] and hence \[ \left\vert C^{\xi}\left( \cdot\right) \right\vert _{\infty,T}\leq\sum _{n=1}^{\kappa}\left\vert C_{n}^{\xi}\left( \cdot\right) \right\vert _{\infty,T}\lesssim\sum_{n=1}^{\kappa}N_{T}^{\ast}\left( \dot{\xi}\right) ^{n}\lesssim Q_{\left[ 1,\kappa\right] }\left( N_{T}^{\ast}\left( \dot {\xi}\right) \right) . \]
\end{proof}
\begin{corollary} \label{cor.3.28}Suppose $\xi\in C^{1}\left( \left[ 0,T\right] ,F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) $ and $C\left( t\right) =C^{\xi}\left( t\right) =\log\left( g^{\xi}\left( t\right) \right) \in F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) $ are as in Definition \ref{def.1.25} and $f\in\mathcal{H}_{0}.$ Then there exists $K\left( f\right) >0,$ depending linearly on $\left( \left\vert f\left( 0\right) \right\vert ,\dots,\left\vert f^{\left( \kappa-1\right) }\left( 0\right) \right\vert \right) $ and on $\kappa,$ such that \begin{equation} \int_{0}^{T}\left\vert C^{\xi}\left( t\right) _{m}\right\vert \left\vert \left( f\left( \operatorname{ad}_{C\left( t\right) }\right) \dot{\xi }\left( t\right) \right) _{n}\right\vert dt\leq K\left( f\right) N_{T}^{\ast}\left( \dot{\xi}\right) ^{m+n}\text{ }\forall~m,n\in\left[ 1,\kappa\right] \cap\mathbb{N}. \label{e.3.32} \end{equation}
\end{corollary}
\begin{proof} Making use of the estimates in Proposition \ref{pro.3.26} and Corollary \ref{cor.3.27} we find, \begin{align*} \int_{0}^{T} & \left\vert C^{\xi}\left( t\right) _{m}\right\vert \left\vert \left( f\left( \operatorname{ad}_{C\left( t\right) }\right) \dot{\xi}\left( t\right) \right) _{n}\right\vert dt\\ & \leq\left\vert C^{\xi}\left( \cdot\right) _{m}\right\vert _{\infty ,T}\cdot\int_{0}^{T}\left\vert \left( f\left( \operatorname{ad}_{C\left( t\right) }\right) \dot{\xi}\left( t\right) \right) _{n}\right\vert dt\\ & \lesssim N_{T}^{\ast}\left( \dot{\xi}\right) ^{m}\cdot K\left( f\right) N_{T}^{\ast}\left( \dot{\xi}\right) ^{n}=K\left( f\right) N_{T}^{\ast }\left( \dot{\xi}\right) ^{m+n}. \end{align*}
\end{proof}
\section{Logarithm Approximation Problem\label{sec.4}}
Recall from Definition \ref{def.1.20} that a $d$\textbf{-dimensional dynamical system} on $M$ is a linear map, $\mathbb{R}^{d}\ni w\rightarrow V_{w}\in \Gamma\left( TM\right) .$ For $A\in F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) $ we know that $V_{A}\in\Gamma\left( TM\right) $ by Example \ref{ex.1.21}. Let us again emphasize that we assume Assumption \ref{ass.1} is in force, i.e. $V$ is $\kappa$-complete.
To help motivate the next key theorem, let $A$ and $B$ be in the full free Lie algebra, $F\left( \mathbb{R}^{d}\right) $. Working heuristically (using $\sim$ to indicate equality of formal series), we should have \begin{align*} \operatorname{Ad}_{e^{sV_{A}}}V_{B} & =e^{s\operatorname{ad}_{V_{A}}} V_{B}=e^{-sL_{V_{A}}}V_{B}\\ & \sim\sum_{k=0}^{\infty}\frac{\left( -1\right) ^{k}s^{k}}{k!}L_{V_{A}} ^{k}V_{B}\sim\sum_{k=0}^{\infty}\frac{\left( -1\right) ^{k}s^{k}} {k!}V_{\operatorname{ad}_{A}^{k}B}\\ & \sim V_{\sum_{k=0}^{\infty}\frac{\left( -1\right) ^{k}s^{k}} {k!}\operatorname{ad}_{A}^{k}B}=V_{e^{-s\operatorname{ad}_{A}}B}. \end{align*} Integrating this formal identity then suggests, \[ \int_{0}^{1}\operatorname{Ad}_{e^{sV_{A}}}V_{B}ds\sim\int_{0}^{1} e^{-sL_{V_{A}}}V_{B}ds\sim\int_{0}^{1}V_{e^{-s\operatorname{ad}_{A}}B}ds\sim V_{\int_{0}^{1}e^{-s\operatorname{ad}_{A}}Bds}. \] Although the above series need not converge, this computation is suggestive of the following key Taylor type approximation theorem for $\int_{0} ^{1}\operatorname{Ad}_{e^{sV_{A}}}V_{B}ds\in\Gamma\left( TM\right) $ when $A,B\in F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) .$
\begin{theorem} \label{thm.4.1}Let $\psi\left( z\right) $ be as in Eq. (\ref{e.3.3}) and $V:\mathbb{R}^{d}\rightarrow\Gamma\left( TM\right) $ be a dynamical system satisfying Assumption \ref{ass.1} so that in particular, $V_{A}\in \Gamma\left( TM\right) $ is complete for all $A\in\mathfrak{g} _{0}=F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) .$ Then for all $A,B\in\mathfrak{g}_{0},$ \begin{align} \int_{0}^{1} & \operatorname{Ad}_{e^{sV_{A}}}V_{B}ds\nonumber\\ & =V_{\left[ \int_{0}^{1}\operatorname{Ad}_{e^{-sA}}B~ds\right] }+\int _{0}^{1}\operatorname{Ad}_{e^{sV_{A}}}V_{\pi_{>\kappa}\left[ A,\psi\left( \left( s-1\right) \operatorname{ad}_{A}\right) B\right] _{\otimes}}\left( s-1\right) ds\label{e.4.1}\\ & =V_{\psi\left( -\operatorname{ad}_{A}\right) B}+\int_{0}^{1} \operatorname{Ad}_{e^{sV_{A}}}V_{\pi_{>\kappa}\left[ A,\psi\left( \left( s-1\right) \operatorname{ad}_{A}\right) B\right] _{\otimes}}\left( s-1\right) ds. \label{e.4.2} \end{align}
\end{theorem}
\begin{proof} The heart of the proof is to show, for $0\leq l\leq\kappa,$ that \begin{align} \int_{0}^{1}\operatorname{Ad}_{e^{sV_{A}}}V_{B}ds= & V_{\sum_{k=1} ^{l}\left( -1\right) ^{k+1}\frac{\operatorname{ad}_{A}^{k-1}}{k!}B}+\frac {1}{l!}\int_{0}^{1}\operatorname{Ad}_{e^{sV_{A}}}V_{\operatorname{ad}_{A} ^{l}B}\left( s-1\right) ^{l}ds\nonumber\\ & +\int_{0}^{1}\operatorname{Ad}_{e^{sV_{A}}}V_{\pi_{>\kappa}\left[ A,\left( \sum_{k=1}^{l}\frac{\left( s-1\right) ^{k}}{k!}\operatorname{ad} _{A}^{k-1}\right) B\right] _{\otimes}}ds, \label{e.4.3} \end{align} where $\sum_{k=1}^{l}\left[ \dots\right] =0$ and $l!=1$ when $l=0.$ The proof of these identities will be by induction on $l.$ In the proof of this identity we will use Corollary \ref{cor.2.19} which in this context implies, \[ \frac{d}{ds}\operatorname{Ad}_{e^{sV_{A}}}V_{C}=\operatorname{Ad}_{e^{sV_{A}} }\operatorname{ad}_{V_{A}}V_{C}=-\operatorname{Ad}_{e^{sV_{A}}}\left[ V_{A},V_{C}\right] =-\operatorname{Ad}_{e^{sV_{A}}}V_{\left[ A,C\right] _{\otimes}} \] for all $A,C\in F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) .$ In the proof to follow, $C=\operatorname{ad}_{A}^{l}B$ for some $l.$
When $l=0,$ there is nothing to prove. For the induction step, we integrate by parts the middle term on the right side of Eq. (\ref{e.4.3}), \begin{align*} \frac{1}{l!}\int_{0}^{1} & \operatorname{Ad}_{e^{sV_{A}}} V_{\operatorname{ad}_{A}^{l}B}\left( s-1\right) ^{l}ds\\ = & \frac{1}{\left( l+1\right) !}\int_{0}^{1}\operatorname{Ad}_{e^{sV_{A} }}V_{\operatorname{ad}_{A}^{l}B}d\left( s-1\right) ^{l+1}\\ = & \frac{1}{\left( l+1\right) !}\left. \operatorname{Ad}_{e^{sV_{A}}}V_{\operatorname{ad}_{A}^{l}B}\left( s-1\right) ^{l+1}\right\vert _{0}^{1}\\ & -\frac{1}{\left( l+1\right) !}\int_{0}^{1}\left( \frac{d}{ds} \operatorname{Ad}_{e^{sV_{A}}}V_{\operatorname{ad}_{A}^{l}B}\right) \left( s-1\right) ^{l+1}ds\\ = & \frac{\left( -1\right) ^{l}}{\left( l+1\right) !}V_{\operatorname{ad}_{A}^{l}B}\\ & +\frac{1}{\left( l+1\right) !}\int_{0}^{1}\operatorname{Ad}_{e^{sV_{A}} }\left[ V_{A},V_{\operatorname{ad}_{A}^{l}B}\right] \left( s-1\right) ^{l+1}ds\\ = & \frac{\left( -1\right) ^{l}}{\left( l+1\right) !}V_{\operatorname{ad}_{A}^{l}B}\\ & +\frac{1}{\left( l+1\right) !}\int_{0}^{1}\operatorname{Ad}_{e^{sV_{A}} }V_{\left[ A,\operatorname{ad}_{A}^{l}B\right] _{\otimes}}\left( s-1\right) ^{l+1}ds, \end{align*} wherein the boundary term was evaluated using $\left( s-1\right) ^{l+1}|_{s=1}=0$ and $\operatorname{Ad}_{e^{0\cdot V_{A}}}=Id.$ Combining this result with Eq. (\ref{e.4.3}) and the fact that \[ \left[ A,\operatorname{ad}_{A}^{l}B\right] _{\otimes}=\left[ A,\operatorname{ad}_{A}^{l}B\right] +\pi_{>\kappa}\left[ A,\operatorname{ad} _{A}^{l}B\right] _{\otimes} \] completes the inductive step.
To finish the proof observe that \[ \left( s-1\right) \psi\left( \left( s-1\right) \operatorname{ad} _{A}\right) =\sum_{k=1}^{\kappa}\frac{\left( s-1\right) ^{k}} {k!}\operatorname{ad}_{A}^{k-1} \] and taking $s=0$ in this equation also shows, \[ \sum_{k=1}^{\kappa}\frac{\left( -1\right) ^{k+1}}{k!}\operatorname{ad} _{A}^{k-1}=\psi\left( -\operatorname{ad}_{A}\right) =\int_{0}^{1} \operatorname{Ad}_{e^{-sA}}ds, \] where the last equality follows from the identity $\operatorname{Ad}_{e^{-sA}}=e^{-s\operatorname{ad}_{A}}$ along with the definition of $\psi$ in Eq. (\ref{e.3.3}). So from the last two displayed equations and Eq. (\ref{e.4.3}) with $l=\kappa$ and the fact that $\operatorname{ad}_{A}^{\kappa}B=0$ so that \[ \frac{1}{\kappa!}\int_{0}^{1}\operatorname{Ad}_{e^{sV_{A}}} V_{\operatorname{ad}_{A}^{\kappa}B}\left( s-1\right) ^{\kappa}ds=0, \] we find \begin{align*} \int_{0}^{1} & \operatorname{Ad}_{e^{sV_{A}}}V_{B}\,ds\\ & =V_{\psi\left( -\operatorname{ad}_{A}\right) B}+\int_{0}^{1} \operatorname{Ad}_{e^{sV_{A}}}V_{\pi_{>\kappa}\left[ A,\psi\left( \left( s-1\right) \operatorname{ad}_{A}\right) B\right] _{\otimes}}\left( s-1\right) ds\\ & =V_{\left[ \int_{0}^{1}\operatorname{Ad}_{e^{-sA}}B~ds\right] }+\int _{0}^{1}\operatorname{Ad}_{e^{sV_{A}}}V_{\pi_{>\kappa}\left[ A,\psi\left( \left( s-1\right) \operatorname{ad}_{A}\right) B\right] _{\otimes}}\left( s-1\right) ds. \end{align*} \end{proof}
\begin{remark} [Signs]\label{rem.4.2}The expression, $\operatorname{Ad}_{e^{sV_{A}}}V_{B},$ appears on the left side of Eq. (\ref{e.4.1}) while on the right side we have the expression, $\operatorname{Ad}_{e^{-sA}}B,$ which involves a change of $s$ to $-s.$ This change of sign is a simple consequence of the fact that vector-fields on $M$ may naturally be identified with \textbf{right invariant} vector fields on $\mathrm{\mathrm{Diff}}\left( M\right) $ while on the other hand we have chosen to view $A,B\in F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) $ as \textbf{left invariant }vector fields on $G_{0}=G_{geo}^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) .$ This left-right interchange is the reason for the sign changes in Eq. (\ref{e.4.1}). \end{remark}
\begin{notation} \label{not.4.4}To each $C\in C^{1}\left( \left[ 0,T\right] ,F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) ,$ let \begin{equation} W_{t}^{C}:=\int_{0}^{1}\operatorname{Ad}_{e^{sV_{C\left( t\right) }}} V_{\dot{C}\left( t\right) }~ds\in\Gamma\left( TM\right) . \label{e.4.4} \end{equation}
\end{notation}
\begin{notation} \label{not.4.5}For $\xi\in C^{1}\left( \left[ 0,T\right] ,F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) ,$ let $g\left( t\right) =g^{\xi}\left( t\right) \in G_{0}$ be as in Definition \ref{def.3.6}, $C^{\xi}\left( t\right) :=\log\left( g^{\xi}\left( t\right) \right) \in F^{\left( \kappa\right) }\left( \mathbb{R} ^{d}\right) ,$ and $\mu_{t,s}^{\xi}:=\mu_{t,s}^{V_{\dot{\xi}}}\in \mathrm{Diff}\left( M\right) $ denote the flow defined by \[ \dot{\mu}_{t,s}^{\xi}=V_{\dot{\xi}\left( t\right) }\circ\mu_{t,s}^{\xi }\text{ with }\mu_{s,s}^{\xi}=Id_{M}. \]
\end{notation}
Our goal is now to estimate the distance between $\mu_{t,0}^{\xi}$ and $e^{V_{\log\left( g^{\xi}\left( t\right) \right) }}=e^{V_{C^{\xi}\left( t\right) }}.$ Since (by Corollary \ref{cor.2.24} with $Z_{t}=V_{C^{\xi }\left( t\right) }\in\Gamma\left( TM\right) )$ \begin{equation} \frac{d}{dt}e^{V_{C^{\xi}\left( t\right) }}=W_{t}^{C^{\xi}}\circ e^{V_{C^{\xi}\left( t\right) }},\nonumber \end{equation} the desired distance estimates will be a consequence of applying Theorem \ref{thm.2.30} with $X_{t}=V_{\dot{\xi}\left( t\right) }$ and $Y_{t} =W_{t}^{C^{\xi}}.$ Before carrying out the details we need to develop a few auxiliary results first.
\begin{notation} \label{not.4.6}For $0\leq s\leq1,$ let $u\left( s,\cdot\right) \in\mathcal{H}_{0}$ be defined by $u\left( s,z\right) :=\psi\left( \left( s-1\right) z\right) /\psi\left( -z\right) .$ \end{notation}
\begin{lemma} \label{lem.4.7}Let $\xi$ and $C=C^{\xi}$ be as in Notation \ref{not.4.5} and $W^{C^{\xi}}$ be as in Eq. (\ref{e.4.4}). Then the difference vector field, \begin{equation} U_{t}^{\xi}:=Y_{t}-X_{t}=W_{t}^{C^{\xi}}-V_{\dot{\xi}\left( t\right) } \in\Gamma\left( TM\right) , \label{e.4.5} \end{equation} may be expressed as \begin{equation} U_{t}^{\xi}=\int_{0}^{1}\operatorname{Ad}_{e^{sV_{C\left( t\right) }}} V_{\pi_{>\kappa}\left[ C\left( t\right) ,u\left( s,\operatorname{ad} _{C\left( t\right) }\right) \dot{\xi}\left( t\right) \right] _{\otimes} }\left( s-1\right) ds. \label{e.4.6} \end{equation}
\end{lemma}
\begin{proof} By Corollary \ref{cor.3.10}, \begin{equation} \psi\left( -\operatorname{ad}_{C\left( t\right) }\right) \dot{C}\left( t\right) =\int_{0}^{1}\operatorname{Ad}_{e^{-sC\left( t\right) }}\dot {C}\left( t\right) ds=\dot{\xi}\left( t\right) \text{ with }C\left( 0\right) =0, \label{e.4.7} \end{equation} which combined with Theorem \ref{thm.4.1} with $A=C\left( t\right) $ and $B=\dot{C}\left( t\right) $ implies \begin{equation} W_{t}^{C}=V_{\dot{\xi}\left( t\right) }+\int_{0}^{1}\operatorname{Ad} _{e^{sV_{C\left( t\right) }}}V_{\pi_{>\kappa}\left[ C\left( t\right) ,\psi\left( \left( s-1\right) \operatorname{ad}_{C\left( t\right) }\right) \dot{C}\left( t\right) \right] _{\otimes}}\left( s-1\right) ds. \label{e.4.8} \end{equation} Since $\psi\left( \left( s-1\right) z\right) =u\left( s,z\right) \psi\left( -z\right) ,$ it follows (with the aid of Eq. (\ref{e.4.7})) that \[ \psi\left( \left( s-1\right) \operatorname{ad}_{C\left( t\right) }\right) \dot{C}\left( t\right) =u\left( s,\operatorname{ad}_{C\left( t\right) }\right) \psi\left( -\operatorname{ad}_{C\left( t\right) }\right) \dot{C}\left( t\right) =u\left( s,\operatorname{ad}_{C\left( t\right) }\right) \dot{\xi}\left( t\right) \] which combined with Eq. (\ref{e.4.8}) gives Eq. (\ref{e.4.6}). \end{proof}
\begin{corollary} \label{cor.4.8}If $\left\{ V_{a}:a\in\mathbb{R}^{d}\right\} $ generates a \textbf{step-}$\kappa$\textbf{ nilpotent Lie sub-algebra} of $\Gamma\left( TM\right) ,$ then \begin{equation} \mu_{t,0}^{V_{\dot{\xi}}}=e^{V_{C^{\xi}\left( t\right) }}\text{ for all } \xi\in C^{1}\left( \left[ 0,T\right] ,F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) . \label{e.4.9} \end{equation} Moreover, for any $A,B\in F^{\left( \kappa\right) }\left( \mathbb{R} ^{d}\right) ,$ we have \begin{equation} e^{V_{B}}\circ e^{V_{A}}=e^{V_{\log\left( e^{A}e^{B}\right) }}. \label{e.4.10} \end{equation}
\end{corollary}
\begin{proof} The given assumption implies $V_{\pi_{>\kappa}\left[ C\left( t\right) ,\psi\left( \left( s-1\right) \operatorname{ad}_{C\left( t\right) }\right) \dot{C}\left( t\right) \right] _{\otimes}}\equiv0$ and hence $U^{\xi}\equiv0,$ and Eq. (\ref{e.4.9}) now follows from Theorem \ref{thm.2.30}. To prove the second assertion, let $\xi:[0,\infty)\rightarrow F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) $ be defined by \begin{equation} \xi\left( t\right) :=\left\{ \begin{array} [c]{ccc} tA & \text{if} & 0\leq t\leq1\\ A+\left( t-1\right) B & \text{if} & 1\leq t<\infty \end{array} .\right. \label{e.4.11} \end{equation} With this choice of $\xi$ we have: $\dot{\xi}\left( t\right) =1_{t\leq 1}A+1_{t>1}B$ (for $t\neq1),$ \[ g^{\xi}\left( t\right) =\left\{ \begin{array} [c]{ccc} e^{tA} & \text{if} & 0\leq t\leq1\\ e^{A}e^{\left( t-1\right) B} & \text{if} & 1\leq t<\infty, \end{array} \right. \] \[ \mu_{t,0}^{\xi}=\left\{ \begin{array} [c]{ccc} e^{tV_{A}} & \text{if} & 0\leq t\leq1\\ e^{\left( t-1\right) V_{B}}\circ e^{V_{A}} & \text{if} & 1\leq t<\infty, \end{array} \right. \] all of which is valid whether $V$ is step-$\kappa$ nilpotent or not. If $V$ is step-$\kappa$ nilpotent we \textquotedblleft apply\textquotedblright\ Eq. (\ref{e.4.9}) at $t=2,$ to find, \[ e^{V_{B}}\circ e^{V_{A}}=\mu_{2,0}^{\xi}=e^{V_{C^{\xi}\left( 2\right) } }=e^{V_{\log\left( e^{A}e^{B}\right) }}. \]
The slight flaw in this argument is that $\xi\left( \cdot\right) $ is not continuously differentiable at $t=1.$ To correct this flaw, choose $\varphi\in C_{c}^{\infty}\left( \mathbb{R},[0,\infty)\right) $ which is supported in $\left( 0,1\right) $ and satisfies $\int_{0}^{1}\varphi\left( t\right) dt=1.$ We then run the above argument with $\xi\in C^{\infty}\left( [0,\infty),F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) $ defined so that \begin{equation} \dot{\xi}\left( t\right) =\varphi\left( t\right) A+\varphi\left( t-1\right) B\text{ with }\xi\left( 0\right) =0. \label{e.4.12} \end{equation} In more detail, if we let \begin{equation} \bar{\varphi}\left( t\right) :=\int_{-\infty}^{t}\varphi\left( \tau\right) d\tau, \label{e.4.13} \end{equation} then \begin{equation} \xi\left( t\right) =\bar{\varphi}\left( t\right) A+\bar{\varphi}\left( t-1\right) B, \label{e.4.14} \end{equation} \begin{align} g^{\xi}\left( t\right) & =e^{\bar{\varphi}\left( t\right) \cdot A}e^{\bar{\varphi}\left( t-1\right) B},\text{ and }\label{e.4.15}\\ \mu_{t,0}^{\xi} & =e^{\bar{\varphi}\left( t-1\right) V_{B}}\circ e^{\bar{\varphi}\left( t\right) V_{A}} \label{e.4.16} \end{align} and in particular at $t=2$ we again have, \begin{equation} e^{V_{B}}\circ e^{V_{A}}=\mu_{2,0}^{\xi}\text{ and }\,C^{\xi}\left( 2\right) =\log\left( g^{\xi}\left( 2\right) \right) =\log\left( e^{A}e^{B}\right) . \label{e.4.17} \end{equation} Thus when $V$ is step-$\kappa$ nilpotent we are now justified in applying Eq. (\ref{e.4.9}) at $t=2$ to arrive at Eq. (\ref{e.4.10}). \end{proof}
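\begin{example} When $\kappa=2,$ $\log\left( e^{A}e^{B}\right) =A+B+\frac{1}{2}\left[ A,B\right] $ in $F^{\left( 2\right) }\left( \mathbb{R}^{d}\right) $ and so Eq. (\ref{e.4.10}) specializes to the familiar step-$2$ Baker-Campbell-Hausdorff identity, \[ e^{V_{B}}\circ e^{V_{A}}=e^{V_{A+B+\frac{1}{2}\left[ A,B\right] }}, \] valid whenever $\left\{ V_{a}:a\in\mathbb{R}^{d}\right\} $ generates a step-$2$ nilpotent Lie sub-algebra of $\Gamma\left( TM\right) .$ \end{example}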
\begin{notation} [Commutator bounds]\label{not.4.9}If $V:\mathbb{R}^{d}\rightarrow\Gamma\left( TM\right) $ is a dynamical system and $m,n\in\mathbb{N}$ with $\kappa <m+n\leq2\kappa,$ let \[ \mathcal{S}_{m,n}:=\left\{ \left( A,B\right) \in F_{m}^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \times F_{n}^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) :\text{ }\left\vert A\right\vert =1=\left\vert B\right\vert \right\} , \] \begin{align*} \mathcal{C}_{m,n}^{0}\left( V^{\left( \kappa\right) }\right) & :=\sup\left\{ \left\vert \left[ V_{A},V_{B}\right] \right\vert _{M}:\left( A,B\right) \in\mathcal{S}_{m,n}\right\} ,\\ \mathcal{C}_{m,n}^{1}\left( V^{\left( \kappa\right) }\right) & :=\sup\left\{ \left\vert \nabla\left[ V_{A},V_{B}\right] \right\vert _{M}:\left( A,B\right) \in\mathcal{S}_{m,n}\right\} , \end{align*} and \[ \mathcal{C}^{j}\left( V^{\left( \kappa\right) }\right) :=\sum _{m,n=1}^{\kappa}1_{m+n>\kappa}\mathcal{C}_{m,n}^{j}\left( V^{\left( \kappa\right) }\right) \text{ for }j=0,1. \]
\end{notation}
Since \begin{align*} \left[ V_{A},V_{B}\right] & =\nabla_{V_{A}}V_{B}-\nabla_{V_{B}}V_{A}\text{ and}\\ \nabla_{v}\left[ V_{A},V_{B}\right] & =\nabla_{v\otimes V_{A}}^{2} V_{B}+\nabla_{\nabla_{v}V_{A}}V_{B}-\left( A\longleftrightarrow B\right) \end{align*} it follows that \begin{align*} \mathcal{C}_{m,n}^{0}\left( V^{\left( \kappa\right) }\right) & \leq2\left\vert V^{\left( \kappa\right) }\right\vert _{M}\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\text{ and }\\ \mathcal{C}_{m,n}^{1}\left( V^{\left( \kappa\right) }\right) & \leq2\left( \left\vert \nabla^{2}V^{\left( \kappa\right) }\right\vert _{M}\cdot\left\vert V^{\left( \kappa\right) }\right\vert _{M}+\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}^{2}\right) \end{align*} and therefore \begin{align} \mathcal{C}^{0}\left( V^{\left( \kappa\right) }\right) & \leq \kappa\left( \kappa+1\right) \left\vert V^{\left( \kappa\right) }\right\vert _{M}\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\text{ and }\label{e.4.18}\\ \mathcal{C}^{1}\left( V^{\left( \kappa\right) }\right) & \leq \kappa\left( \kappa+1\right) \left( \left\vert \nabla^{2}V^{\left( \kappa\right) }\right\vert _{M}\cdot\left\vert V^{\left( \kappa\right) }\right\vert _{M}+\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}^{2}\right) . \label{e.4.19} \end{align} The previous estimates are in general not sharp. For example if $V$ is $\kappa$-nilpotent, then $\mathcal{C}^{0}\left( V^{\left( \kappa\right) }\right) \equiv0$ while $2\left\vert V^{\left( \kappa\right) }\right\vert _{M}\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}$ will typically be positive.
\begin{lemma} \label{lem.4.10}If $\xi\in C^{1}\left( \left[ 0,T\right] ,F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) $ and $C^{\xi}\left( t\right) =\log\left( g^{\xi}\left( t\right) \right) \in F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) $ are as in Definition \ref{def.1.25} or Notation \ref{not.4.5} and $u\left( s,z\right) $ is as in Notation \ref{not.4.6}, then \begin{equation} \int_{0}^{1}ds\left( 1-s\right) \int_{0}^{T}dt\left\vert V_{\pi_{>\kappa }\left[ C\left( t\right) ,u\left( s,\operatorname{ad}_{C\left( t\right) }\right) \dot{\xi}\left( t\right) \right] _{\otimes}}\right\vert _{M}\lesssim\mathcal{C}^{0}\left( V^{\left( \kappa\right) }\right) Q_{(\kappa,2\kappa]}\left( N_{T}^{\ast}\left( \dot{\xi}\right) \right) \label{e.4.20} \end{equation} and \begin{equation} \int_{0}^{1}ds\left( 1-s\right) \int_{0}^{T}dt\left\vert \nabla V_{\pi_{>\kappa}\left[ C\left( t\right) ,u\left( s,\operatorname{ad} _{C\left( t\right) }\right) \dot{\xi}\left( t\right) \right] _{\otimes} }\right\vert _{M}\lesssim\mathcal{C}^{1}\left( V^{\left( \kappa\right) }\right) Q_{(\kappa,2\kappa]}\left( N_{T}^{\ast}\left( \dot{\xi}\right) \right) . \label{e.4.21} \end{equation}
\end{lemma}
\begin{proof} Applying the triangle inequality to the identity, \begin{align} V_{\pi_{>\kappa}\left[ C\left( t\right) ,u\left( s,\operatorname{ad} _{C\left( t\right) }\right) \dot{\xi}\left( t\right) \right] _{\otimes }} & =\sum_{m,n=1}^{\kappa}1_{m+n>\kappa}V_{\left[ C\left( t\right) _{m},\left( u\left( s,\operatorname{ad}_{C\left( t\right) }\right) \dot{\xi}\left( t\right) \right) _{n}\right] _{\otimes}}\nonumber\\ & =\sum_{m,n=1}^{\kappa}1_{m+n>\kappa}\left[ V_{C\left( t\right) _{m} },V_{\left( u\left( s,\operatorname{ad}_{C\left( t\right) }\right) \dot{\xi}\left( t\right) \right) _{n}}\right] , \label{e.4.22} \end{align} while using Corollaries \ref{cor.3.27} and \ref{cor.3.28} and the definition of $\mathcal{C}^{0}\left( V^{\left( \kappa\right) }\right) $ shows, \begin{align} \int_{0}^{T} & \left\vert V_{\pi_{>\kappa}\left[ C\left( t\right) ,u\left( s,\operatorname{ad}_{C\left( t\right) }\right) \dot{\xi}\left( t\right) \right] _{\otimes}}\right\vert _{M}dt\nonumber\\ & \leq\sum_{m,n=1}^{\kappa}1_{m+n>\kappa}\int_{0}^{T}\left\vert \left[ V_{C\left( t\right) _{m}},V_{\left( u\left( s,\operatorname{ad}_{C\left( t\right) }\right) \dot{\xi}\left( t\right) \right) _{n}}\right] \right\vert _{M}dt\nonumber\\ & \leq\sum_{m,n=1}^{\kappa}1_{m+n>\kappa}\mathcal{C}_{m,n}^{0}\left( V^{\left( \kappa\right) }\right) \int_{0}^{T}\left\vert C_{m}\right\vert _{\infty,T}\cdot\left\vert \left( u\left( s,\operatorname{ad}_{C\left( t\right) }\right) \dot{\xi}\left( t\right) \right) _{n}\right\vert dt\nonumber\\ & \lesssim K\left( u\left( s,\cdot\right) \right) \sum_{m,n=1}^{\kappa }1_{m+n>\kappa}\mathcal{C}_{m,n}^{0}\left( V^{\left( \kappa\right) }\right) N_{T}^{\ast}\left( \dot{\xi}\right) ^{m+n}\nonumber\\ & \quad\leq K\left( u\left( s,\cdot\right) \right) \sum_{m,n=1}^{\kappa }1_{m+n>\kappa}\mathcal{C}_{m,n}^{0}\left( V^{\left( \kappa\right) }\right) Q_{(\kappa,2\kappa]}\left( N_{T}^{\ast}\left( \dot{\xi}\right) \right) \nonumber\\ & \quad=K\left( u\left( s,\cdot\right) \right) \mathcal{C}^{0}\left( V^{\left( \kappa\right) }\right) Q_{(\kappa,2\kappa]}\left( N_{T}^{\ast }\left( \dot{\xi}\right) \right) . \label{e.4.23} \end{align}
A simple differentiation exercise shows $p_{n}\left( s\right) :=\left( \frac{d}{dz}\right) ^{n}u\left( s,z\right) |_{z=0}$ is a degree $n$ polynomial function of $s$ with $p_{0}\left( s\right) =1.$ As $K\left( u\left( s,\cdot\right) \right) $ depends linearly on $\left\{ \left( \frac{d}{dz}\right) ^{j}u\left( s,z\right) |_{z=0}\right\} _{j=0} ^{\kappa-1},$ it follows that $K\left( u\left( s,\cdot\right) \right) $ is bounded by a polynomial function of $s$ and in particular, \[ \int_{0}^{1}K\left( u\left( s,\cdot\right) \right) \left( 1-s\right) ds<\infty. \] Thus multiplying Eq. (\ref{e.4.23}) by $\left( 1-s\right) $ and then integrating on $s\in\left[ 0,1\right] $ completes the proof of Eq. (\ref{e.4.20}). The proof of Eq. (\ref{e.4.21}) is very similar. Simply apply $\nabla$ to both sides of Eq. (\ref{e.4.22}) and then continue the estimates as above with $\mathcal{C}_{m,n}^{0}\left( V^{\left( \kappa\right) }\right) $ and $\mathcal{C}^{0}\left( V^{\left( \kappa\right) }\right) $ replaced by $\mathcal{C}_{m,n}^{1}\left( V^{\left( \kappa\right) }\right) $ and $\mathcal{C}^{1}\left( V^{\left( \kappa\right) }\right) $ respectively. \end{proof}
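For example, using $\psi\left( z\right) =1+\frac{1}{2}z+O\left( z^{2}\right) $ (valid for any $\kappa\geq2$) together with Notation \ref{not.4.6}, the first of the polynomials appearing in the above proof is \[ p_{1}\left( s\right) =\frac{d}{dz}u\left( s,z\right) |_{z=0}=\frac{1}{2}\left( s-1\right) +\frac{1}{2}=\frac{s}{2}, \] which is indeed a degree $1$ polynomial in $s.$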
\begin{theorem} \label{thm.4.11}If $\xi\in C^{1}\left( \left[ 0,T\right] ,F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) $ and $C^{\xi}\left( t\right) =\log\left( g^{\xi}\left( t\right) \right) \in F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) $ are as in Definition \ref{def.1.25} or Notation \ref{not.4.5} and $U_{t}^{\xi}\in\Gamma\left( TM\right) $ is as in Eq. (\ref{e.4.6}) of Lemma \ref{lem.4.7}, then \begin{equation} \left\vert U^{\xi}\right\vert _{T}^{\ast}\lesssim\mathcal{C}^{0}\left( V^{\left( \kappa\right) }\right) e^{\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert C^{\xi}\right\vert _{\infty,T} }Q_{(\kappa,\kappa+1]}\left( N_{T}^{\ast}\left( \dot{\xi}\right) \right) \label{e.4.24} \end{equation} which combined with Eq. (\ref{e.3.31}) shows there exists $C\left( \kappa\right) <\infty$ such that \begin{equation} \left\vert U^{\xi}\right\vert _{T}^{\ast}\lesssim\mathcal{C}^{0}\left( V^{\left( \kappa\right) }\right) e^{C\left( \kappa\right) \left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}Q_{\left[ 1,\kappa\right] }\left( N_{T}^{\ast}\left( \dot{\xi}\right) \right) }Q_{(\kappa,\kappa +1]}\left( N_{T}^{\ast}\left( \dot{\xi}\right) \right) . \label{e.4.25} \end{equation}
\end{theorem}
\begin{proof} By Corollary \ref{cor.2.28}, if $Y\in\Gamma\left( TM\right) ,$ then \begin{align*} \left\vert \operatorname{Ad}_{e^{sV_{C\left( \tau\right) }}}Y\right\vert _{M} & =\left\vert e_{\ast}^{sV_{C\left( \tau\right) }}Y\circ e^{-sV_{C\left( \tau\right) }}\right\vert _{M}=\left\vert e_{\ast }^{sV_{C\left( \tau\right) }}Y\right\vert _{M}\\ & \leq e^{s\left\vert \nabla V_{C\left( \tau\right) }\right\vert _{M} }\left\vert Y\right\vert _{M}\leq e^{s\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert C\left( \tau\right) \right\vert }\left\vert Y\right\vert _{M} \end{align*} and so (see Eq. (\ref{e.4.6})), \begin{align*} \left\vert U_{t}^{\xi}\right\vert _{M} & \leq\int_{0}^{1}\left\vert \operatorname{Ad}_{e^{sV_{C\left( t\right) }}}V_{\pi_{>\kappa}\left[ C\left( t\right) ,u\left( s,\operatorname{ad}_{C\left( t\right) }\right) \dot{\xi}\left( t\right) \right] _{\otimes}}\right\vert _{M}\left( 1-s\right) ds\\ & \leq\int_{0}^{1}e^{s\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert C\left( t\right) \right\vert }\left\vert V_{\pi_{>\kappa}\left[ C\left( t\right) ,u\left( s,\operatorname{ad} _{C\left( t\right) }\right) \dot{\xi}\left( t\right) \right] _{\otimes} }\right\vert _{M}\left( 1-s\right) ds \end{align*} and so \begin{equation} \left\vert U^{\xi}\right\vert _{T}^{\ast}\leq e^{\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert C\right\vert _{\infty,T}}\int _{0}^{1}ds\left( 1-s\right) \int_{0}^{T}dt\left\vert V_{\pi_{>\kappa}\left[ C\left( t\right) ,u\left( s,\operatorname{ad}_{C\left( t\right) }\right) \dot{\xi}\left( t\right) \right] _{\otimes}}\right\vert _{M} \label{e.4.26} \end{equation} which combined with Lemma \ref{lem.4.10} proves Eq. (\ref{e.4.24}). \end{proof}
\begin{theorem} [Approximate log-estimate]\label{thm.4.12}If $\xi\in C^{1}\left( \left[ 0,T\right] ,F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) ,$ then\footnote{We will see in Theorem \ref{thm.8.4} below that a similar estimate holds for the distance between the differentials of $\mu_{T,0}^{V_{\dot{\xi}}}$ and $e^{V_{\log\left( g^{\xi}\left( T\right) \right) }}.$} \begin{equation} d_{M}\left( \mu_{T,0}^{V_{\dot{\xi}}},e^{V_{\log\left( g^{\xi}\left( T\right) \right) }}\right) \lesssim\mathcal{C}^{0}\left( V^{\left( \kappa\right) }\right) e^{C\left( \kappa\right) \left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}Q_{\left[ 1,\kappa\right] }\left( N_{T}^{\ast}\left( \dot{\xi}\right) \right) }Q_{(\kappa,\kappa +1]}\left( N_{T}^{\ast}\left( \dot{\xi}\right) \right) . \label{e.4.27} \end{equation}
\end{theorem}
\begin{proof} By Theorem \ref{thm.2.30} with $X_{t}=V_{\dot{\xi}\left( t\right) }$ and $Y_{t}=W_{t}^{C},$ we know that \begin{equation} d_{M}\left( \mu_{T,0}^{V_{\dot{\xi}}},e^{V_{C\left( T\right) }}\right) \leq e^{\left\vert \nabla V_{\dot{\xi}}\right\vert _{T}^{\ast}}\cdot\left\vert U^{\xi}\right\vert _{T}^{\ast}\leq e^{\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert \dot{\xi}\right\vert _{T}^{\ast} }\cdot\left\vert U^{\xi}\right\vert _{T}^{\ast}. \label{e.4.28} \end{equation} Combining this estimate with the estimate for $\left\vert U^{\xi}\right\vert _{T}^{\ast}$ in Theorem \ref{thm.4.11} and the estimate for $\left\vert \dot{\xi}\right\vert _{T}^{\ast}$ in Eq. (\ref{e.3.21}) gives Eq. (\ref{e.4.27}). \end{proof}
For the rest of this section we assume that $\xi\in C^{\infty}\left( [0,\infty),F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) $ is defined as in Eq. (\ref{e.4.12}) of the proof of Corollary \ref{cor.4.8}, i.e. \begin{equation} \xi\left( t\right) =\bar{\varphi}\left( t\right) A+\bar{\varphi}\left( t-1\right) B\in F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) , \label{e.4.29} \end{equation} where \[ \bar{\varphi}\left( t\right) =\int_{-\infty}^{t}\varphi\left( \tau\right) d\tau \] and $\varphi\in C_{c}^{\infty}\left( \mathbb{R},[0,\infty)\right) $ with $\bar{\varphi}\left( 1\right) =\bar{\varphi}\left( \infty\right) =1.$
\begin{corollary} \label{cor.4.13}If $A,B\in F^{\left( \kappa\right) }\left( \mathbb{R} ^{d}\right) ,$ then \begin{align} d_{M} & \left( e^{V_{B}}\circ e^{V_{A}},e^{V_{\log\left( e^{A} e^{B}\right) }}\right) \nonumber\\ & \lesssim\mathcal{C}^{0}\left( V^{\left( \kappa\right) }\right) e^{C\left( \kappa\right) \left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}Q_{\left[ 1,\kappa\right] }\left( N\left( A\right) +N\left( B\right) \right) }Q_{(\kappa,\kappa+1]}\left( N\left( A\right) +N\left( B\right) \right) . \label{e.4.30} \end{align}
\end{corollary}
\begin{proof} From the definition of $\xi$ in Eq. (\ref{e.4.29}), we find \begin{equation} \left\vert \dot{\xi}_{k}\right\vert _{2}^{\ast}=\left\vert A_{k}\right\vert +\left\vert B_{k}\right\vert \text{ for }1\leq k\leq\kappa\label{e.4.31} \end{equation} and hence with the aid of Eq. (\ref{e.3.22}), \begin{equation} N_{2}^{\ast}\left( \dot{\xi}\right) \leq N\left( A\right) +N\left( B\right) . \label{e.4.32} \end{equation} Moreover, by the identities in Eq. (\ref{e.4.17}) we know that \begin{equation} d_{M}\left( e^{V_{B}}\circ e^{V_{A}},e^{V_{\log\left( e^{A}e^{B}\right) } }\right) =d_{M}\left( \mu_{2,0}^{V_{\dot{\xi}}},e^{V_{\log\left( g^{\xi }\left( 2\right) \right) }}\right) . \label{e.4.33} \end{equation} So an application of Theorem \ref{thm.4.12} for this $\xi$ and taking $T=2$ gives Eq. (\ref{e.4.30}). \end{proof}
The estimate in Eq. (\ref{e.4.30}) is not as sharp as we would like. For example the right side of Eq. (\ref{e.4.30}) is only $0$ when $A=0=B$ while the left side is $0$ when either $A=0$ or $B=0.$ To improve upon the estimate in Eq. (\ref{e.4.30}) (see Corollary \ref{cor.4.16}) we need to examine the form of the difference vector field, $U_{t}^{\xi},$ for $\xi$ in Eq. (\ref{e.4.29}). We begin with a couple of lemmas.
\begin{lemma} \label{lem.4.14}If $f\in\mathcal{H}_{0}$ satisfies $f\left( 0\right) =0,$ then $\left[ f\left( \operatorname{ad}_{C\left( t\right) }\right) B\right] _{1}=0$ and for $2\leq k\leq\kappa,$ \[ \max_{1\leq t\leq2}\left\vert \left[ f\left( \operatorname{ad}_{C\left( t\right) }\right) B\right] _{k}\right\vert \leq K\left( f\right) N\left( A\right) N\left( B\right) \left( N\left( A\right) +N\left( B\right) \right) ^{k-2} \] where $K\left( f\right) <\infty$ is a constant which depends linearly on $\left\{ \left\vert f^{\left( j\right) }\left( 0\right) \right\vert \right\} _{j=1}^{\kappa-1}.$ \end{lemma}
\begin{proof} By Proposition \ref{pro.3.18}, there exist bounded measurable functions, $\mathbf{f}^{j}:[0,\infty)^{j}\rightarrow\mathbb{R}$ depending linearly on $\left( f\left( 0\right) ,\dots,f^{\left( \kappa-1\right) }\left( 0\right) \right) $ such that \[ f\left( \operatorname{ad}_{C\left( t\right) }\right) =\sum_{j=1} ^{\kappa-1}\widehat{\mathbf{f}^{j}}_{t}\left( \operatorname{ad}_{\dot{\xi} }\right) . \] As $\left[ \widehat{\mathbf{f}^{j}}_{t}\left( \operatorname{ad}_{\dot{\xi} }\right) B\right] _{k}=0$ if $j\geq k,$ to finish the proof it suffices to show for each $1\leq j<k$ that \begin{equation} \max_{1\leq t\leq2}\left\vert \left[ \widehat{\mathbf{f}^{j}}_{t}\left( \operatorname{ad}_{\dot{\xi}}\right) B\right] _{k}\right\vert \lesssim \left\Vert \mathbf{f}^{j}\right\Vert _{\infty}N\left( A\right) N\left( B\right) \left( N\left( A\right) +N\left( B\right) \right) ^{k-2}. \label{e.4.34} \end{equation} Let us now fix $1\leq j<k.$
For $1\leq t\leq2,$ \begin{align*} \widehat{\mathbf{f}^{j}}_{t}\left( \operatorname{ad}_{\dot{\xi}}\right) B & =\int_{\left[ 0,t\right] ^{j}}\mathbf{f}^{j}\left( t_{1},\dots ,t_{j}\right) \operatorname{ad}_{\dot{\xi}\left( t_{1}\right) } \dots\operatorname{ad}_{\dot{\xi}\left( t_{j-1}\right) }\operatorname{ad} _{\dot{\xi}\left( t_{j}\right) }Bdt_{1}\dots dt_{j}\\ & =\int_{\left[ 0,t\right] ^{j}}\mathbf{f}^{j}\left( t_{1},\dots ,t_{j}\right) \varphi\left( t_{j}\right) \operatorname{ad}_{\dot{\xi }\left( t_{1}\right) }\dots\operatorname{ad}_{\dot{\xi}\left( t_{j-1}\right) }\left[ A,B\right] dt_{1}\dots dt_{j-1}dt_{j}, \end{align*} wherein we have used $\operatorname{ad}_{\dot{\xi}\left( t_{j}\right) }B=\varphi\left( t_{j}\right) \left[ A,B\right] $ for all $t_{j}\geq0.$ Since $\int\varphi\left( t\right) dt=1,$ it is simple to verify that \begin{equation} \left\vert \left( \widehat{\mathbf{f}^{j}}_{t}\left( \operatorname{ad} _{\dot{\xi}}\right) B\right) _{k}\right\vert \leq\left\Vert \mathbf{f} ^{j}\right\Vert _{\infty}\int_{\left[ 0,t\right] ^{j-1}}\left\vert \left( \operatorname{ad}_{\dot{\xi}\left( t_{1}\right) }\dots\operatorname{ad} _{\dot{\xi}\left( t_{j-1}\right) }\left[ A,B\right] \right) _{k}\right\vert d\mathbf{t} \label{e.4.35} \end{equation} where $d\mathbf{t}:=dt_{1}\dots dt_{j-1}.$ We now estimate the integral in the usual way, namely; \begin{align} \int_{\left[ 0,t\right] ^{j-1}} & \left\vert \left( \operatorname{ad} _{\dot{\xi}\left( t_{1}\right) }\dots\operatorname{ad}_{\dot{\xi}\left( t_{j-1}\right) }\left[ A,B\right] \right) _{k}\right\vert d\mathbf{t} \nonumber\\ & \leq\sum\int_{\left[ 0,t\right] ^{j-1}}\left\vert \operatorname{ad} _{\dot{\xi}_{k_{1}}\left( t_{1}\right) }\dots\operatorname{ad}_{\dot{\xi }_{k_{j-1}}\left( t_{j-1}\right) }\left[ A_{m},B_{n}\right] \right\vert d\mathbf{t} \label{e.4.36} \end{align} where the sum is over $\left( m,n,k_{1},\dots,k_{j-1}\right) \in \mathbb{N}^{j+1}$ such that $\sum_{i=1}^{j-1}k_{i}+m+n=k.$ Using $\left\vert \left[ A,B\right] \right\vert \leq2\left\vert A\right\vert \left\vert B\right\vert $ for all $A,B\in F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) ,$ each term on the right side of Eq. (\ref{e.4.36}) may be estimated by \begin{align} 2^{j} & \int_{\left[ 0,t\right] ^{j-1}}\left\vert \dot{\xi}_{k_{1}}\left( t_{1}\right) \right\vert \dots\left\vert \dot{\xi}_{k_{j-1}}\left( t_{j-1}\right) \right\vert \left\vert A_{m}\right\vert \left\vert B_{n}\right\vert d\mathbf{t}\nonumber\\ & \leq2^{j}\prod_{i=1}^{j-1}\left\vert \dot{\xi}_{k_{i}}\right\vert _{2}^{\ast}\left\vert A_{m}\right\vert \left\vert B_{n}\right\vert \leq 2^{j}\prod_{i=1}^{j-1}N_{2}^{\ast}\left( \dot{\xi}\right) ^{k_{i}}N\left( A\right) ^{m}N\left( B\right) ^{n}\nonumber\\ & \leq2^{j}\left( N\left( A\right) +N\left( B\right) \right) ^{k-m-n}N\left( A\right) ^{m}N\left( B\right) ^{n}\nonumber\\ & \leq2^{j}N\left( A\right) N\left( B\right) \left( N\left( A\right) +N\left( B\right) \right) ^{k-2}, \label{e.4.37} \end{align} wherein the third inequality used Eq. (\ref{e.4.32}). Combining the estimates in Eqs. (\ref{e.4.35}) -- (\ref{e.4.37}) completes the proof of Eq. (\ref{e.4.34}) and hence the proof of the lemma. \end{proof}
\begin{proposition} \label{pro.4.15}If $A,B\in F^{\left( \kappa\right) }\left( \mathbb{R} ^{d}\right) $ and $\xi\in C^{\infty}\left( [0,\infty),F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) $ is as in Eq. (\ref{e.4.29}), then \begin{align} & \left[ C^{\xi}\left( t\right) ,u\left( s,\operatorname{ad}_{C^{\xi }\left( t\right) }\right) \dot{\xi}\left( t\right) \right] _{\otimes }\nonumber\\ & =\varphi\left( t-1\right) \left( \left[ A,B\right] _{\otimes}+\left[ \bar{C}^{\xi}\left( t\right) ,B\right] _{\otimes}+\left[ C^{\xi}\left( t\right) ,\bar{u}\left( s,\operatorname{ad}_{C^{\xi}\left( t\right) }\right) B\right] _{\otimes}\right) \label{e.4.38} \end{align} where \begin{align*} \bar{C}^{\xi}\left( t\right) & :=C^{\xi}\left( t\right) -\xi\left( t\right) \text{ and }\\ \bar{u}\left( s,z\right) & :=u\left( s,z\right) -u\left( s,0\right) =u\left( s,z\right) -1. \end{align*} Moreover for $k\geq2,$ the following estimates hold; \begin{align} \max_{0\leq t\leq2}\left\vert \bar{C}_{k}^{\xi}\left( t\right) \right\vert & \lesssim N\left( A\right) N\left( B\right) \left( N\left( A\right) +N\left( B\right) \right) ^{k-2}\text{ and}\label{e.4.39}\\ \max_{0\leq t\leq2}\max_{0\leq s\leq1}\left\vert \left[ \bar{u}\left( s,\operatorname{ad}_{C^{\xi}\left( t\right) }\right) B\right] _{k}\right\vert & \lesssim N\left( A\right) N\left( B\right) \left( N\left( A\right) +N\left( B\right) \right) ^{k-2}. \label{e.4.40} \end{align}
\end{proposition}
\begin{proof} From Eq. (\ref{e.4.15}), $C^{\xi}\left( t\right) =\bar{\varphi}\left( t\right) A=\xi\left( t\right) $ when $t\leq1$ and therefore \begin{align*} \left[ C^{\xi}\left( t\right) ,u\left( s,\operatorname{ad}_{C^{\xi}\left( t\right) }\right) \dot{\xi}\left( t\right) \right] _{\otimes} & =\bar{\varphi}\left( t\right) \varphi\left( t\right) \left[ A,u\left( s,\operatorname{ad}_{\bar{\varphi}\left( t\right) A}\right) A\right] _{\otimes}\\ & =\bar{\varphi}\left( t\right) \varphi\left( t\right) \left[ A,u\left( s,0\right) A\right] _{\otimes}=0, \end{align*} which proves Eq. (\ref{e.4.38}) for $t\leq1.$ When $t\geq1,$ $\xi\left( t\right) =A+\bar{\varphi}\left( t-1\right) B,$ $\dot{\xi}\left( t\right) =\varphi\left( t-1\right) B,$ and \[ u\left( s,\operatorname{ad}_{C^{\xi}\left( t\right) }\right) \dot{\xi }\left( t\right) =\varphi\left( t-1\right) u\left( s,\operatorname{ad} _{C^{\xi}\left( t\right) }\right) B=\varphi\left( t-1\right) \left[ B+\bar{u}\left( s,\operatorname{ad}_{C^{\xi}\left( t\right) }\right) B\right] \] and hence \begin{align*} & \left[ C^{\xi}\left( t\right) ,u\left( s,\operatorname{ad}_{C^{\xi }\left( t\right) }\right) \dot{\xi}\left( t\right) \right] _{\otimes}\\ & \quad=\left[ C^{\xi}\left( t\right) ,\dot{\xi}\left( t\right) +\bar {u}\left( s,\operatorname{ad}_{C^{\xi}\left( t\right) }\right) \dot{\xi }\left( t\right) \right] _{\otimes}\\ & \quad=\varphi\left( t-1\right) \left( \left[ A+\bar{\varphi}\left( t-1\right) B+\bar{C}^{\xi}\left( t\right) ,B\right] _{\otimes}+\left[ C^{\xi}\left( t\right) ,\bar{u}\left( s,\operatorname{ad}_{C^{\xi}\left( t\right) }\right) B\right] _{\otimes}\right) \end{align*} which easily gives Eq. (\ref{e.4.38}) for $t\geq1.$
By Corollary \ref{cor.3.10}, \[ \dot{C}^{\xi}\left( t\right) =\frac{1}{\psi\left( -\operatorname{ad} _{C\left( t\right) }\right) }\dot{\xi}\left( t\right) =\dot{\xi}\left( t\right) +g\left( \operatorname{ad}_{C\left( t\right) }\right) \dot{\xi }\left( t\right) \] wherein for the last equality we used $1/\psi\left( 0\right) =1$ and have set \[ g\left( z\right) :=\frac{1}{\psi\left( -z\right) }-\frac{1}{\psi\left( 0\right) }=\frac{1}{\psi\left( -z\right) }-1. \] Thus it follows that \[ \bar{C}^{\xi}\left( t\right) =\int_{0}^{t}g\left( \operatorname{ad} _{C^{\xi}\left( \tau\right) }\right) \dot{\xi}\left( \tau\right) d\tau=\int_{0}^{t}\varphi\left( \tau-1\right) g\left( \operatorname{ad} _{C^{\xi}\left( \tau\right) }\right) Bd\tau, \] wherein the $\varphi\left( \tau\right) A$ term does not contribute because, for $\tau$ in the support of $\varphi,$ $C^{\xi}\left( \tau\right) =\bar{\varphi}\left( \tau\right) A$ so that $\operatorname{ad}_{C^{\xi}\left( \tau\right) }A=0$ and hence $g\left( \operatorname{ad}_{C^{\xi}\left( \tau\right) }\right) A=g\left( 0\right) A=0.$ By Lemma \ref{lem.4.14}, \[ \max_{1\leq\tau\leq2}\left\vert \left[ g\left( \operatorname{ad}_{C^{\xi }\left( \tau\right) }\right) B\right] _{k}\right\vert \leq K\left( g\right) N\left( A\right) N\left( B\right) \left( N\left( A\right) +N\left( B\right) \right) ^{k-2} \] and so it now easily follows that \[ \left\vert \bar{C}_{k}^{\xi}\left( t\right) \right\vert \leq K\left( g\right) N\left( A\right) N\left( B\right) \left( N\left( A\right) +N\left( B\right) \right) ^{k-2}\text{ for all }0\leq t\leq2. \] By another application of Lemma \ref{lem.4.14}, \[ \max_{0\leq t\leq2}\left\vert \left[ \bar{u}\left( s,\operatorname{ad} _{C^{\xi}\left( t\right) }\right) B\right] _{k}\right\vert \leq K\left( \bar{u}\left( s,\cdot\right) \right) N\left( A\right) N\left( B\right) \left( N\left( A\right) +N\left( B\right) \right) ^{k-2} \] where $K\left( \bar{u}\left( s,\cdot\right) \right) $ is bounded in $s\in\left[ 0,1\right] $ as the derivatives of $\bar{u}\left( s,z\right) $ at $z=0$ are polynomial functions in $s.$ These last two inequalities verify Eqs. (\ref{e.4.39}) and (\ref{e.4.40}) and hence complete the proof. \end{proof}
\begin{corollary} \label{cor.4.16}If $A,B\in F^{\left( \kappa\right) }\left( \mathbb{R} ^{d}\right) ,$ then \begin{equation} d_{M}\left( e^{V_{B}},Id_{M}\right) \leq\left\vert V^{\left( \kappa\right) }\right\vert \left\vert B\right\vert \leq\left\vert V^{\left( \kappa\right) }\right\vert Q_{\left[ 1,\kappa\right] }\left( N\left( B\right) \right) \label{e.4.41} \end{equation} and there exists $C\left( \kappa\right) <\infty$ such that \begin{equation} d_{M}\left( e^{V_{B}}\circ e^{V_{A}},e^{V_{\log\left( e^{A}e^{B}\right) } }\right) \lesssim\mathcal{K}_{0}N\left( A\right) N\left( B\right) Q_{\left[ \kappa-1,2\kappa-2\right] }\left( N\left( A\right) +N\left( B\right) \right) \label{e.4.42} \end{equation} where \begin{align} \mathcal{K}_{0} & :=\mathcal{C}^{0}\left( V^{\left( \kappa\right) }\right) e^{C\left( \kappa\right) \left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}Q_{\left[ 1,\kappa\right] }\left( N\left( A\right) +N\left( B\right) \right) }\label{e.4.43}\\ & \leq2\left\vert V^{\left( \kappa\right) }\right\vert _{M}\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}e^{c\left( \kappa\right) \left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}Q_{\left[ 1,\kappa\right] }\left( N\left( A\right) +N\left( B\right) \right) }. \label{e.4.44} \end{align}
\end{corollary}
\begin{proof} The first inequality follows as an application of Corollary \ref{cor.2.31} with $Y_{t}:=V_{B}$ using \[ \left\vert Y\right\vert _{1}^{\ast}=\left\vert V_{B}\right\vert _{M} \leq\left\vert V^{\left( \kappa\right) }\right\vert \left\vert B\right\vert . \] To prove the second inequality we let $\xi\left( t\right) $ be as in Proposition \ref{pro.4.15}. By Eq. (\ref{e.4.28}) in the proof of Theorem \ref{thm.4.12} we then find, \begin{align} d_{M}\left( e^{V_{B}}\circ e^{V_{A}},e^{V_{\log\left( e^{A}e^{B}\right) } }\right) & =d_{M}\left( \mu_{2,0}^{V_{\dot{\xi}}},e^{V_{C\left( 2\right) }}\right) \nonumber\\ & \leq e^{\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert \dot{\xi}\right\vert _{2}^{\ast}}\cdot\left\vert U^{\xi}\right\vert _{2}^{\ast}=e^{\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left( \left\vert A\right\vert +\left\vert B\right\vert \right) }\cdot\left\vert U^{\xi}\right\vert _{2}^{\ast}, \label{e.4.45} \end{align} where, by Eq. (\ref{e.4.26}) of the proof of Theorem \ref{thm.4.11}, \begin{equation} \left\vert U^{\xi}\right\vert _{2}^{\ast}\leq e^{\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert C\right\vert _{\infty,2}}\int _{0}^{1}ds\left( 1-s\right) \int_{0}^{2}dt\left\vert V_{\pi_{>\kappa}\left[ C\left( t\right) ,u\left( s,\operatorname{ad}_{C\left( t\right) }\right) \dot{\xi}\left( t\right) \right] _{\otimes}}\right\vert _{M}. \label{e.4.46} \end{equation}
From Proposition \ref{pro.4.15}, \begin{align} & \left\vert V_{\left( \left[ C\left( t\right) ,u\left( s,\operatorname{ad}_{C\left( t\right) }\right) \dot{\xi}\left( t\right) \right] _{\otimes}\right) _{k}}\right\vert _{M}\nonumber\\ & \leq\varphi\left( t-1\right) \left( \begin{array} [c]{c} \left\vert V_{\left( \left[ A,B\right] _{\otimes}\right) _{k}}\right\vert _{M}+\left\vert V_{\left( \left[ \bar{C}^{\xi}\left( t\right) ,B\right] _{\otimes}\right) _{k}}\right\vert _{M}\\ +\left\vert V_{\left( \left[ C^{\xi}\left( t\right) ,\bar{u}\left( s,\operatorname{ad}_{C^{\xi}\left( t\right) }\right) B\right] _{\otimes }\right) _{k}}\right\vert _{M} \end{array} \right) . \label{e.4.47} \end{align} We now estimate each of the three terms appearing on the right side of Eq. (\ref{e.4.47}).
\begin{enumerate} \item Since, for $m,n\in\left[ 1,\kappa\right] $ with $m+n=k,$ \[ \left\vert A_{m}\right\vert \left\vert B_{n}\right\vert \leq N\left( A\right) ^{m}N\left( B\right) ^{n}\leq N\left( A\right) N\left( B\right) \left( N\left( A\right) +N\left( B\right) \right) ^{k-2}, \] we find \begin{align*} \left\vert V_{\left( \left[ A,B\right] _{\otimes}\right) _{k}}\right\vert _{M} & \leq\sum_{m,n=1}^{\kappa}1_{m+n=k}\left\vert \left[ V_{A_{m} },V_{B_{n}}\right] \right\vert \\ & \leq\sum_{m,n=1}^{\kappa}1_{m+n=k}\mathcal{C}_{m,n}^{0}\left( V^{\left( \kappa\right) }\right) \left\vert A_{m}\right\vert \left\vert B_{n} \right\vert \\ & \leq\mathcal{C}^{0}\left( V^{\left( \kappa\right) }\right) N\left( A\right) N\left( B\right) \left( N\left( A\right) +N\left( B\right) \right) ^{k-2}. \end{align*}
\item Using Eq. (\ref{e.4.39}) and (by definition) $\bar{C}_{1}^{\xi}=0,$ it follows that \begin{align*} & \left\vert V_{\left( \left[ \bar{C}^{\xi}\left( t\right) ,B\right] _{\otimes}\right) _{k}}\right\vert _{M}\\ & \quad\leq\sum_{m,n=1}^{\kappa}1_{m+n=k}\mathcal{C}_{m,n}^{0}\left( V^{\left( \kappa\right) }\right) \left\vert \bar{C}_{m}^{\xi}\left( t\right) \right\vert \left\vert B_{n}\right\vert \\ & \quad\lesssim\sum_{n=1}^{\kappa}\sum_{m=2}^{\kappa}1_{m+n=k}\mathcal{C} _{m,n}^{0}\left( V^{\left( \kappa\right) }\right) N\left( A\right) N\left( B\right) \left( N\left( A\right) +N\left( B\right) \right) ^{m-2}N\left( B\right) ^{n-1}\\ & \quad\lesssim\mathcal{C}^{0}\left( V^{\left( \kappa\right) }\right) N\left( A\right) N\left( B\right) \left( N\left( A\right) +N\left( B\right) \right) ^{k-2}. \end{align*}
\item Similarly using Eqs. (\ref{e.3.30}) and (\ref{e.4.40}), \begin{align*} & \left\vert V_{\left( \left[ C^{\xi}\left( t\right) ,\bar{u}\left( s,\operatorname{ad}_{C^{\xi}\left( t\right) }\right) B\right] _{\otimes }\right) _{k}}\right\vert _{M}\\ & \quad\leq\sum_{m,n=1}^{\kappa}1_{m+n=k}\mathcal{C}_{m,n}^{0}\left( V^{\left( \kappa\right) }\right) \left\vert C_{m}^{\xi}\left( t\right) \right\vert \left\vert \left( \bar{u}\left( s,\operatorname{ad}_{C^{\xi }\left( t\right) }\right) B\right) _{n}\right\vert \\ & \quad\lesssim\sum_{n=2}^{\kappa}\sum_{m=1}^{\kappa}1_{m+n=k}\mathcal{C} _{m,n}^{0}\left( V^{\left( \kappa\right) }\right) \left\vert C_{m}^{\xi }\left( t\right) \right\vert \cdot N\left( A\right) N\left( B\right) \left( N\left( A\right) +N\left( B\right) \right) ^{n-2}\\ & \quad\lesssim\sum_{n=2}^{\kappa}\sum_{m=1}^{\kappa}1_{m+n=k}\mathcal{C} _{m,n}^{0}\left( V^{\left( \kappa\right) }\right) \cdot N\left( A\right) N\left( B\right) \left( N\left( A\right) +N\left( B\right) \right) ^{m+n-2}\\ & \quad\lesssim\mathcal{C}^{0}\left( V^{\left( \kappa\right) }\right) N\left( A\right) N\left( B\right) \left( N\left( A\right) +N\left( B\right) \right) ^{k-2}. \end{align*}
\end{enumerate}
Combining the last three estimates with Eqs. (\ref{e.4.47}) and (\ref{e.4.46}) shows (with $\mathcal{K}_{0}$ as in Eq. (\ref{e.4.43})), \begin{align} \left\vert U^{\xi}\right\vert _{2}^{\ast} & \leq e^{\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert C\right\vert _{\infty ,2}}\sum_{k=\kappa+1}^{2\kappa}\int_{0}^{1}ds\left( 1-s\right) \int_{0} ^{2}dt\left\vert V_{\left( \left[ C\left( t\right) ,u\left( s,\operatorname{ad}_{C\left( t\right) }\right) \dot{\xi}\left( t\right) \right] _{\otimes}\right) _{k}}\right\vert _{M}\nonumber\\ & \lesssim e^{\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert C\right\vert _{\infty,2}}\sum_{k=\kappa+1}^{2\kappa} \mathcal{C}^{0}\left( V^{\left( \kappa\right) }\right) N\left( A\right) N\left( B\right) \left( N\left( A\right) +N\left( B\right) \right) ^{k-2}\nonumber\\ & \lesssim\mathcal{C}^{0}\left( V^{\left( \kappa\right) }\right) e^{\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert C\right\vert _{\infty,2}}N\left( A\right) N\left( B\right) Q_{\left[ \kappa-1,2\kappa-2\right] }\left( N\left( A\right) +N\left( B\right) \right) \nonumber\\ & \lesssim\mathcal{K}_{0}N\left( A\right) N\left( B\right) Q_{\left[ \kappa-1,2\kappa-2\right] }\left( N\left( A\right) +N\left( B\right) \right) , \label{e.4.48} \end{align} where in the last inequality we have also used the estimate in Eq. (\ref{e.3.31}). This estimate combined with Eq. (\ref{e.4.45}), while using Eqs. (\ref{e.3.20}) and (\ref{e.3.19}) in order to show $\left\vert A\right\vert +\left\vert B\right\vert \lesssim Q_{\left[ 1,\kappa\right] }\left( N\left( A\right) +N\left( B\right) \right) ,$ completes the proof. \end{proof}
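For orientation, the following remark records the size of the right side of Eq. (\ref{e.4.42}) for small inputs. It is purely illustrative and assumes only that, as the subscript notation used throughout suggests, $Q_{\left[ a,b\right] }$ denotes a polynomial whose monomials have degrees lying in $\left[ a,b\right] .$

\begin{remark}
Under this reading of $Q_{\left[ \kappa-1,2\kappa-2\right] },$ Eq. (\ref{e.4.42}) implies
\[
d_{M}\left( e^{V_{B}}\circ e^{V_{A}},e^{V_{\log\left( e^{A}e^{B}\right) }}\right) =O\left( \left( N\left( A\right) +N\left( B\right) \right) ^{\kappa+1}\right) \text{ as }N\left( A\right) +N\left( B\right) \downarrow0,
\]
since $N\left( A\right) N\left( B\right) \leq\left( N\left( A\right) +N\left( B\right) \right) ^{2},$ the lowest degree appearing in $Q_{\left[ \kappa-1,2\kappa-2\right] }$ is $\kappa-1,$ and $\mathcal{K}_{0}$ remains bounded in this limit. In other words, the composition of flows $e^{V_{B}}\circ e^{V_{A}}$ agrees with $e^{V_{\log\left( e^{A}e^{B}\right) }}$ to order $\kappa+1$ in the size of $A$ and $B.$
\end{remark}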
This completes Part I of the paper. The second part of the paper is devoted to developing estimates for the distance between the differentials of $\mu_{T,0}^{V_{\dot{\xi}}}$ and $e^{V_{\log\left( g^{\xi}\left( T\right) \right) }}.$ In order to formulate our results we must first define a distance between $f_{\ast}$ and $g_{\ast}$ for $f,g\in C^{1}\left( M,M\right) .$ To do so we will use the metric on $M$ to endow $TM$ with a Riemannian metric and then make use of this metric to construct the desired distance. It will also be necessary to develop some of the basic properties of the induced distance function on $TM,$ which is the topic of the next section.
\section{Riemannian Distances on $TM$\label{sec.5}}
\subsection{Riemannian distances on vector bundles\label{sec.5.1}}
For clarity of exposition (and since it is no harder), it is convenient to carry out these constructions in the more general context of an arbitrary Hermitian vector bundle, $\pi:E\rightarrow M,$ with metric compatible covariant derivative, $\nabla,$ as in Notation \ref{not.2.2}. Later we will specialize to the case of interest where $E=TM.$
\begin{definition} [Riemannian metric on $E$]\label{def.5.1}Continuing the setup in Notation \ref{not.2.2}, we define a Riemannian metric on $TE$ by defining \[ \left\langle \dot{\xi}\left( 0\right) ,\dot{\eta}\left( 0\right) \right\rangle _{TE}:=\left\langle \pi_{\ast}\dot{\xi}\left( 0\right) ,\pi_{\ast}\dot{\eta}\left( 0\right) \right\rangle _{g}+\left\langle \frac{\nabla\xi}{dt}\left( 0\right) ,\frac{\nabla\eta}{dt}\left( 0\right) \right\rangle _{E} \] whenever $\xi\left( t\right) $ and $\eta\left( t\right) $ are two smooth curves in $E$ such that $\pi\left( \xi\left( 0\right) \right) =\pi\left( \eta\left( 0\right) \right) .$ \end{definition}
\begin{remark} \label{rem.5.2}Let $\sigma\left( t\right) $ and $\gamma\left( t\right) $ be two smooth paths in $M$ so that $\sigma\left( 0\right) =m=\gamma\left( 0\right) $ and suppose that $\alpha\left( t\right) $ and $\beta\left( t\right) $ are two smooth paths in $\mathbb{R}^{D}.$ In the local model described in Remark \ref{rem.2.3}, set $\xi\left( t\right) :=\left( \sigma\left( t\right) ,\alpha\left( t\right) \right) $ and $\eta\left( t\right) :=\left( \gamma\left( t\right) ,\beta\left( t\right) \right) .$ Then we have, \begin{align*} \pi_{\ast}\dot{\xi}\left( 0\right) & =\pi_{\ast}\left( \dot{\sigma }\left( 0\right) ,\dot{\alpha}\left( 0\right) _{\alpha\left( 0\right) }\right) =\dot{\sigma}\left( 0\right) ,\\ \pi_{\ast}\dot{\eta}\left( 0\right) & =\pi_{\ast}\left( \dot{\gamma }\left( 0\right) ,\dot{\beta}\left( 0\right) _{\beta\left( 0\right) }\right) =\dot{\gamma}\left( 0\right) ,\\ \frac{\nabla\xi}{dt}\left( 0\right) & =\left( m,\dot{\alpha}\left( 0\right) +\Gamma\left( \dot{\sigma}\left( 0\right) \right) \alpha\left( 0\right) \right) ,\\ \frac{\nabla\eta}{dt}\left( 0\right) & =\left( m,\dot{\beta}\left( 0\right) +\Gamma\left( \dot{\gamma}\left( 0\right) \right) \beta\left( 0\right) \right) ,\text{ and}\\ \left\langle \dot{\xi}\left( 0\right) ,\dot{\eta}\left( 0\right) \right\rangle _{TE}= & \left\langle \dot{\sigma}\left( 0\right) ,\dot{\gamma}\left( 0\right) \right\rangle _{g}\\ & +\left( \dot{\alpha}\left( 0\right) +\Gamma\left( \dot{\sigma}\left( 0\right) \right) \alpha\left( 0\right) \right) \cdot\left( \dot{\beta }\left( 0\right) +\Gamma\left( \dot{\gamma}\left( 0\right) \right) \beta\left( 0\right) \right) . \end{align*} From this expression we see that $\left\langle \cdot,\cdot\right\rangle _{TE}$ is indeed a Riemannian metric on $E.$ For example, $\left\vert \dot{\xi }\left( 0\right) \right\vert _{TE}^{2}=0$ implies \[ 0=\left\vert \dot{\sigma}\left( 0\right) \right\vert _{g}^{2}+\left\vert \dot{\alpha}\left( 0\right) +\Gamma\left( \dot{\sigma}\left( 0\right) \right) \alpha\left( 0\right) \right\vert _{\mathbb{R}^{D}}^{2} \] from which it follows that $\dot{\sigma}\left( 0\right) =0$ and then $\left\vert \dot{\alpha}\left( 0\right) \right\vert _{\mathbb{R}^{D}}^{2}=0$ so that $\dot{\alpha}\left( 0\right) =0,$ i.e. $\dot{\xi}\left( 0\right) =0\in T_{\xi\left( 0\right) }E.$ \end{remark}
\begin{definition} \label{def.5.3}As usual, the length of a smooth path, $t\rightarrow\xi\left( t\right) \in E,$ is defined by \[ \ell_{E}\left( \xi\right) =\int_{0}^{1}\left\vert \dot{\xi}\left( t\right) \right\vert dt=\int_{0}^{1}\sqrt{\left\vert \pi_{\ast}\dot{\xi}\left( t\right) \right\vert ^{2}+\left\vert \frac{\nabla\xi\left( t\right) } {dt}\right\vert _{E}^{2}}dt \] and the distance, $d^{E},$ is then the distance associated to this length. \end{definition}
Our first goal is to give a more practical way (see Eq. (\ref{e.5.7}) of Corollary \ref{cor.5.8} below) of computing $d^{E}\left( e,e^{\prime}\right) $ for $e,e^{\prime}\in E.$
\begin{notation} \label{not.5.4}Given a path $\sigma:\left[ 0,1\right] \rightarrow M$, let \[ L_{\sigma}\left( e,e^{\prime}\right) :=\sqrt{\ell_{M}\left( \sigma\right) ^{2}+\left\vert \pt_{1}\left( \sigma\right) ^{-1}e^{\prime}-e\right\vert ^{2}}\text{ }\forall~e\in E_{\sigma\left( 0\right) }\text{ and }e^{\prime }\in E_{\sigma\left( 1\right) } \] with the convention that $L_{\sigma}\left( e,e^{\prime}\right) =\infty$ if $\sigma$ is not absolutely continuous. \end{notation}
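Before developing the general theory, the following minimal example (stated in the flat trivial case only, and not used elsewhere) may help fix the meaning of $L_{\sigma};$ the formula for $d^{E}$ in it anticipates Corollary \ref{cor.5.8} below.

\begin{example}
Let $M=\mathbb{R}^{n}$ with its standard metric and let $E=M\times\mathbb{R}^{N}$ be the trivial bundle with the flat covariant derivative, so that $\pt_{1}\left( \sigma\right) $ is the identity for every path $\sigma.$ If $e=\left( m,w\right) $ and $e^{\prime}=\left( p,w^{\prime}\right) ,$ then for every $\sigma\in AC\left( \left[ 0,1\right] ,M\right) $ from $m$ to $p,$
\[
L_{\sigma}\left( e,e^{\prime}\right) =\sqrt{\ell_{M}\left( \sigma\right) ^{2}+\left\vert w^{\prime}-w\right\vert ^{2}},
\]
which is minimized by the straight line segment from $m$ to $p,$ and hence (by Corollary \ref{cor.5.8} below)
\[
d^{E}\left( e,e^{\prime}\right) =\sqrt{\left\vert p-m\right\vert ^{2}+\left\vert w^{\prime}-w\right\vert ^{2}}.
\]
\end{example}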
\begin{theorem} \label{thm.5.6}If $\sigma\in AC\left( \left[ 0,1\right] ,M\right) ,$ $\xi\in AC_{\sigma}\left( \left[ 0,1\right] ,E\right) $ (see Notation \ref{not.5.7} below), and \[ s\left( t\right) :=\int_{0}^{t}\left\vert \dot{\sigma}\left( \tau\right)
\right\vert d\tau\text{ -- arc-length of }\sigma|_{\left[ 0,t\right] }, \] then \begin{align} L_{\sigma}\left( \xi\left( 0\right) ,\xi\left( 1\right) \right) \leq & \sqrt{\ell_{M}\left( \sigma\right) ^{2}+\left[ \int_{0}^{1}\left\vert \nabla_{t}\xi\left( t\right) \right\vert dt\right] ^{2}}\nonumber\\ & \qquad\qquad\leq\ell_{E}\left( \xi\right) \leq\int_{0}^{1}\left[ \left\vert \dot{\sigma}\left( t\right) \right\vert +\left\vert \nabla_{t} \xi\left( t\right) \right\vert \right] dt \label{e.5.3} \end{align} and moreover \begin{equation} L_{\sigma}\left( \xi\left( 0\right) ,\xi\left( 1\right) \right) =\sqrt{\ell_{M}\left( \sigma\right) ^{2}+\left[ \int_{0}^{1}\left\vert \nabla_{t}\xi\left( t\right) \right\vert dt\right] ^{2}}=\ell_{E}\left( \xi\right) \label{e.5.4} \end{equation} when \footnote{If $0=s\left( 1\right) =\ell\left( \sigma\right) ,$ then necessarily $\sigma\left( 0\right) =\sigma\left( 1\right) .$} \begin{align} \xi\left( t\right) & =\xi\left( 0\right) +t\left( \xi\left( 1\right) -\xi\left( 0\right) \right) \text{ if }s\left( 1\right) =0\text{ or}\label{e.5.5}\\ \xi\left( t\right) & =\pt_{t}\left( \sigma\right) \left[ \xi\left( 0\right) +\frac{s\left( t\right) }{s\left( 1\right) }\left( \pt_{1}\left( \sigma\right) ^{-1}\xi\left( 1\right) -\xi\left( 0\right) \right) \right] \text{ if }s\left( 1\right) >0. \label{e.5.6} \end{align}
\end{theorem}
\begin{proof} If we let $w\left( t\right) :=\pt_{t}\left( \sigma\right) ^{-1}\xi\left( t\right) \in E_{\sigma\left( 0\right) },$ then $\left\vert \nabla_{t} \xi\left( t\right) \right\vert =\left\vert \dot{w}\left( t\right) \right\vert $ and so \[ \int_{0}^{1}\left\vert \nabla_{t}\xi\left( t\right) \right\vert dt=\int _{0}^{1}\left\vert \dot{w}\left( t\right) \right\vert dt\geq\left\vert w\left( 1\right) -w\left( 0\right) \right\vert =\left\vert \pt_{1}\left( \sigma\right) ^{-1}\xi\left( 1\right) -\xi\left( 0\right) \right\vert , \] wherein we have used that the length of $w$ is greater than or equal to $\left\vert w\left( 1\right) -w\left( 0\right) \right\vert .$ The last inequality is equivalent to the first inequality in Eq. (\ref{e.5.3}).
If we let \[ u\left( t\right) :=\int_{0}^{t}\left\vert \nabla_{\tau}\xi\left( \tau\right) \right\vert d\tau, \] then $t\rightarrow\left( s\left( t\right) ,u\left( t\right) \right) \in\mathbb{R}^{2}$ is an absolutely continuous path in $\mathbb{R}^{2}$ from $\left( 0,0\right) $ to $\left( s\left( 1\right) ,u\left( 1\right) \right) $ and so the length of this path, \[ \int_{0}^{1}\sqrt{\dot{s}\left( t\right) ^{2}+\dot{u}\left( t\right) ^{2} }dt=\int_{0}^{1}\sqrt{\left\vert \dot{\sigma}\left( t\right) \right\vert ^{2}+\left\vert \nabla_{t}\xi\left( t\right) \right\vert ^{2}}dt=\ell _{E}\left( \xi\right) , \] is greater than or equal to \[ \left\Vert \left( s\left( 1\right) ,u\left( 1\right) \right) \right\Vert _{\mathbb{R}^{2}}=\sqrt{\ell_{M}\left( \sigma\right) ^{2}+\left[ \int _{0}^{1}\left\vert \nabla_{t}\xi\left( t\right) \right\vert dt\right] ^{2} }. \] This proves the second inequality in Eq. (\ref{e.5.3}). To prove the last inequality in Eq. (\ref{e.5.3}) simply observe (see Eq. (\ref{e.3.22}) with $p=2)$ that \[ \sqrt{\left\vert \dot{\sigma}\left( t\right) \right\vert ^{2}+\left\vert \nabla_{t}\xi\left( t\right) \right\vert ^{2}}\leq\left\vert \dot{\sigma }\left( t\right) \right\vert +\left\vert \nabla_{t}\xi\left( t\right) \right\vert . \]
If $s\left( 1\right) >0$ and $\xi$ is given as in Eq. (\ref{e.5.6}), then \[ \left\vert \nabla_{t}\xi\left( t\right) \right\vert =\left\vert \pt_{t}\left( \sigma\right) \frac{\dot{s}\left( t\right) }{s\left( 1\right) }\left( \pt_{1}\left( \sigma\right) ^{-1}\xi\left( 1\right) -\xi\left( 0\right) \right) \right\vert =\frac{\left\vert \dot{\sigma}\left( t\right) \right\vert }{\ell_{M}\left( \sigma\right) }\left\vert \pt_{1}\left( \sigma\right) ^{-1}\xi\left( 1\right) -\xi\left( 0\right) \right\vert \] and hence \begin{align*} \ell_{E}\left( \xi\right) & =\int_{0}^{1}\sqrt{\left\vert \dot{\sigma }\left( t\right) \right\vert ^{2}+\frac{\left\vert \dot{\sigma}\left( t\right) \right\vert ^{2}}{s^{2}\left( 1\right) }\left\vert \pt_{1}\left( \sigma\right) ^{-1}\xi\left( 1\right) -\xi\left( 0\right) \right\vert ^{2}}dt\\ & =\int_{0}^{1}\frac{\left\vert \dot{\sigma}\left( t\right) \right\vert }{\ell_{M}\left( \sigma\right) }\sqrt{\ell_{M}^{2}\left( \sigma\right) +\left\vert \pt_{1}\left( \sigma\right) ^{-1}\xi\left( 1\right) -\xi\left( 0\right) \right\vert ^{2}}dt=L_{\sigma}\left( \xi\left( 0\right) ,\xi\left( 1\right) \right) \end{align*} which verifies Eq. (\ref{e.5.4}) in this case. Similarly by a simple calculation, Eq. (\ref{e.5.4}) holds when $\ell_{M}\left( \sigma\right) =s\left( 1\right) =0$ and $\xi$ is given as in Eq. (\ref{e.5.5}). \end{proof}
\begin{notation} \label{not.5.7}To each $\sigma\in AC\left( \left[ 0,1\right] ,M\right) ,$ let $AC_{\sigma}\left( \left[ 0,1\right] ,E\right) $ denote those $\xi\in AC\left( \left[ 0,1\right] ,E\right) $ such that $\xi\left( t\right) \in E_{\sigma\left( t\right) }$ for all $0\leq t\leq1.$ \end{notation}
\begin{corollary} \label{cor.5.8}If $e_{m}\in E_{m},$ and $e_{p}^{\prime}\in E_{p},$ then \begin{equation} d^{E}\left( e_{m},e_{p}^{\prime}\right) =\inf\left\{ L_{\sigma}\left( e_{m},e_{p}^{\prime}\right) :\sigma\in AC\left( \left[ 0,1\right] ,M\right) ,\text{ }\sigma\left( 0\right) =m\text{~\& }\sigma\left( 1\right) =p\right\} \label{e.5.7} \end{equation} where for $\sigma\in AC\left( \left[ 0,1\right] ,M\right) $ with $\sigma\left( 0\right) =m$ and $\sigma\left( 1\right) =p,$ \begin{equation} L_{\sigma}\left( e_{m},e_{p}^{\prime}\right) =\min\left\{ \ell_{E}\left( \xi\right) :\xi\in AC_{\sigma}\left( \left[ 0,1\right] ,E\right) ,\text{ }\xi\left( 0\right) =e_{m},\text{\& }\xi\left( 1\right) =e_{p}^{\prime }\right\} . \label{e.5.8} \end{equation}
\end{corollary}
\begin{proof} The first equation is an easy consequence of the second. For the second equation, if $\xi\in AC_{\sigma}\left( \left[ 0,1\right] ,E\right) $ with $\xi\left( 0\right) =e_{m},$ and $\xi\left( 1\right) =e_{p}^{\prime},$ then by Theorem \ref{thm.5.6}, $L_{\sigma}\left( e_{m},e_{p}^{\prime}\right) \leq\ell_{E}\left( \xi\right) $ with equality occurring when $\xi$ is given by Eq. (\ref{e.5.5}) if $\ell_{M}\left( \sigma\right) =0$ or by Eq. (\ref{e.5.6}) if $\ell_{M}\left( \sigma\right) >0.$ \end{proof}
\begin{remark} \label{rem.5.9}One might suspect that if $e,e^{\prime}\in E_{m},$ then $d^{E}\left( e,e^{\prime}\right) =\left\vert e-e^{\prime}\right\vert .$ However this is not necessarily the case unless the holonomy group of $\nabla^{E}$ at $m$ is trivial (which in particular forces the curvature of $\nabla^{E}$ to vanish). For example, if $\left\vert e\right\vert =\left\vert e^{\prime}\right\vert =1,$ there may be a very short loop, $\sigma,$ starting and ending at $m,$ so that $\pt_{1}\left( \sigma\right) ^{-1}e^{\prime}=e$ in which case it would follow that $d^{E}\left( e,e^{\prime}\right) \leq \ell_{M}\left( \sigma\right) $ which can easily be smaller than $\left\vert e-e^{\prime}\right\vert $ which could be as large as $\sqrt{2}.$ If $\sigma$ is the constant loop sitting at $m,$ then $L_{\sigma}\left( e,e^{\prime }\right) =\left\vert e^{\prime}-e\right\vert $ and hence $d^{E}\left( e,e^{\prime}\right) \leq\left\vert e^{\prime}-e\right\vert $ whenever $e,e^{\prime}\in E_{m}$ for some $m\in M.$ \end{remark}
\begin{proposition} [$\left\vert \cdot\right\vert _{E}$ is Lipschitz]\label{pro.5.10}If $e,e^{\prime}\in E,$ then \begin{equation} \left\vert \left\vert e\right\vert _{E}-\left\vert e^{\prime}\right\vert _{E}\right\vert \leq d^{E}\left( e,e^{\prime}\right) , \label{e.5.9} \end{equation} i.e. the fiber metric on $E,$ $\left\vert \cdot\right\vert _{E},$ is $1$-Lipschitz relative to $d^{E}.$ \end{proposition}
\begin{proof} Let $e_{m}$ and $e_{p}^{\prime}$ be in $E$ and let $\sigma$ be an absolutely continuous path joining $m$ to $p.$ Then by Lemma \ref{lem.2.4}, \[ \left\vert \left\vert e_{m}\right\vert _{E}-\left\vert e_{p}^{\prime }\right\vert _{E}\right\vert \leq\left\vert e_{m}-\pt_{1}\left( \sigma\right) ^{-1}e_{p}^{\prime}\right\vert _{E_{m}}\leq L_{\sigma}\left( e_{m},e_{p}^{\prime}\right) \] and therefore by Corollary \ref{cor.5.8}, \[ \left\vert \left\vert e_{m}\right\vert _{E}-\left\vert e_{p}^{\prime }\right\vert _{E}\right\vert \leq\inf_{\sigma}L_{\sigma}\left( e_{m} ,e_{p}^{\prime}\right) =d^{E}\left( e_{m},e_{p}^{\prime}\right) , \] where the infimum is over all paths, $\sigma,$ joining $m$ to $p.$ \end{proof}
\begin{proposition} [Completeness of $E$]\label{pro.5.11}If $\left( M,g\right) $ is a complete Riemannian manifold then the vector bundle, $E,$ with the Riemannian structure in Definition \ref{def.5.1} is again a complete Riemannian manifold. \end{proposition}
\begin{proof} Let $\pi:E\rightarrow M$ be the natural projection map and observe that \[ \left\vert \pi_{\ast}\dot{\xi}\left( 0\right) \right\vert _{M}\leq\left\vert \dot{\xi}\left( 0\right) \right\vert _{E}\text{ }\forall~\dot{\xi}\left( 0\right) \in TE. \] If $e_{0},e_{1}\in E$ and $e\left( \cdot\right) \in AC\left( \left[ 0,1\right] ,E\right) $ is a path joining $e_{0}$ to $e_{1},$ then $\pi\circ e\in AC\left( \left[ 0,1\right] ,M\right) $ is a path joining $\pi\left( e_{0}\right) $ to $\pi\left( e_{1}\right) $ and \begin{align*} d_{M}\left( \pi\left( e_{0}\right) ,\pi\left( e_{1}\right) \right) & \leq\ell_{M}\left( \pi\circ e\right) =\int_{0}^{1}\left\vert \pi_{\ast} \dot{e}\left( t\right) \right\vert _{M}dt\\ & \leq\int_{0}^{1}\left\vert \dot{e}\left( t\right) \right\vert _{E} dt=\ell_{E}\left( e\right) . \end{align*} Minimizing this inequality over $e$ as described above shows \[ d_{M}\left( \pi\left( e_{0}\right) ,\pi\left( e_{1}\right) \right) \leq d^{E}\left( e_{0},e_{1}\right) . \] Hence if $\left\{ e_{n}\right\} _{n=1}^{\infty}$ is a Cauchy sequence in $E,$ then $\left\{ p_{n}=\pi\left( e_{n}\right) \right\} _{n=1}^{\infty}$ is a Cauchy sequence in $M.$ As $M$ is complete we know that $p=\lim _{n\rightarrow\infty}p_{n}$ exists in $M.$ Let $W$ be an open neighborhood of $p$ in $M$ over which $E$ is trivial and let $U$ be a local orthonormal frame (as described after Notation \ref{not.2.2}) of $E$ defined over $W$ and, for large enough $n,$ let $v_{n}:=U\left( p_{n}\right) ^{-1}e_{n}\in \mathbb{R}^{N}$ where $N$ is the fiber dimension of $E.$ From Proposition \ref{pro.5.10}, we know $\left\{ \left\vert e_{n}\right\vert _{E}=\left\vert v_{n}\right\vert _{\mathbb{R}^{N}}\right\} _{n=1}^{\infty}$ is a Cauchy sequence in $\mathbb{R}$ and hence bounded; therefore there exists a subsequence, $\left\{ v_{n_{k}}\right\} _{k=1}^{\infty}$ of $\left\{ v_{n}\right\} ,$ so that $v:=\lim_{k\rightarrow\infty}v_{n_{k}}\ $exists in $\mathbb{R}^{N}.$ It then follows that \[ \lim_{k\rightarrow\infty}e_{n_{k}}=\lim_{k\rightarrow\infty}U\left( p_{n_{k} }\right) v_{n_{k}}=U\left( p\right) v\text{ exists.} \] As $\left\{ e_{n}\right\} _{n=1}^{\infty}$ was Cauchy in $E$ and has a convergent subsequence, it follows that $\lim_{n\rightarrow\infty} e_{n}=U\left( p\right) v$ exists in $E$ and hence $E$ is complete. \end{proof}
\begin{theorem} \label{thm.5.12}Let $\pi:E\rightarrow M$ be a vector bundle equipped with a fiber metric and metric compatible covariant derivative as above. If $\lambda\geq0$ and $e_{m},e_{p}^{\prime}\in E,$ then \begin{equation} d^{E}\left( \lambda e_{m},\lambda e_{p}^{\prime}\right) \leq\left( \lambda\vee1\right) d^{E}\left( e_{m},e_{p}^{\prime}\right) . \label{e.5.12} \end{equation}
\end{theorem}
\begin{proof} Let $\sigma$ be a curve joining $m$ to $p.$ Then \begin{align*} d^{E}\left( \lambda e_{m},\lambda e_{p}^{\prime}\right) & \leq L_{\sigma }\left( \lambda e_{m},\lambda e_{p}^{\prime}\right) =\sqrt{\ell_{M}\left( \sigma\right) ^{2}+\left\vert \lambda\pt_{1}\left( \sigma\right) ^{-1}e_{p}^{\prime}-\lambda e_{m}\right\vert ^{2}}\\ & =\sqrt{\ell_{M}\left( \sigma\right) ^{2}+\left\vert \lambda\right\vert ^{2}\left\vert \pt_{1}\left( \sigma\right) ^{-1}e_{p}^{\prime}-e_{m}\right\vert ^{2}}\leq\left( \lambda\vee1\right) L_{\sigma}\left( e_{m},e_{p}^{\prime}\right) \end{align*} and the result now follows from Corollary \ref{cor.5.8} as $\sigma$ was arbitrary. \end{proof}
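The scaling estimate just proved has the following simple consequence, recorded here as an illustrative remark, which identifies $\left( M,d_{M}\right) $ with the zero section of $\left( E,d^{E}\right) ;$ here $0_{m}\in E_{m}$ denotes the zero vector.

\begin{remark}
Taking $\lambda=0$ in Eq. (\ref{e.5.12}), while noting that $L_{\sigma}\left( 0_{m},0_{p}\right) =\ell_{M}\left( \sigma\right) $ so that $d^{E}\left( 0_{m},0_{p}\right) =d_{M}\left( m,p\right) ,$ shows
\[
d_{M}\left( m,p\right) \leq d^{E}\left( e_{m},e_{p}^{\prime}\right) \text{ for all }e_{m}\in E_{m}\text{ and }e_{p}^{\prime}\in E_{p}.
\]
Thus the zero section $m\rightarrow0_{m}$ is an isometric embedding of $\left( M,d_{M}\right) $ into $\left( E,d^{E}\right) ,$ in accordance with the estimate $d_{M}\left( \pi\left( e_{0}\right) ,\pi\left( e_{1}\right) \right) \leq d^{E}\left( e_{0},e_{1}\right) $ appearing in the proof of Proposition \ref{pro.5.11} above.
\end{remark}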
\begin{definition} [Bundle maps]\label{def.5.13}A smooth function, $F:E\rightarrow E,$ is a \textbf{bundle map} provided there exists a smooth map, $f:M\rightarrow M$ such that $F\left( E_{m}\right) \subset E_{f\left( m\right) }$ for all $m\in M$
and $F|_{E_{m}}:E_{m}\rightarrow E_{f\left( m\right) }$ is linear. We will refer to such an $F$ as a \textbf{bundle map covering }$f.$ \end{definition}
We are interested in measuring the distance between two bundle maps, $F,G:E\rightarrow E.$ For such maps we can no longer define $d_{\infty} ^{E}\left( F,G\right) :=\sup_{e\in E}d^{E}\left( Fe,Ge\right) $ since \[ \sup_{\lambda>0}d^{E}\left( F\lambda e,G\lambda e\right) =\infty\text{ if }\left\vert Fe\right\vert \neq\left\vert Ge\right\vert . \] Indeed if $\sigma\in C^{1}\left( \left[ 0,1\right] ,M\right) $ is any path such that $Fe\in E_{\sigma\left( 0\right) }$ and $Ge\in E_{\sigma\left( 1\right) },$ then \begin{align*} L_{\sigma}\left( \lambda Fe,\lambda Ge\right) & =\sqrt{\ell_{M}\left( \sigma\right) ^{2}+\left\vert \pt_{1}\left( \sigma\right) ^{-1}\lambda Ge-\lambda Fe\right\vert ^{2}}\\ & \geq\left\vert \left\vert \pt_{1}\left( \sigma\right) ^{-1}\lambda Ge\right\vert -\left\vert \lambda Fe\right\vert \right\vert =\left\vert \lambda\right\vert \left\vert \left\vert Ge\right\vert -\left\vert Fe\right\vert \right\vert \end{align*} and hence by Corollary \ref{cor.5.8}, \[ d^{E}\left( F\lambda e,G\lambda e\right) \geq\left\vert \lambda\right\vert \left\vert \left\vert Ge\right\vert -\left\vert Fe\right\vert \right\vert \rightarrow\infty\text{ as }\lambda\uparrow\infty. \] On the other hand, as bundle maps are fiber linear they are determined by their values, $\left\{ Fe:e\in E\text{ with }\left\vert e\right\vert =1\right\} .$ With these comments in mind we make the following definition.
\begin{definition} [Bundle map norms and distances]\label{def.5.14}Given a bundle map, $F:E\rightarrow E,$ $m\in M,$ and $\sigma\in C\left( \left[ 0,1\right] ,M\right) ,$ let \begin{align*} \left\vert F\right\vert _{m} & :=\sup_{e\in E_{m}:\left\vert e\right\vert =1}\left\vert Fe\right\vert ,\\ \left\vert F\right\vert _{\sigma} & :=\sup_{t\in\left[ 0,1\right] }\left\vert F\right\vert _{\sigma\left( t\right) },\text{ and}\\ \left\vert F\right\vert _{M} & :=\sup_{m\in M}\left\vert F\right\vert _{m}=\sup_{e\in E:\left\vert e\right\vert =1}\left\vert Fe\right\vert . \end{align*} If $G:E\rightarrow E$ is another bundle map, let \[ d_{\infty}^{E}\left( F,G\right) :=\sup_{e\in E:\left\vert e\right\vert =1}d^{E}\left( Fe,Ge\right) . \]
\end{definition}
\begin{remark} \label{rem.5.15}Let us note that $d_{\infty}^{E}\left( F,G\right) =0$ iff $Fe=Ge$ for all $\left\vert e\right\vert =1$ which suffices to show $F\equiv G$ since both $F$ and $G$ are fiber linear. \end{remark}
\begin{lemma} \label{lem.5.16}If $F,G:E\rightarrow E$ are bundle maps, then \begin{equation} \left\vert \left\vert F\right\vert _{M}-\left\vert G\right\vert _{M} \right\vert \leq d_{\infty}^{E}\left( F,G\right) . \label{e.5.14} \end{equation}
\end{lemma}
\begin{proof} If $e\in E$ with $\left\vert e\right\vert =1,$ then (by Proposition \ref{pro.5.10}) \[ \left\vert \left\vert Fe\right\vert -\left\vert Ge\right\vert \right\vert \leq d^{E}\left( Fe,Ge\right) \leq d_{\infty}^{E}\left( F,G\right) \] and therefore \[ \left\vert Fe\right\vert \leq\left\vert Ge\right\vert +d_{\infty}^{E}\left( F,G\right) \leq\left\vert G\right\vert _{M}+d_{\infty}^{E}\left( F,G\right) . \] As this is true for all $e\in E$ with $\left\vert e\right\vert =1$ we may further conclude that \[ \left\vert F\right\vert _{M}\leq\left\vert G\right\vert _{M}+d_{\infty} ^{E}\left( F,G\right) . \] Reversing the roles of $F$ and $G$ also shows \[ \left\vert G\right\vert _{M}\leq\left\vert F\right\vert _{M}+d_{\infty} ^{E}\left( F,G\right) \] and together the last two displayed equations prove Eq. (\ref{e.5.14}). \end{proof}
The next proposition contains the typical mechanism we will use for estimating $d_{\infty}^{E}\left( F,G\right) .$
\begin{proposition} \label{pro.5.17}If $\left\{ F_{t}\right\} _{0\leq t\leq1}$ is a smoothly varying one parameter family of bundle maps from $E$ to $E$ covering $\left\{ f_{t}\right\} _{0\leq t\leq1}\subset C^{\infty}\left( M,M\right) ,$ then for any $e\in E_{m},$ \begin{equation} d^{E}\left( F_{0}e,F_{1}e\right) \leq\sqrt{\ell_{M}^{2}\left( f_{\left( \cdot\right) }\left( m\right) \right) +\left[ \int_{0}^{1}\left\vert \frac{\nabla F_{t}}{dt}e\right\vert dt\right] ^{2}}, \label{e.5.15} \end{equation} and \begin{align} d_{\infty}^{E}\left( F_{0},F_{1}\right) & \leq\sqrt{\sup_{m\in M}\ell _{M}^{2}\left( f_{\left( \cdot\right) }\left( m\right) \right) +\left[ \int_{0}^{1}\left\vert \frac{\nabla F_{t}}{dt}\right\vert _{M}dt\right] ^{2} }\label{e.5.16}\\ & \leq\sup_{m\in M}\ell_{M}\left( f_{\left( \cdot\right) }\left( m\right) \right) +\int_{0}^{1}\left\vert \frac{\nabla F_{t}}{dt}\right\vert _{M}dt\label{e.5.17}\\ & \leq\int_{0}^{1}\left[ \left\vert \dot{f}_{t}\right\vert _{M}+\left\vert \frac{\nabla F_{t}}{dt}\right\vert _{M}\right] dt. \label{e.5.18} \end{align}
\end{proposition}
\begin{proof} If $e\in E_{m},$ then, by Corollary \ref{cor.5.8} with $\sigma\left( t\right) =f_{t}\left( m\right) ,$ \begin{align} d^{E}\left( F_{0}e,F_{1}e\right) \leq & L_{\sigma}\left( F_{0} e,F_{1}e\right) \nonumber\\ & =\sqrt{\ell_{M}^{2}\left( t\rightarrow f_{t}\left( m\right) \right) +\left\vert \pt_{1}\left( f_{\left( \bullet\right) }\left( m\right) \right) ^{-1}F_{1}e-F_{0}e\right\vert ^{2}}. \label{e.5.19} \end{align} This inequality along with Lemma \ref{lem.2.4} applied with $\xi\left( t\right) =F_{t}e$ then gives Eq. (\ref{e.5.15}), and Eq. (\ref{e.5.15}) along with Eq. (\ref{e.3.22}) (with $p=2$) then shows, \begin{equation} d^{E}\left( F_{0}e,F_{1}e\right) \leq\ell_{M}\left( f_{\left( \cdot\right) }\left( m\right) \right) +\int_{0}^{1}\left\vert \frac{\nabla F_{t}}{dt}e\right\vert dt. \label{e.5.20} \end{equation} Taking the supremum of these estimates over $\left\vert e\right\vert =1$ then gives the remaining stated estimates since, by Definition \ref{def.5.14}, \begin{equation} \left\vert \frac{\nabla F_{t}}{dt}\right\vert _{M}:=\sup\left\{ \left\vert \frac{\nabla F_{t}}{dt}e\right\vert :e\in E\text{ with }\left\vert e\right\vert =1\right\} . \label{e.5.21} \end{equation}
\end{proof}
\begin{remark} \label{rem.5.18}A more elementary way to arrive at Eq. (\ref{e.5.20}) is again to let $\sigma\left( t\right) =f_{t}\left( m\right) $ and $\xi\left( t\right) =F_{t}e$ and then observe that \begin{align*} d^{E}\left( F_{0}e,F_{1}e\right) & \leq\ell_{E}\left( \xi\right) =\int_{0}^{1}\sqrt{\left\vert \dot{\sigma}\left( t\right) \right\vert ^{2}+\left\vert \frac{\nabla\xi\left( t\right) }{dt}\right\vert ^{2}}dt\\ & \leq\int_{0}^{1}\left( \left\vert \dot{\sigma}\left( t\right) \right\vert +\left\vert \frac{\nabla\xi\left( t\right) }{dt}\right\vert \right) dt=\ell_{M}\left( f_{\left( \cdot\right) }\left( m\right) \right) +\int_{0}^{1}\left\vert \frac{\nabla F_{t}}{dt}e\right\vert dt \end{align*} wherein we have used Eq. (\ref{e.3.22}) with $p=2$ for the last inequality. \end{remark}
Lastly we turn our attention to estimating $d^{E}\left( Fe,Fe^{\prime }\right) ,$ where $e,e^{\prime}\in E$ and $F:E\rightarrow E$ is a bundle map covering $f:M\rightarrow M.$ As a warm-up let us begin with the following flat special case.
\begin{lemma} \label{lem.5.19}Suppose $M=\mathbb{R}^{n}$ with the standard metric, $\left( W,\left\langle \cdot,\cdot\right\rangle \right) $ is a finite dimensional inner product space, and $E=M\times W$ which is equipped with flat covariant derivative, i.e. $\Gamma\equiv0$ in this trivialization. [We denote $e=\left( m,w\right) \in E=M\times W$ by $w_{m}.]$ If $f\in C^{\infty}\left( M,M\right) $ and $\hat{F}\in C^{\infty}\left( M,\operatorname*{End}\left( W\right) \right) ,$ then $Fw_{m}:=\left[ \hat{F}\left( m\right) w\right] _{f\left( m\right) }$ is a bundle map covering $f$ and this map satisfies, \begin{equation} d^{E}\left( Fw_{m},Fw_{p}^{\prime}\right) \leq\left( \max\left\{ \operatorname{Lip}\left( f\right) ,\left\Vert \hat{F}\left( p\right) \right\Vert \right\} +\left\vert \hat{F}^{\prime}\right\vert _{M}\left\Vert w\right\Vert \right) d^{E}\left( w_{m},w_{p}^{\prime}\right) . \label{e.5.22} \end{equation}
\end{lemma}
\begin{proof} With $F$ written in this form, we find \begin{align*} \left( d^{E}\right) ^{2} & \left( Fw_{m},Fw_{p}^{\prime}\right) \\ & =\left\Vert f\left( m\right) -f\left( p\right) \right\Vert ^{2}+\left\Vert \hat{F}\left( m\right) w-\hat{F}\left( p\right) w^{\prime }\right\Vert ^{2}\\ & \leq\operatorname{Lip}^{2}\left( f\right) \left\Vert m-p\right\Vert ^{2}+\left( \left\Vert \hat{F}\left( m\right) w-\hat{F}\left( p\right) w\right\Vert +\left\Vert \hat{F}\left( p\right) \left( w-w^{\prime}\right) \right\Vert \right) ^{2}\\ & \leq\operatorname{Lip}^{2}\left( f\right) \left\Vert m-p\right\Vert ^{2}+\left( \left\vert \hat{F}^{\prime}\right\vert _{M}\left\Vert w\right\Vert \left\Vert m-p\right\Vert +\left\Vert \hat{F}\left( p\right) \right\Vert \left\Vert w-w^{\prime}\right\Vert \right) ^{2}\\ & =\operatorname{Lip}^{2}\left( f\right) \left\Vert m-p\right\Vert ^{2}+\left\Vert \hat{F}\left( p\right) \right\Vert ^{2}\left\Vert w-w^{\prime}\right\Vert ^{2}+\left\vert \hat{F}^{\prime}\right\vert _{M} ^{2}\left\Vert w\right\Vert ^{2}\left\Vert m-p\right\Vert ^{2}\\ & \qquad+2\left\vert \hat{F}^{\prime}\right\vert _{M}\left\Vert w\right\Vert \left\Vert \hat{F}\left( p\right) \right\Vert \left\Vert m-p\right\Vert \left\Vert w-w^{\prime}\right\Vert . \end{align*} Using $\rho=\max\left\{ \operatorname{Lip}\left( f\right) ,\left\Vert \hat{F}\left( p\right) \right\Vert \right\} $ the above estimate implies, \begin{align*} \left( d^{E}\right) ^{2}\left( Fw_{m},Fw_{p}^{\prime}\right) & \leq\left[ \rho^{2}+\left\vert \hat{F}^{\prime}\right\vert _{M}^{2}\left\Vert w\right\Vert ^{2}+2\left\vert \hat{F}^{\prime}\right\vert _{M}\left\Vert w\right\Vert \rho\right] \left( d^{E}\right) ^{2}\left( w_{m} ,w_{p}^{\prime}\right) \\ & =\left( \rho+\left\vert \hat{F}^{\prime}\right\vert _{M}\left\Vert w\right\Vert \right) ^{2}\left( d^{E}\right) ^{2}\left( w_{m} ,w_{p}^{\prime}\right) \end{align*} which gives the estimate in Eq. (\ref{e.5.22}). \end{proof}
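To illustrate how the three quantities on the right side of Eq. (\ref{e.5.22}) enter, and why the $\left\Vert w\right\Vert $-dependence cannot be avoided, consider the following concrete instance (the choices of $f$ and $\hat{F}$ are purely illustrative).

\begin{example}
Take $M=W=\mathbb{R},$ $f\left( m\right) =2m,$ and $\hat{F}\left( m\right) =m,$ so that $Fw_{m}=\left( mw\right) _{2m}.$ Then $\operatorname{Lip}\left( f\right) =2,$ $\left\Vert \hat{F}\left( p\right) \right\Vert =\left\vert p\right\vert ,$ $\left\vert \hat{F}^{\prime}\right\vert _{M}=1,$ and Eq. (\ref{e.5.22}) becomes
\[
d^{E}\left( Fw_{m},Fw_{p}^{\prime}\right) \leq\left( \max\left\{ 2,\left\vert p\right\vert \right\} +\left\vert w\right\vert \right) d^{E}\left( w_{m},w_{p}^{\prime}\right) .
\]
Some growth in $\left\vert w\right\vert $ is genuinely needed here: taking $w^{\prime}=w$ gives
\[
d^{E}\left( Fw_{m},Fw_{p}\right) =\sqrt{4\left( m-p\right) ^{2}+w^{2}\left( m-p\right) ^{2}}\geq\left\vert w\right\vert \left\vert m-p\right\vert
\]
while $d^{E}\left( w_{m},w_{p}\right) =\left\vert m-p\right\vert ,$ so no $w$-independent Lipschitz bound is possible.
\end{example}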
By swapping $w_{m}$ with $w_{p}^{\prime}$ in Eq. (\ref{e.5.22}) we of course also have \begin{equation} d^{E}\left( Fw_{m},Fw_{p}^{\prime}\right) \leq\left[ \max\left\{ \operatorname{Lip}\left( f\right) ,\left\Vert \hat{F}\left( m\right) \right\Vert \right\} +\left\vert \hat{F}^{\prime}\right\vert _{M}\left\Vert w^{\prime}\right\Vert \right] d^{E}\left( w_{m},w_{p}^{\prime}\right) . \label{e.5.23} \end{equation} In Theorem \ref{thm.5.23} below, we will show that the analogue of Eq. (\ref{e.5.23}) holds in full generality. The following notation will be used in the statement of this theorem.
\begin{definition} \label{def.5.20}Suppose that $f\in C^{\infty}\left( M,M\right) $ and $F:E\rightarrow E$ is a bundle map covering $f.$ For $v\in T_{m}M,$ let $\nabla_{v}F\in\operatorname{Hom}\left( E_{m},E_{f\left( m\right) }\right) $ be defined by; \[
\nabla_{v}F:=\frac{d}{dt}|_{0}\pt_{t}\left( f\circ\sigma\right) ^{-1}F_{\sigma\left( t\right) }\pt_{t}\left( \sigma\right) \] where $\sigma$ is any $C^{1}$-curve in $M$ such that $\dot{\sigma}\left(
0\right) =v$ and $F_{\sigma\left( t\right) }:=F|_{E_{\sigma\left( t\right) }}.$ \end{definition}
\begin{lemma} [Product rule]\label{lem.5.21}If $F:E\rightarrow E$ is a bundle map covering $f,$ $S\in\Gamma\left( E\right) ,$ and $t\rightarrow\sigma\left( t\right) \in M$ is a $C^{1}$-curve with $m:=\sigma\left( 0\right) ,$ then \begin{equation}
\frac{\nabla}{dt}|_{0}\left( FS\right) \left( \sigma\left( t\right) \right) =\left( \nabla_{\dot{\sigma}\left( 0\right) }F\right) S\left( m\right) +F_{\sigma\left( 0\right) }\nabla_{\dot{\sigma}\left( 0\right) }S. \label{e.5.24} \end{equation}
\end{lemma}
\begin{proof} This result is easily reduced to the standard product rule for matrices and vectors as follows: \begin{align*}
\frac{\nabla}{dt}|_{0}\left( FS\right) \left( \sigma\left( t\right)
\right) = & \frac{d}{dt}|_{0}\left( \pt_{t}\left( f\circ\sigma\right) ^{-1}\left[ F_{\sigma\left( t\right) }S\left( \sigma\left( t\right) \right) \right] \right) \\
= & \frac{d}{dt}|_{0}\left( \pt_{t}\left( f\circ\sigma\right) ^{-1}\left[ F_{\sigma\left( t\right) }\pt_{t}\left( \sigma\right) \pt_{t}\left( \sigma\right) ^{-1}S\left( \sigma\left( t\right) \right) \right] \right) \\
= & \frac{d}{dt}|_{0}\left( \pt_{t}\left( f\circ\sigma\right) ^{-1}\left[ F_{\sigma\left( t\right) }\pt_{t}\left( \sigma\right) S\left( m\right) \right] \right) \\
& \quad+\frac{d}{dt}|_{0}\left( \left[ F_{\sigma\left( 0\right) } \pt_{t}\left( \sigma\right) ^{-1}S\left( \sigma\left( t\right) \right) \right] \right) \end{align*} which is equivalent to Eq. (\ref{e.5.24}). \end{proof}
\begin{notation} \label{not.5.22}Given $m\in M,$ $\sigma\in C\left( \left[ 0,1\right] ,M\right) ,$ $f\in C^{\infty}\left( M,M\right) ,$ and a bundle map, $F:E\rightarrow E,$ covering $f,$ let \begin{align*} \left\vert \nabla F\right\vert _{m} & :=\sup_{v\in T_{m}M:\left\vert v\right\vert =1}\left\vert \nabla_{v}F\right\vert _{op}:=\sup_{v\in T_{m}M:\left\vert v\right\vert =1}\sup_{e\in E_{m}:\left\vert e\right\vert =1}\left\vert \left( \nabla_{v}F\right) e\right\vert ,\\ \left\vert \nabla F\right\vert _{\sigma} & :=\sup_{t\in\left[ 0,1\right] }\left\vert \nabla F\right\vert _{\sigma\left( t\right) },\text{ and}\\ \left\vert \nabla F\right\vert _{M} & :=\sup_{m\in M}\left\vert \nabla F\right\vert _{m}. \end{align*}
\end{notation}
\begin{theorem} \label{thm.5.23}Let $F:E\rightarrow E$ be a bundle map covering $f:M\rightarrow M,$ let $e\in E_{m}$ and $e^{\prime}\in E_{p},$ and let $\sigma\in AC\left( \left[ 0,1\right] ,M\right) $ be a curve such that $\sigma\left( 0\right) =m$ and $\sigma\left( 1\right) =p.$ Then \begin{equation} d^{E}\left( Fe,Fe^{\prime}\right) \leq\left( \max\left( \left\vert f_{\ast}\right\vert _{\sigma},\left\vert F_{m}\right\vert \right) +\left\vert \nabla F\right\vert _{\sigma}\cdot\left\vert e^{\prime}\right\vert \right) L_{\sigma}\left( e,e^{\prime}\right) , \label{e.5.25} \end{equation} and in particular, \begin{equation} d^{E}\left( Fe,Fe^{\prime}\right) \leq\left( \max\left( \operatorname{Lip} \left( f\right) ,\left\vert F_{m}\right\vert \right) +\left\vert \nabla F\right\vert _{M}\left\vert e^{\prime}\right\vert \right) d^{E}\left( e,e^{\prime}\right) . \label{e.5.26} \end{equation}
\end{theorem}
\begin{proof} To simplify notation in the proof below let \begin{align*} \rho & :=\max\left( \left\vert f_{\ast}\right\vert _{\sigma},\left\vert F_{m}\right\vert \right) ,\\ \tilde{e} & :=\pt_{1}\left( \sigma\right) ^{-1}e^{\prime}\in E_{m},\text{ and }\\ A_{t} & :=\pt_{t}\left( f\circ\sigma\right) ^{-1}F_{\sigma\left( t\right) }\pt_{t}\left( \sigma\right) :E_{m}\rightarrow E_{f\left( m\right) }. \end{align*} By Corollary \ref{cor.5.8}, it follows that \begin{align*} d^{E}\left( Fe,Fe^{\prime}\right) \leq L_{f\circ\sigma}\left( Fe,Fe^{\prime}\right) & =\sqrt{\ell_{M}^{2}\left( f\circ\sigma\right) +\left\vert \pt_{1}\left( f\circ\sigma\right) ^{-1}Fe^{\prime}-Fe\right\vert ^{2}}\\ & =\sqrt{\ell_{M}^{2}\left( f\circ\sigma\right) +\left\vert A_{1}\tilde {e}-A_{0}e\right\vert ^{2}}. \end{align*} The first term in the square root is estimated by, \[ \ell_{M}\left( f\circ\sigma\right) =\int_{0}^{1}\left\vert f_{\ast} \dot{\sigma}\left( t\right) \right\vert dt\leq\left\vert f_{\ast}\right\vert _{\sigma}\ell_{M}\left( \sigma\right) . \] For the second term, we note that \[ \left\vert \frac{d}{dt}A_{t}\right\vert =\left\vert \pt_{t}\left( f\circ\sigma\right) ^{-1}\left( \nabla_{\dot{\sigma}\left( t\right) }F\right) \pt_{t}\left( \sigma\right) \right\vert =\left\vert \nabla _{\dot{\sigma}\left( t\right) }F\right\vert \leq\left\vert \nabla F\right\vert _{\sigma}\left\vert \dot{\sigma}\left( t\right) \right\vert \] and hence \[ \left\vert A_{1}-A_{0}\right\vert _{op}=\int_{0}^{1}\left\vert \frac{d} {dt}A_{t}\right\vert dt\leq\left\vert \nabla F\right\vert _{\sigma}\ell _{M}\left( \sigma\right) . \] Thus we conclude that \begin{align*} \left\vert A_{1}\tilde{e}-A_{0}e\right\vert & \leq\left\vert A_{1}\tilde {e}-A_{0}\tilde{e}\right\vert +\left\vert A_{0}\left[ \tilde{e}-e\right] \right\vert \\ & \leq\left\vert \nabla F\right\vert _{\sigma}\ell_{M}\left( \sigma\right) \left\vert \tilde{e}\right\vert +\left\vert F_{m}\right\vert \left\vert \tilde{e}-e\right\vert \\ & =\left\vert \nabla F\right\vert _{\sigma}\ell_{M}\left( \sigma\right) \left\vert e^{\prime}\right\vert +\left\vert F_{m}\right\vert \left\vert \pt_{1}\left( \sigma\right) ^{-1}e^{\prime}-e\right\vert . 
\end{align*} Combining the previous estimates then shows, \begin{align*} \left( d^{E}\right) ^{2} & \left( Fe,Fe^{\prime}\right) \\ \leq & \left\vert f_{\ast}\right\vert _{\sigma}^{2}\ell_{M}^{2}\left( \sigma\right) +\left[ \left\vert \nabla F\right\vert _{\sigma}\ell _{M}\left( \sigma\right) \left\vert e^{\prime}\right\vert +\left\vert F_{m}\right\vert \left\vert \pt_{1}\left( \sigma\right) ^{-1}e^{\prime }-e\right\vert \right] ^{2}\\ = & \left\vert f_{\ast}\right\vert _{\sigma}^{2}\ell_{M}^{2}\left( \sigma\right) +\left\vert \nabla F\right\vert _{\sigma}^{2}\left\vert e^{\prime}\right\vert ^{2}\ell_{M}^{2}\left( \sigma\right) +\left\vert F_{m}\right\vert ^{2}\left\vert \pt_{1}\left( \sigma\right) ^{-1}e^{\prime }-e\right\vert ^{2}\\ & \qquad+2\left\vert F_{m}\right\vert \left\vert \pt_{1}\left( \sigma\right) ^{-1}e^{\prime}-e\right\vert \cdot\left\vert \nabla F\right\vert _{\sigma}\ell_{M}\left( \sigma\right) \left\vert e^{\prime }\right\vert \\ \leq & \rho^{2}L_{\sigma}^{2}\left( e,e^{\prime}\right) +\left\vert \nabla F\right\vert _{\sigma}^{2}\left\vert e^{\prime}\right\vert ^{2}L_{\sigma }^{2}\left( e,e^{\prime}\right) +2\left\vert F_{m}\right\vert \left\vert \nabla F\right\vert _{\sigma}\left\vert e^{\prime}\right\vert L_{\sigma} ^{2}\left( e,e^{\prime}\right) \\ \leq & \rho^{2}L_{\sigma}^{2}\left( e,e^{\prime}\right) +\left\vert \nabla F\right\vert _{\sigma}^{2}\left\vert e^{\prime}\right\vert ^{2}L_{\sigma }^{2}\left( e,e^{\prime}\right) +2\rho\left\vert \nabla F\right\vert _{\sigma}\left\vert e^{\prime}\right\vert L_{\sigma}^{2}\left( e,e^{\prime }\right) \\ & \qquad=\left( \rho+\left\vert \nabla F\right\vert _{\sigma}\left\vert e^{\prime}\right\vert \right) ^{2}L_{\sigma}^{2}\left( e,e^{\prime}\right) \end{align*} which proves Eq. (\ref{e.5.25}). Moreover, Eq. (\ref{e.5.25}) implies \[ d^{E}\left( Fe,Fe^{\prime}\right) \leq\left( \max\left( \operatorname{Lip} \left( f\right) ,\left\vert F_{m}\right\vert \right) +\left\vert \nabla F\right\vert _{M}\cdot\left\vert e^{\prime}\right\vert \right) L_{\sigma }\left( e,e^{\prime}\right) \] and so taking the infimum of this last inequality over $\sigma\in AC\left( \left[ 0,1\right] ,M\right) $ such that $\sigma\left( 0\right) =m$ and $\sigma\left( 1\right) =p$ gives (see Corollary \ref{cor.5.8}) Eq. (\ref{e.5.26}). \end{proof}
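Since $d^{E}$ is a symmetric function of its arguments, the roles of $e$ and $e^{\prime}$ in Theorem \ref{thm.5.23} may be interchanged, exactly as in the passage from Eq. (\ref{e.5.22}) to Eq. (\ref{e.5.23}); we record this observation for completeness.

\begin{remark}
Interchanging $e$ and $e^{\prime}$ in Eq. (\ref{e.5.26}) shows that we also have
\[
d^{E}\left( Fe,Fe^{\prime}\right) \leq\left( \max\left( \operatorname{Lip}\left( f\right) ,\left\vert F_{p}\right\vert \right) +\left\vert \nabla F\right\vert _{M}\left\vert e\right\vert \right) d^{E}\left( e,e^{\prime}\right)
\]
and hence $d^{E}\left( Fe,Fe^{\prime}\right) $ is bounded by the minimum of the two resulting right sides.
\end{remark}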
\subsection{Metrics on $TM$\label{sec.5.2}}
From now on we restrict our attention to the case of interest where $E=TM$ and $F=f_{\ast}$ where $f\in C^{2}\left( M,M\right) .$ Before stating the main result in Theorem \ref{thm.5.30} below, let us record the relevant notions of covariant differentiation in this context.
\begin{definition} [Vector-fields along $f$]\label{def.5.24}For $f\in C^{\infty}\left( M,M\right) ,$ let $\Gamma_{f}\left( TM\right) $ denote the \textbf{vector fields} \textbf{along} $f,$ i.e. $U\in\Gamma_{f}\left( TM\right) $ iff $U:M\rightarrow TM$ is a smooth function such that $U\left( m\right) \in T_{f\left( m\right) }M$ for all $m\in M.$ \end{definition}
\begin{example} \label{ex.5.25}If $Z\in\Gamma\left( TM\right) $ and $f\in C^{\infty}\left( M,M\right) ,$ then $f_{\ast}Z$ and $Z\circ f$ are both vector fields along $f.$ \end{example}
\begin{definition} \label{def.5.26}For $f\in C^{\infty}\left( M,M\right) ,$ $U\in\Gamma _{f}\left( TM\right) ,$ and $v=v_{m}\in T_{m}M,$ let $\nabla_{v}U\in T_{f\left( m\right) }M$ and the linear map $\nabla_{v}f_{\ast}:T_{m}M\rightarrow T_{f\left( m\right) }M$ be defined by, \begin{align*}
\nabla_{v}U & =\frac{\nabla}{dt}|_{0}U\left( \sigma\left( t\right)
\right) =\frac{d}{dt}|_{0}\left[ \pt_{t}\left( f\circ\sigma\right) ^{-1}U\left( \sigma\left( t\right) \right) \right] \text{ and }\\
\nabla_{v}f_{\ast} & =\frac{d}{dt}|_{0}\left[ \pt_{t}\left( f\circ \sigma\right) ^{-1}f_{\ast\sigma\left( t\right) }\pt_{t}\left( \sigma\right) \right] \end{align*} where $\sigma$ is any $C^{1}$-curve in $M$ such that $\dot{\sigma}\left( 0\right) =v_{m}.$ [It is easily verified by working in local trivializations of $TM$ that $\nabla_{v}U$ and $\nabla_{v}f_{\ast}$ are well defined independent of the choice of $\sigma$ such that $\dot{\sigma}\left( 0\right) =v_{m}.]$ \end{definition}
\begin{proposition} [Chain and product rules]\label{pro.5.27}If $f\in C^{\infty}\left( M,M\right) ,$ $Z\in\Gamma\left( TM\right) ,$ and $v\in T_{m}M,$ then \begin{align} \nabla_{v}\left[ Z\circ f\right] & =\nabla_{f_{\ast}v}Z\text{ and }\label{e.5.27}\\ \nabla_{v}\left[ f_{\ast}Z\right] & =\left( \nabla_{v}f_{\ast}\right) Z\left( m\right) +f_{\ast}\nabla_{v}Z. \label{e.5.28} \end{align} More generally if $U\in\Gamma_{f}\left( TM\right) $ and $g\in C^{\infty }\left( M,M\right) ,$ then $U\circ g\in\Gamma_{f\circ g}\left( TM\right) ,$ $g_{\ast}U\in\Gamma_{g\circ f}\left( TM\right) ,$ \begin{align} \nabla_{v}\left[ U\circ g\right] & =\nabla_{g_{\ast}v}U,\text{ and}\label{e.5.29}\\ \nabla_{v}\left[ g_{\ast}U\right] & =\left( \nabla_{f_{\ast}v}g_{\ast }\right) U\left( m\right) +g_{\ast f\left( m\right) }\nabla_{v}U. \label{e.5.30} \end{align}
\end{proposition}
\begin{proof} If $\sigma\left( t\right) \in M$ is chosen so that $\dot{\sigma}\left( 0\right) =v_{m},$ then \[
\nabla_{v}\left[ Z\circ f\right] =\frac{d}{dt}|_{0}\left[ \pt_{t}\left( f\circ\sigma\right) ^{-1}\left( Z\circ f\right) \left( \sigma\left( t\right) \right) \right] =\nabla_{f_{\ast}v}Z \] and \begin{align*}
\nabla_{v}\left[ f_{\ast}Z\right] = & \frac{d}{dt}|_{0}\left[ \pt_{t}\left( f\circ\sigma\right) ^{-1}f_{\ast\sigma\left( t\right) }Z\left( \sigma\left( t\right) \right) \right] \\
= & \frac{d}{dt}|_{0}\left[ \pt_{t}\left( f\circ\sigma\right) ^{-1}f_{\ast\sigma\left( t\right) }\pt_{t}\left( \sigma\right) ~\pt_{t}\left( \sigma\right) ^{-1}Z\left( \sigma\left( t\right) \right) \right] \\
= & \frac{d}{dt}|_{0}\left[ \pt_{t}\left( f\circ\sigma\right) ^{-1}f_{\ast\sigma\left( t\right) }\pt_{t}\left( \sigma\right) \right] Z\left( m\right) \\
& +f_{\ast m}\frac{d}{dt}|_{0}\left[ \pt_{t}\left( \sigma\right) ^{-1}Z\left( \sigma\left( t\right) \right) \right] \\ = & \left( \nabla_{v}f_{\ast}\right) Z\left( m\right) +f_{\ast} \nabla_{v}Z. \end{align*}
The more general cases are proved similarly; \begin{align*}
\nabla_{v}\left[ U\circ g\right] & =\frac{d}{dt}|_{0}\left[ \pt_{t}\left( f\circ g\circ\sigma\right) ^{-1}\left( U\circ g\right) \left( \sigma\left( t\right) \right) \right] \\
& =\frac{d}{dt}|_{0}\left[ \pt_{t}\left( f\circ\left( g\circ\sigma\right) \right) ^{-1}U\left( \left( g\circ\sigma\right) \left( t\right) \right) \right] \\ & =\nabla_{g_{\ast}v}U \end{align*} and \begin{align*}
\nabla_{v}\left[ g_{\ast}U\right] = & \frac{d}{dt}|_{0}\left[ \pt_{t}\left( g\circ f\circ\sigma\right) ^{-1}g_{\ast}U\left( \sigma\left( t\right) \right) \right] \\
= & \frac{d}{dt}|_{0}\left[ \pt_{t}\left( g\circ f\circ\sigma\right) ^{-1}g_{\ast}\pt_{t}\left( f\circ\sigma\right) ~\pt_{t}\left( f\circ\sigma\right) ^{-1}U\left( \sigma\left( t\right) \right) \right] \\
= & \frac{d}{dt}|_{0}\left[ \pt_{t}\left( g\circ f\circ\sigma\right) ^{-1}g_{\ast}\pt_{t}\left( f\circ\sigma\right) ~U\left( m\right) \right] \\
& +\frac{d}{dt}|_{0}\left[ g_{\ast f\left( m\right) }~\pt_{t}\left( f\circ\sigma\right) ^{-1}U\left( \sigma\left( t\right) \right) \right] \\
= & \left( \nabla_{f_{\ast}v}g_{\ast}\right) U\left( m\right) +g_{\ast f\left( m\right) }\nabla_{v}U.
\end{align*}
\end{proof}
\begin{corollary} \label{cor.5.28}If $f\in\mathrm{Diff}\left( M\right) ,$ $Z\in\Gamma\left( TM\right) ,$ and $v\in T_{m}M,$ then \[ \nabla_{v}\left[ \operatorname{Ad}_{f}Z\right] =\left( \nabla_{f_{\ast }^{-1}v}f_{\ast}\right) Z\left( f^{-1}\left( m\right) \right) +f_{\ast }\nabla_{f_{\ast}^{-1}v}Z. \]
\end{corollary}
\begin{proof} Since $\operatorname{Ad}_{f}Z=\left( f_{\ast}Z\right) \circ f^{-1}$ with $f_{\ast}Z\in\Gamma_{f}\left( TM\right) ,$ it follows by first applying Eq. (\ref{e.5.29}) and then Eq. (\ref{e.5.30}) that \[ \nabla_{v}\left[ \operatorname{Ad}_{f}Z\right] =\nabla_{f_{\ast}^{-1} v}\left( f_{\ast}Z\right) =\left( \nabla_{f_{\ast}^{-1}v}f_{\ast}\right) Z\left( f^{-1}\left( m\right) \right) +f_{\ast}\nabla_{f_{\ast}^{-1}v}Z. \]
\end{proof}
\begin{definition} \label{def.5.29}Let $d^{TM}:TM\times TM\rightarrow\lbrack0,\infty)$ be the metric on $TM$ associated to the Riemannian metric on $E=TM$ with the given fiber Riemannian metric $g.$ \end{definition}
In this setting, \[ \max\left( \operatorname{Lip}\left( f\right) ,\left\vert F_{m}\right\vert \right) =\max\left( \operatorname{Lip}\left( f\right) ,\left\vert f_{\ast m}\right\vert \right) =\operatorname{Lip}\left( f\right) \] and hence the next theorem is an immediate consequence of Theorem \ref{thm.5.23}.
\begin{theorem} [$d^{TM}\left( f_{\ast}v_{m},f_{\ast}w_{p}\right) $ estimates] \label{thm.5.30}Let $v_{m},w_{p}\in TM$ and $f\in C^{2}\left( M,M\right) $ and for any path $\sigma\in AC\left( \left[ 0,1\right] ,M\right) $ with $\sigma\left( 0\right) =m$ and $\sigma\left( 1\right) =p,$ let \begin{equation} L_{\sigma}\left( v_{m},w_{p}\right) :=\sqrt{\ell_{M}\left( \sigma\right) ^{2}+\left\vert \pt_{1}\left( \sigma\right) ^{-1}w_{p}-v_{m}\right\vert ^{2}}. \label{e.5.31} \end{equation} Then \begin{equation} d^{TM}\left( f_{\ast}v_{m},f_{\ast}w_{p}\right) \leq\left[ \left\vert f_{\ast}\right\vert _{\sigma}+\left\vert \nabla f_{\ast}\right\vert _{\sigma }\cdot\left\vert w_{p}\right\vert \right] L_{\sigma}\left( v_{m} ,w_{p}\right) \label{e.5.32} \end{equation} and consequently,\footnote{The next inequality may be localized if necessary. The point is we may assume that $\ell\left( \sigma\right) \leq d^{TM}\left( v_{m},w_{p}\right) $ and so we need only compute $\operatorname{Lip}\left( f\right) $ and $\left\vert \nabla f_{\ast}\right\vert $ over the ball, $B\left( m,d^{TM}\left( v_{m},w_{p}\right) \right) .$} \begin{equation} d^{TM}\left( f_{\ast}v_{m},f_{\ast}w_{p}\right) \leq\left( \operatorname{Lip}\left( f\right) +\left\vert \nabla f_{\ast}\right\vert _{M}\cdot\left\vert w_{p}\right\vert \right) d^{TM}\left( v_{m} ,w_{p}\right) . \label{e.5.33} \end{equation}
\end{theorem}
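As a consistency check on Eq. (\ref{e.5.33}), the following illustrative example (not needed in the sequel) treats the case of an isometry.

\begin{example}
If $f$ is an isometry of $\left( M,g\right) ,$ then $f$ preserves the Levi-Civita covariant derivative and hence intertwines parallel translation, i.e. $\pt_{t}\left( f\circ\sigma\right) f_{\ast\sigma\left( 0\right) }=f_{\ast\sigma\left( t\right) }\pt_{t}\left( \sigma\right) $ for every $C^{1}$-curve $\sigma,$ from which Definition \ref{def.5.26} gives $\nabla f_{\ast}\equiv0$ while $\operatorname{Lip}\left( f\right) =1.$ In this case Eq. (\ref{e.5.33}) reduces to
\[
d^{TM}\left( f_{\ast}v_{m},f_{\ast}w_{p}\right) \leq d^{TM}\left( v_{m},w_{p}\right) ,
\]
and applying the same bound to $f^{-1}$ shows equality, i.e. an isometry of $M$ induces an isometry of $\left( TM,d^{TM}\right) .$
\end{example}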
\section{First order derivative estimates\label{sec.6}}
\subsection{$\nabla\nu_{t\ast}$ -- estimates\label{sec.6.1}}
Suppose that $W_{t}\in\Gamma\left( TM\right) $ and $\nu_{t}\in C^{\infty }\left( M,M\right) $ are as in Notation \ref{not.2.25}. Our next goal is to estimate the local Lipschitz-norm of $\nu_{t\ast}.$ We will do this using Theorem \ref{thm.5.30} which requires us to estimate $\nabla\nu_{t\ast}.$ We begin by finding the differential equation solved by $\nabla\nu_{t\ast}.$
\begin{proposition} \label{pro.6.1}If $W_{t}\in\Gamma\left( TM\right) $ and $\nu_{t}\in C^{\infty}\left( M,M\right) $ are as in Notation \ref{not.2.25}, $m\in M,$ and $v_{m},\xi_{m}\in T_{m}M,$ then $\left( \nabla_{v_{m}}\nu_{t\ast}\right) \xi_{m}$ satisfies the covariant differential equation; \begin{align} \nabla_{t}\left( \nabla_{v_{m}}\nu_{t\ast}\right) \xi_{m} & =\left( \nabla W_{t}\right) \left[ \left( \nabla_{v_{m}}\nu_{t\ast}\right) \xi _{m}\right] +\left( \nabla^{2}W_{t}\right) \left[ \nu_{t\ast}v_{m} \otimes\nu_{t\ast}\xi_{m}\right] \nonumber\\ & +R\left( W_{t}\left( \nu_{t}\left( m\right) \right) ,\nu_{t\ast} v_{m}\right) \nu_{t\ast}\xi_{m}. \label{e.6.1} \end{align}
\end{proposition}
\begin{proof} Let $\sigma\left( s\right) $ be a smooth curve in $M$ such that $v_{m}:=\sigma^{\prime}\left( 0\right) $ and define $\xi\left( s\right) :=\pt_{s}\left( \sigma\right) \xi_{m}.$ With this notation we have \begin{align}
\frac{\nabla}{ds}|_{0}\left[ \nu_{t\ast}\xi\left( s\right) \right] &
=\frac{d}{ds}|_{0}\left[ \pt_{s}\left( \nu_{t}\circ\sigma\right) ^{-1} \nu_{t\ast}\xi\left( s\right) \right] \nonumber\\
& =\frac{d}{ds}|_{0}\left[ \pt_{s}\left( \nu_{t}\circ\sigma\right) ^{-1}\nu_{t\ast}\pt_{s}\left( \sigma\right) \xi_{m}\right] =\left( \nabla_{v_{m}}\nu_{t\ast}\right) \xi_{m}. \label{e.6.2} \end{align}
Using the relationship of curvature to the commutator of covariant derivatives, \[ \left[ \nabla_{t},\nabla_{s}\right] =R\left( \frac{d}{dt}\nu_{t}\left( \sigma\left( s\right) \right) ,\frac{d}{ds}\nu_{t}\left( \sigma\left( s\right) \right) \right) =R\left( W_{t}\left( \nu_{t}\left( \sigma\left( s\right) \right) \right) ,\nu_{t\ast}\sigma^{\prime}\left( s\right) \right) , \] it follows that \begin{equation} \nabla_{t}\nabla_{s}\left[ \nu_{t\ast}\xi\left( s\right) \right] =\nabla_{s}\nabla_{t}\left[ \nu_{t\ast}\xi\left( s\right) \right] +R\left( W_{t}\left( \nu_{t}\left( \sigma\left( s\right) \right) \right) ,\nu_{t\ast}\sigma^{\prime}\left( s\right) \right) \nu_{t\ast} \xi\left( s\right) . \label{e.6.3} \end{equation} By Proposition \ref{pro.2.26} and the product rule for covariant derivatives the first term in Eq. (\ref{e.6.3}) may be written as \begin{align} \nabla_{s}\nabla_{t}\left[ \nu_{t\ast}\xi\left( s\right) \right] & =\nabla_{s}\left[ \nabla_{\nu_{t\ast}\xi\left( s\right) }W_{t}\right] \nonumber\\ & =\left( \nabla^{2}W_{t}\right) \left[ \nu_{t\ast}\sigma^{\prime}\left( s\right) \otimes\nu_{t\ast}\xi\left( s\right) \right] +\left( \nabla W_{t}\right) \nabla_{s}\nu_{t\ast}\xi\left( s\right) . \label{e.6.4} \end{align} Combining Eqs. (\ref{e.6.2})--(\ref{e.6.4}) gives, \begin{align*}
\nabla_{t}\left( \nabla_{v_{m}}\nu_{t\ast}\right) \xi_{m} & =\nabla _{t}\frac{\nabla}{ds}|_{0}\left[ \nu_{t\ast}\xi\left( s\right) \right] \\ & =\left[ \left( \nabla^{2}W_{t}\right) \left[ \nu_{t\ast}\sigma^{\prime }\left( s\right) \otimes\nu_{t\ast}\xi\left( s\right) \right] +\left( \nabla W_{t}\right) \nabla_{s}\nu_{t\ast}\xi\left( s\right) \right] _{s=0}\\ & +\left[ R\left( W_{t}\left( \nu_{t}\left( \sigma\left( s\right) \right) \right) ,\nu_{t\ast}\sigma^{\prime}\left( s\right) \right) \nu_{t\ast}\xi\left( s\right) \right] _{s=0} \end{align*} which is the same as Eq. (\ref{e.6.1}). \end{proof}
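In the flat case, Eq. (\ref{e.6.1}) reduces to a familiar identity, which we record as an illustrative sanity check.

\begin{example}
Suppose $M=\mathbb{R}^{d}$ with its standard flat metric, so that $R\equiv0$ and covariant derivatives reduce to ordinary derivatives. Writing $\partial_{v}$ for directional derivatives, Eq. (\ref{e.6.1}) becomes
\[
\frac{d}{dt}\partial_{v_{m}}\partial_{\xi_{m}}\nu_{t}=W_{t}^{\prime}\left( \nu_{t}\right) \partial_{v_{m}}\partial_{\xi_{m}}\nu_{t}+W_{t}^{\prime\prime}\left( \nu_{t}\right) \left[ \partial_{v_{m}}\nu_{t}\otimes\partial_{\xi_{m}}\nu_{t}\right] ,
\]
which is precisely the identity obtained by twice differentiating the flow equation of Notation \ref{not.2.25}, $\frac{d}{dt}\nu_{t}=W_{t}\circ\nu_{t},$ in its initial condition.
\end{example}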
Recall from Notations \ref{not.1.3} and \ref{not.1.5} (also see Example \ref{ex.1.6}) that \begin{equation} H_{m}\left( W_{t}\right) =\left\vert \nabla^{2}W_{t}\right\vert _{m}+\left\vert R\left( W_{t},\bullet\right) \right\vert _{m} \label{e.6.5} \end{equation} and for a closed interval, $J\subset\left[ 0,T\right] ,$ that \begin{equation} H\left( W_{\cdot}\right) _{J}^{\ast}=\int_{J}H_{M}\left( W_{t}\right) dt=\int_{J}\sup_{m\in M}H_{m}\left( W_{t}\right) dt. \label{e.6.6} \end{equation}
\begin{corollary} [$\left\vert \nabla\nu_{t\ast}\right\vert _{M}$ -estimate]\label{cor.6.2}If $W_{t}\in\Gamma\left( TM\right) $ and $\nu_{t}\in C^{\infty}\left( M,M\right) $ are as in Notation \ref{not.2.25} and we let \begin{align} k_{J}\left( m\right) & :=\int_{J}\left\vert \nabla W_{\tau}\right\vert _{\nu_{\tau}\left( m\right) }d\tau\leq\left\vert \nabla W\right\vert _{J}^{\ast},\text{ and}\label{e.6.7}\\ K_{J}\left( m\right) & :=\int_{J}H_{\nu_{\tau}\left( m\right) }\left( W_{\tau}\right) d\tau\leq H\left( W_{\cdot}\right) _{J}^{\ast}, \label{e.6.8} \end{align} then \begin{align} \left\vert \nabla\nu_{t\ast}\right\vert _{m} & \leq e^{k_{J\left( s,t\right) }\left( m\right) }\left[ \left\vert \nabla\nu_{s\ast }\right\vert _{m}+\left\vert \nu_{s\ast}\right\vert _{m}^{2}\int_{J\left( s,t\right) }H_{\nu_{\tau}\left( m\right) }\left( W_{\tau}\right) e^{k_{J\left( s,\tau\right) }\left( m\right) }d\tau\right] \label{e.6.9} \\ & \leq e^{k_{J\left( s,t\right) }\left( m\right) }\left\vert \nabla \nu_{s\ast}\right\vert _{m}+e^{2k_{J\left( s,t\right) }\left( m\right) }K_{J\left( s,t\right) }\left( m\right) \left\vert \nu_{s\ast}\right\vert _{m}^{2}. \label{e.6.10} \end{align} If we further assume that $\nu_{s}=Id_{M},$ then the above estimate reduces to \begin{align} \left\vert \nabla\nu_{t\ast}\right\vert _{m} & \leq e^{k_{J\left( s,t\right) }\left( m\right) }\cdot\int_{J\left( s,t\right) }H_{\nu_{\tau }\left( m\right) }\left( W_{\tau}\right) e^{k_{J\left( s,\tau\right) }\left( m\right) }d\tau\label{e.6.11}\\ & \leq e^{2k_{J\left( s,t\right) }\left( m\right) }\cdot K_{J\left( s,t\right) }\left( m\right) \label{e.6.12} \end{align} and in particular, \begin{equation} \left\vert \nabla\nu_{t\ast}\right\vert _{M}\leq e^{2\left\vert \nabla W\right\vert _{J\left( s,t\right) }^{\ast}}\cdot H\left( W_{\cdot}\right) _{J\left( s,t\right) }^{\ast}. \label{e.6.13} \end{equation}
\end{corollary}
\begin{proof} To shorten notation in this proof, let \[ h_{t}=H_{\nu_{t}\left( m\right) }\left( W_{t}\right) :=\left\vert \nabla^{2}W_{t}\right\vert _{\nu_{t}\left( m\right) }+\left\vert R\left( W_{t},\bullet\right) \right\vert _{\nu_{t}\left( m\right) }. \] Starting with Eq. (\ref{e.6.1}) while using the estimate in Eq. (\ref{e.2.45}) allows us to easily conclude that \begin{align*} \left\vert \nabla_{t}\left( \nabla_{v_{m}}\nu_{t\ast}\right) \right\vert & \leq\left\vert \nabla W_{t}\right\vert _{\nu_{t}\left( m\right) }\left\vert \nabla_{v_{m}}\nu_{t\ast}\right\vert +\left\vert \nabla^{2}W_{t}\right\vert _{\nu_{t}\left( m\right) }\left\vert \nu_{t\ast}v_{m}\right\vert \left\vert \nu_{t\ast}\right\vert _{m}\\ & +\left\vert R\left( W_{t}\left( \nu_{t}\left( m\right) \right) ,\nu_{t\ast}v_{m}\right) \right\vert \left\vert \nu_{t\ast}\right\vert _{m}\\ & \leq\left\vert \nabla W_{t}\right\vert _{\nu_{t}\left( m\right) }\left\vert \nabla_{v_{m}}\nu_{t\ast}\right\vert +e^{2k_{J\left( s,t\right) }\left( m\right) }h_{t}\left\vert \nu_{s\ast}\right\vert _{m}^{2} \cdot\left\vert v_{m}\right\vert . \end{align*} It follows by the Bellman-Gronwall inequality in Corollary \ref{cor.9.3} of the appendix that \begin{align*} \left\vert \nabla\nu_{t\ast}\right\vert _{m}\leq & e^{\int_{J\left( s,t\right) }\left\vert \nabla W_{r}\right\vert _{\nu_{r}\left( m\right) }dr}\left\vert \nabla\nu_{s\ast}\right\vert _{m}\\ & +\int_{J\left( s,t\right) }e^{\int_{J\left( \tau,t\right) }\left\vert \nabla W_{r}\right\vert _{\nu_{r}\left( m\right) }dr}e^{2k_{J\left( s,\tau\right) }\left( m\right) }h_{\tau}\left\vert \nu_{s\ast}\right\vert _{m}^{2}d\tau\\ & =e^{k_{J\left( s,t\right) }\left( m\right) }\left\vert \nabla\nu _{s\ast}\right\vert _{m}+\int_{J\left( s,t\right) }e^{k_{J\left( \tau,t\right) }\left( m\right) }e^{2k_{J\left( s,\tau\right) }\left( m\right) }h_{\tau}\left\vert \nu_{s\ast}\right\vert _{m}^{2}d\tau\\ & =e^{k_{J\left( s,t\right) }\left( m\right) }\left\vert \nabla\nu _{s\ast}\right\vert _{m}+\int_{J\left( s,t\right) }e^{k_{J\left( s,t\right) }\left( m\right) }e^{k_{J\left( s,\tau\right) }\left( m\right) }h_{\tau}\left\vert \nu_{s\ast}\right\vert _{m}^{2}d\tau\\ & \qquad\leq e^{k_{J\left( s,t\right) }\left( m\right) }\left\vert \nabla\nu_{s\ast}\right\vert _{m}+e^{2k_{J\left( s,t\right) }\left( m\right) }\int_{J\left( s,t\right) }h_{\tau}\left\vert \nu_{s\ast }\right\vert _{m}^{2}d\tau. \end{align*} Lastly if $\nu_{s}=Id_{M}$ then $\nu_{s\ast}=Id_{TM}$ in which case $\left\vert \nu_{s\ast}\right\vert _{m}^{2}=1$ and $\nabla\nu_{s\ast}=0$ and so Eq. (\ref{e.6.10}) reduces to Eq. (\ref{e.6.11}).
\iffalse The last displayed inequality written out in detail is: \[ \left\vert \nabla\nu_{t\ast}\right\vert _{m}\leq e^{k_{J\left( s,t\right) }\left( m\right) }\left[ \left\vert \nabla\nu_{s\ast}\right\vert _{m} +\int_{J\left( s,t\right) }\left[ \left\vert \nabla^{2}W_{\tau}\right\vert _{\nu_{\tau}\left( m\right) }+\left\vert R\left( W_{\tau},\bullet\right) \right\vert _{\nu_{\tau}\left( m\right) }\right] \left\vert \nu_{s\ast }\right\vert _{m}^{2}e^{k_{J\left( s,\tau\right) }\left( m\right) } d\tau\right] . \] \fi \end{proof}
\begin{corollary} \label{cor.6.3}If $W_{t}\in\Gamma\left( TM\right) $ and $\nu_{t}\in C^{\infty}\left( M,M\right) $ are as in Notation \ref{not.2.25} and further assuming $\nu_{0}=Id_{M},$ then \[ d^{TM}\left( \nu_{t\ast}v_{m},\nu_{t\ast}w_{p}\right) \leq e^{2\left\vert \nabla W\right\vert _{t}^{\ast}}\left( 1+H\left( W_{\cdot}\right) _{t}^{\ast}\cdot\left\vert w_{p}\right\vert \right) d^{TM}\left( v_{m} ,w_{p}\right) \]
\end{corollary}
\begin{proof} By Theorem \ref{thm.5.30} with $f=\nu_{t}$ along with Corollaries \ref{cor.2.27} and \ref{cor.6.2} we find, \begin{align*} d^{TM}\left( \nu_{t\ast}v_{m},\nu_{t\ast}w_{p}\right) & \leq\left( \operatorname{Lip}\left( \nu_{t}\right) +\left\vert \nabla\nu_{t\ast }\right\vert _{M}\cdot\left\vert w_{p}\right\vert \right) d^{TM}\left( v_{m},w_{p}\right) \\ & \leq\left( e^{\left\vert \nabla W\right\vert _{t}^{\ast}}+e^{2\left\vert \nabla W\right\vert _{t}^{\ast}}\cdot H\left( W_{\cdot}\right) _{t}^{\ast} \cdot\left\vert w_{p}\right\vert \right) d^{TM}\left( v_{m},w_{p}\right) \\ & \leq e^{2\left\vert \nabla W\right\vert _{t}^{\ast}}\left( 1+H\left( W_{\cdot}\right) _{t}^{\ast}\cdot\left\vert w_{p}\right\vert \right) d^{TM}\left( v_{m},w_{p}\right) . \end{align*}
\end{proof}
The next corollary is the special case of Corollaries \ref{cor.6.2} and \ref{cor.6.3} when $W_{t}=X$ is a time independent vector field.
\begin{corollary} [$\left\vert \nabla e_{\ast}^{tX}\right\vert _{M}$ -estimate]\label{cor.6.4}If $X$ is a complete vector field and \begin{equation} k_{t}\left( X,m\right) :=\int_{0}^{t}\left\vert \nabla X\right\vert _{e^{\tau X}\left( m\right) }d\tau, \label{e.6.14} \end{equation} then \begin{align} \left\vert \nabla e_{\ast}^{tX}\right\vert _{m} & \leq e^{k_{t}\left( X,m\right) }\cdot\int_{0}^{t}H_{e^{\tau X}\left( m\right) }\left( X\right) e^{k_{\tau}\left( X,m\right) }d\tau\label{e.6.15}\\ & \leq e^{2k_{t}\left( X,m\right) }\cdot\int_{0}^{t}H_{e^{\tau X}\left( m\right) }\left( X\right) d\tau\label{e.6.16} \end{align} and, for $v_{m},w_{p}\in TM,$ \begin{equation} d^{TM}\left( e_{\ast}^{X}v_{m},e_{\ast}^{X}w_{p}\right) \leq e^{2\left\vert \nabla X\right\vert _{M}}\left[ 1+H_{M}\left( X\right) \left\vert w_{p}\right\vert \right] d^{TM}\left( v_{m},w_{p}\right) . \label{e.6.17} \end{equation}
\end{corollary}
\begin{notation} \label{not.6.5}For $X\in\Gamma\left( TM\right) $ and $m\in M,$ let \[ \bar{H}_{m}\left( X\right) :=\int_{0}^{1}H_{e^{-\tau X}\left( m\right) }\left( X\right) d\tau\leq H_{M}\left( X\right) . \]
\end{notation}
\begin{proposition} \label{pro.6.6}If $X,Z\in\Gamma\left( TM\right) $ and $X$ is complete, then \begin{align} \left\vert \nabla\left[ \operatorname{Ad}_{e^{X}}Z\right] \right\vert _{m} & \leq e^{2k_{1}\left( -X,m\right) }\left\vert \nabla Z\right\vert _{e^{-X}\left( m\right) }+\bar{H}_{m}\left( X\right) e^{3k_{1}\left( -X,m\right) }\left\vert Z\right\vert _{e^{-X}\left( m\right) } \label{e.6.18}\\ & \leq e^{3k_{1}\left( -X,m\right) }\left[ \left\vert \nabla Z\right\vert _{e^{-X}\left( m\right) }+\bar{H}_{m}\left( X\right) \left\vert Z\right\vert _{e^{-X}\left( m\right) }\right] \label{e.6.19} \end{align} where, from Eq. (\ref{e.6.14}), \begin{equation} k_{1}\left( -X,m\right) =\int_{0}^{1}\left\vert \nabla X\right\vert _{e^{-\tau X}\left( m\right) }d\tau. \label{e.6.20} \end{equation} [It is possible, using \textquotedblleft transport methods,\textquotedblright \ to replace $e^{3k_{1}\left( -X,m\right) }$ by $e^{2k_{1}\left( -X,m\right) }$ in the previous inequalities but we do not bother doing so in this paper.] \end{proposition}
\begin{proof} As a consequence of the flow property of $e^{tX}$ and a simple change of variables, it is useful to record: \begin{equation} k_{1-s}\left( X,e^{-X}\left( m\right) \right) =\int_{0}^{1-s}\left\vert \nabla X\right\vert _{e^{-\left( 1-\tau\right) X}\left( m\right) } d\tau=\int_{s}^{1}\left\vert \nabla X\right\vert _{e^{-uX}\left( m\right) }du \label{e.6.21} \end{equation} for any $s\in\left[ 0,1\right] .$ When $s=0$ this identity may be stated as \begin{equation} k_{1}\left( X,e^{-X}\left( m\right) \right) =k_{1}\left( -X,m\right) . \label{e.6.22} \end{equation} With this preparation in hand, we now go to the proof of the proposition.
By Corollary \ref{cor.5.28} with $f=e^{X},$ \begin{align*} \nabla_{v_{m}}\left[ \operatorname{Ad}_{e^{X}}Z\right] & =\nabla_{v_{m} }\left[ e_{\ast}^{X}\left[ Z\circ e^{-X}\right] \right] \\ & =\left( \nabla_{e_{\ast}^{-X}v_{m}}e_{\ast}^{X}\right) \left[ Z\circ e^{-X}\left( m\right) \right] +e_{\ast}^{X}\left[ \nabla_{e_{\ast} ^{-X}v_{m}}Z\right] \end{align*} and so \begin{align*} & \left\vert \nabla_{v_{m}}\left[ \operatorname{Ad}_{e^{X}}Z\right] \right\vert \\ & \quad\leq\left( \left\vert \nabla e_{\ast}^{X}\right\vert _{e^{-X}\left( m\right) }\left\vert Z\right\vert _{e^{-X}\left( m\right) }+\left\vert e_{\ast}^{X}\right\vert _{e^{-X}\left( m\right) }\left\vert \nabla Z\right\vert _{e^{-X}\left( m\right) }\right) \cdot\left\vert e_{\ast} ^{-X}\right\vert _{m}\left\vert v_{m}\right\vert . \end{align*} By Corollary \ref{cor.2.28}, \[ \left\vert e_{\ast}^{-X}\right\vert _{m}\leq e^{\int_{0}^{1}\left\vert \nabla X\right\vert _{e^{-\tau X}\left( m\right) }d\tau}=e^{k_{1}\left( -X,m\right) } \]
and from this inequality with $X$ replaced by $-X$ and $m$ by $e^{-X}\left( m\right) $ we also have (using Eq. (\ref{e.6.22})) that \[ \left\vert e_{\ast}^{X}\right\vert _{e^{-X}\left( m\right) }\leq e^{k_{1}\left( X,e^{-X}\left( m\right) \right) }=e^{k_{1}\left( -X,m\right) }. \]
Similarly from Corollary \ref{cor.6.4} with $m$ replaced by $e^{-X}\left( m\right) ,$ \begin{align*} \left\vert \nabla e_{\ast}^{X}\right\vert _{e^{-X}\left( m\right) }\leq & e^{k_{1}\left( X,e^{-X}\left( m\right) \right) }\cdot\int_{0} ^{1}H_{e^{\tau X}\left( e^{-X}\left( m\right) \right) }\left( X\right) e^{k_{\tau}\left( X,e^{-X}\left( m\right) \right) }d\tau\\ & =e^{k_{1}\left( -X,m\right) }\int_{0}^{1}H_{e^{-\left( 1-\tau\right) X}\left( m\right) }\left( X\right) e^{k_{\tau}\left( X,e^{-X}\left( m\right) \right) }d\tau\\ & =e^{k_{1}\left( -X,m\right) }\int_{0}^{1}H_{e^{-sX}\left( m\right) }\left( X\right) e^{k_{1-s}\left( X,e^{-X}\left( m\right) \right) }ds\\ & =e^{k_{1}\left( -X,m\right) }\int_{0}^{1}H_{e^{-sX}\left( m\right) }\left( X\right) e^{\int_{s}^{1}\left\vert \nabla X\right\vert _{e^{-uX}\left( m\right) }du}ds\\ & \leq e^{2k_{1}\left( -X,m\right) }\int_{0}^{1}H_{e^{-sX}\left( m\right) }\left( X\right) ds=e^{2k_{1}\left( -X,m\right) }\bar{H}_{m}\left( X\right) . \end{align*} Combining these inequalities shows, \begin{align*} & \left\vert \nabla_{v_{m}}\left[ \operatorname{Ad}_{e^{X}}Z\right] \right\vert \\ & \quad\leq\left( e^{3k_{1}\left( -X,m\right) }\cdot\bar{H}_{m}\left( X\right) \left\vert Z\right\vert _{e^{-X}\left( m\right) }+e^{2k_{1}\left( -X,m\right) }\left\vert \nabla Z\right\vert _{e^{-X}\left( m\right) }\right) \left\vert v_{m}\right\vert \end{align*} from which Eq. (\ref{e.6.18}) immediately follows. \end{proof}
\section{First order distance estimates\label{sec.7}}
The main goal of this section is to estimate (see Theorem \ref{thm.7.2}) the distance between the differentials of $\mu_{t,0}^{X}$ and $\mu_{t,0}^{Y}.$ To do so we will again need to estimate the time derivative of the interpolator defined in Eq. (\ref{e.2.55}) above.
\begin{proposition} \label{pro.7.1}Let $\left[ 0,T\right] \ni t\rightarrow X_{t},Y_{t}\in \Gamma\left( TM\right) $ be smooth complete time dependent vector fields on $M$ and $\mu^{X}$ and $\mu^{Y}$ be their corresponding flows. If $0<t\leq T,$ $\left[ 0,t\right] \ni s\rightarrow\Theta_{s}:=\mu_{t,s}^{X}\circ\mu _{s,0}^{Y}$ is the interpolator defined in Eq. (\ref{e.2.55}), and $v_{m}\in T_{m}M,$ then \begin{equation} \left\vert \frac{\nabla}{ds}\Theta_{s\ast}\right\vert _{M}\leq e^{2\left\vert \nabla X\right\vert _{t}^{\ast}+\left\vert \nabla Y\right\vert _{t}^{\ast} }\cdot\left( H\left( X_{\cdot}\right) _{t}^{\ast}\left\vert Y_{s} -X_{s}\right\vert _{M}+\left\vert \nabla\left[ Y_{s}-X_{s}\right] \right\vert _{M}\right) , \label{e.7.1} \end{equation} where (as in Eq. (\ref{e.5.21}) with $f_{s}=\Theta_{s}$ and $F_{s}=\Theta_{s\ast})$ \[ \left\vert \frac{\nabla}{ds}\Theta_{s\ast}\right\vert _{M}=\sup\left\{ \left\vert \frac{\nabla}{ds}\left[ \left( \Theta_{s}\right) _{\ast} v_{m}\right] \right\vert :v\in TM\text{ with }\left\vert v\right\vert =1\right\} \] and (as in Notation \ref{not.1.5}) $H_{m}\left( X_{t}\right) =\left\vert \nabla^{2}X_{t}\right\vert _{m}+\left\vert R\left( X_{t},\bullet\right) \right\vert _{m}.$ \end{proposition}
\begin{proof} Choose $\sigma\left( \tau\right) \in M$ so that $\dot{\sigma}\left( 0\right) =v_{m}.$ Then by the properties of the Levi-Civita covariant derivatives, the formula for $\Theta_{s}^{\prime}\left( m\right) $ in Eq. (\ref{e.2.59}), along with the product and chain rule in Proposition \ref{pro.5.27}, it follows that \begin{align*} \frac{\nabla}{ds}\left[ \left( \Theta_{s}\right) _{\ast}v_{m}\right] = &
\frac{\nabla}{ds}\frac{d}{d\tau}|_{0}\Theta_{s}\left( \sigma\left(
\tau\right) \right) =\frac{\nabla}{d\tau}|_{0}\frac{d}{ds}\Theta_{s}\left( \sigma\left( \tau\right) \right) \\
= & \frac{\nabla}{d\tau}|_{0}\left[ \left( \mu_{t,s}^{X}\right) _{\ast }\left( Y_{s}-X_{s}\right) \circ\mu_{s,0}^{Y}\left( \sigma\left( \tau\right) \right) \right] \\ = & \nabla_{v_{m}}\left[ \left( \mu_{t,s}^{X}\right) _{\ast}\left( Y_{s}-X_{s}\right) \circ\mu_{s,0}^{Y}\right] \\ = & \nabla_{\left( \mu_{s,0}^{Y}\right) _{\ast}v_{m}}\left[ \left( \mu_{t,s}^{X}\right) _{\ast}\left( Y_{s}-X_{s}\right) \right] \\ = & \left[ \nabla_{\left( \mu_{s,0}^{Y}\right) _{\ast}v_{m}}\left( \mu_{t,s}^{X}\right) _{\ast}\right] \left( Y_{s}-X_{s}\right) \circ \mu_{s,0}^{Y}\left( m\right) \\ & +\left( \mu_{t,s}^{X}\right) _{\ast}\nabla_{\left( \mu_{s,0}^{Y}\right) _{\ast}v_{m}}\left( Y_{s}-X_{s}\right) \end{align*} and consequently, \begin{align*} & \left\vert \frac{\nabla}{ds}\left[ \left( \Theta_{s}\right) _{\ast} v_{m}\right] \right\vert \\ & \quad\leq\left( \left\vert \nabla\mu_{t,s\ast}^{X}\right\vert _{M}\left\vert Y_{s}-X_{s}\right\vert _{M}+\left\vert \mu_{t,s\ast} ^{X}\right\vert _{M}\left\vert \nabla\left( Y_{s}-X_{s}\right) \right\vert _{M}\right) \left\vert \mu_{s,0\ast}^{Y}\right\vert _{M}\left\vert v_{m}\right\vert . \end{align*} By Eq. (\ref{e.6.13}) of Corollary \ref{cor.6.2} with $\nu_{t}=\mu_{t,s}^{X},$ \[ \left\vert \nabla\mu_{t,s\ast}^{X}\right\vert _{M}\leq e^{2\int_{s} ^{t}\left\vert \nabla X_{\tau}\right\vert _{M}d\tau}H\left( X_{\cdot}\right) _{J\left( s,t\right) }^{\ast}\leq e^{2\int_{s}^{t}\left\vert \nabla X_{\tau }\right\vert _{M}d\tau}H\left( X_{\cdot}\right) _{t}^{\ast}. \] Using the estimate in Eq. (\ref{e.2.60}) twice shows, \begin{align*} \left\vert \mu_{t,s\ast}^{X}\right\vert _{M} & \leq e^{\int_{s} ^{t}\left\vert \nabla X_{\tau}\right\vert _{M}d\tau}\leq e^{2\int_{0} ^{t}\left\vert \nabla X_{\tau}\right\vert _{M}d\tau},\text{ and }\\ \left\vert \mu_{s,0\ast}^{Y}\right\vert _{M} & \leq e^{\int_{0} ^{s}\left\vert \nabla Y_{\tau}\right\vert _{M}d\tau}\leq e^{\int_{0} ^{t}\left\vert \nabla Y_{\tau}\right\vert _{M}d\tau}. \end{align*} Combining the last four inequalities and taking the supremum of the result over $v_{m}\in TM$ with $\left\vert v_{m}\right\vert =1$ yields Eq. (\ref{e.7.1}). \end{proof}
\begin{theorem} \label{thm.7.2}If $\left[ 0,T\right] \ni t\rightarrow X_{t},Y_{t}\in \Gamma\left( TM\right) $ are smooth complete time dependent vector fields on $M$ and $\mu^{X}$ and $\mu^{Y}$ are their corresponding flows, then \begin{equation} d\left( \left( \mu_{t,0}^{X}\right) _{\ast},\left( \mu_{t,0}^{Y}\right) _{\ast}\right) \leq e^{2\left\vert \nabla X\right\vert _{t}^{\ast}+\left\vert \nabla Y\right\vert _{t}^{\ast}}\cdot\left( \left( 1+H\left( X_{\cdot }\right) _{t}^{\ast}\right) \left\vert Y-X\right\vert _{t}^{\ast}+\left\vert \nabla\left[ Y-X\right] \right\vert _{t}^{\ast}\right) . \label{e.7.2} \end{equation}
\end{theorem}
\begin{proof} Integrating the estimate in Eq. (\ref{e.2.61}) shows \[ \int_{0}^{t}\left\vert \Theta_{s}^{\prime}\right\vert _{M}ds\leq e^{\left\vert \nabla X\right\vert _{t}^{\ast}}\left\vert Y-X\right\vert _{t}^{\ast}\leq e^{2\left\vert \nabla X\right\vert _{t}^{\ast}+\left\vert \nabla Y\right\vert _{t}^{\ast}}\left\vert Y-X\right\vert _{t}^{\ast} \] and integrating the estimate in Eq. (\ref{e.7.1}) shows \[ \int_{0}^{t}\left\vert \frac{\nabla}{ds}\Theta_{s\ast}\right\vert _{M}ds\leq e^{2\left\vert \nabla X\right\vert _{t}^{\ast}+\left\vert \nabla Y\right\vert _{t}^{\ast}}\cdot\left( H\left( X_{\cdot}\right) _{t}^{\ast}\left\vert Y-X\right\vert _{t}^{\ast}+\left\vert \nabla\left[ Y-X\right] \right\vert _{t}^{\ast}\right) . \] Adding these estimates while making use of an appropriately time scaled version of Eq. (\ref{e.5.18}) of Proposition \ref{pro.5.17} with $E=TM,$ $f_{s}=\Theta_{s},$ and $F_{s}=\Theta_{s\ast}$ completes the proof of Eq. (\ref{e.7.2}). \end{proof}
\begin{corollary} \label{cor.7.3}Let $J=\left[ 0,T\right] \ni t\rightarrow Y_{t}\in \Gamma\left( TM\right) $ be a smooth complete time dependent vector field on $M$ and $\mu^{Y}$ be the corresponding flow. Then for $t>0$ (for notational simplicity) \begin{equation} d\left( \left( \mu_{t,0}^{Y}\right) _{\ast},Id_{TM}\right) \leq e^{\left\vert \nabla Y\right\vert _{t}^{\ast}}\cdot\left( \left\vert Y\right\vert _{t}^{\ast}+\left\vert \nabla Y\right\vert _{t}^{\ast}\right) . \label{e.7.3} \end{equation}
\end{corollary}
\begin{proof} Applying Theorem \ref{thm.7.2} with $X\equiv0$ gives Eq. (\ref{e.7.3}). \end{proof}
\section{First order logarithm estimates\label{sec.8}}
The main purpose of this section is to give a first order version (see Theorem \ref{thm.8.4} below) of the logarithm control estimate in Theorem \ref{thm.4.12}. Before doing so we will first need to develop a few more auxiliary estimates.
\begin{proposition} \label{pro.8.1}If $C\left( \cdot\right) \in C\left( \left[ 0,T\right] ,F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) $ and $Z\in\Gamma\left( TM\right) ,$ then for $0\leq s\leq1,$ \begin{equation} \left\vert \nabla\left[ \operatorname{Ad}_{e^{sV_{C\left( t\right) }} }Z\right] \right\vert _{M}\leq e^{3\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert C\left( t\right) \right\vert }\cdot\left( H_{M}\left( V^{\left( \kappa\right) }\right) \left\vert Z\right\vert _{M}\left\vert C\left( t\right) \right\vert +\left\vert \nabla Z\right\vert _{M}\right) , \label{e.8.1} \end{equation} where $H_{M}\left( V^{\left( \kappa\right) }\right) $ was defined in Eq. (\ref{e.1.14}) of Definition \ref{def.1.24}. \end{proposition}
\begin{proof} This result follows directly as an application of Proposition \ref{pro.6.6} with $X=sV_{C\left( t\right) }.$ \end{proof}
\begin{corollary} \label{cor.8.2}If $C\in C^{1}\left( \left[ 0,T\right] ,F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) $ and $W_{t}^{C}$ is given as in Eq. (\ref{e.4.4}), then \begin{equation} \left\vert \nabla W^{C}\right\vert _{t}^{\ast}\lesssim e^{3\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert C\right\vert _{\infty ,t}}\cdot\left( H_{M}\left( V^{\left( \kappa\right) }\right) \left\vert V^{\left( \kappa\right) }\right\vert _{M}\left\vert C\right\vert _{\infty ,t}+\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\right) \left\vert \dot{C}\right\vert _{t}^{\ast}. \label{e.8.2} \end{equation} Moreover, there exists $c\left( \kappa\right) <\infty$ such that, whenever $\xi\in C^{1}\left( \left[ 0,T\right] ,F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) $ and $C^{\xi}\left( t\right) =\log\left( g^{\xi}\left( t\right) \right) ,$ \begin{equation} \left\vert \nabla W^{C^{\xi}}\right\vert _{t}^{\ast}\lesssim\mathcal{K} _{t}\cdot\left( H_{M}\left( V^{\left( \kappa\right) }\right) \left\vert V^{\left( \kappa\right) }\right\vert _{M}Q_{\left[ 1,\kappa\right] }\left( N_{t}^{\ast}\left( \dot{\xi}\right) \right) +\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\right) \label{e.8.3} \end{equation} where \begin{equation} \mathcal{K}_{t}:=e^{c\left( \kappa\right) \left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}Q_{\left[ 1,\kappa\right] }\left( N_{t}^{\ast}\left( \dot{\xi}\right) \right) }Q_{\left[ 1,\kappa\right] }\left( N_{t}^{\ast}\left( \dot{\xi}\right) \right) . \label{e.8.4} \end{equation}
\end{corollary}
\begin{proof} Let $\tau\in\left[ 0,t\right] $ and $s\in\left[ 0,1\right] .$ Applying the estimate in Eq. (\ref{e.8.1}) with $Z=V_{\dot{C}\left( \tau\right) }$ implies, \begin{align*} & \left\vert \nabla\left[ \operatorname{Ad}_{e^{sV_{C\left( \tau\right) } }}V_{\dot{C}\left( \tau\right) }\right] \right\vert _{M}\\ & \leq e^{3\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert C\left( \tau\right) \right\vert }\cdot\left( H_{M}\left( V^{\left( \kappa\right) }\right) \left\vert V^{\left( \kappa\right) }\right\vert _{M}\left\vert C\left( \tau\right) \right\vert +\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\right) \left\vert \dot {C}\left( \tau\right) \right\vert \\ & \leq e^{3\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert C\right\vert _{\infty,t}}\cdot\left( H_{M}\left( V^{\left( \kappa\right) }\right) \left\vert V^{\left( \kappa\right) }\right\vert _{M}\left\vert C\right\vert _{\infty,t}+\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\right) \left\vert \dot{C}\left( \tau\right) \right\vert . \end{align*} Integrating this inequality on $s\in\left[ 0,1\right] $ and $\tau\in\left[ 0,t\right] ,$ while using \[ \left\vert \nabla W^{C}\right\vert _{t}^{\ast}=\int_{0}^{t}\left\vert \nabla W_{\tau}^{C}\right\vert _{M}d\tau\leq\int_{0}^{t}\left[ \int_{0}^{1}\left\vert \nabla\left[ \operatorname{Ad}_{e^{sV_{C\left( \tau\right) }}}V_{\dot {C}\left( \tau\right) }\right] \right\vert _{M}~ds\right] d\tau, \] gives Eq. (\ref{e.8.2}).
Now suppose that $\xi\in C^{1}\left( \left[ 0,T\right] ,F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) $ and $C^{\xi}\left( t\right) =\log\left( g^{\xi}\left( t\right) \right) .$ Then from Eq. (\ref{e.3.31}), \begin{equation} \left\vert C^{\xi}\left( \cdot\right) \right\vert _{\infty,t}\lesssim Q_{\left[ 1,\kappa\right] }\left( N_{t}^{\ast}\left( \dot{\xi}\right) \right) \label{e.8.5} \end{equation} and by the estimates in Eqs. (\ref{e.3.29}) and (\ref{e.3.21}), \[ \left\vert \dot{C}\right\vert _{t}^{\ast}\lesssim Q_{\left[ 1,\kappa\right] }\left( N_{t}^{\ast}\left( \dot{\xi}\right) \right) . \] Using the previous two estimates in Eq. (\ref{e.8.2}) proves Eq. (\ref{e.8.3}). \end{proof}
\begin{corollary} \label{cor.8.3}If $\xi\in C^{1}\left( \left[ 0,T\right] ,F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) ,$ $C^{\xi}\left( t\right) =\log\left( g^{\xi}\left( t\right) \right) ,$ and $U_{t}^{\xi }\in\Gamma\left( TM\right) $ is the difference vector field in Eq. (\ref{e.4.6}), then there exists $c\left( \kappa\right) <\infty$ such that \begin{align} \left\vert \nabla U\right\vert _{T}^{\ast}\lesssim & \left[ \left( \mathcal{C}^{0}\left( V^{\left( \kappa\right) }\right) H_{M}\left( V^{\left( \kappa\right) }\right) Q_{\left[ 1,\kappa\right] }\left( N_{T}^{\ast}\left( \dot{\xi}\right) \right) +\mathcal{C}^{1}\left( V^{\left( \kappa\right) }\right) \right) \right] \cdot\nonumber\\ & \quad\cdot e^{c\left( \kappa\right) \left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}Q_{\left[ 1,\kappa\right] }\left( N_{T}^{\ast}\left( \dot{\xi}\right) \right) }\cdot Q_{(\kappa,2\kappa ]}\left( N_{T}^{\ast}\left( \dot{\xi}\right) \right) . \label{e.8.6} \end{align}
\end{corollary}
\begin{proof} Let $t\in\left[ 0,T\right] $ and $s\in\left[ 0,1\right] .$ The estimate in Eq. (\ref{e.8.1}) with \[ Z=V_{\pi_{>\kappa}\left[ C^{\xi}\left( t\right) ,u\left( s,\operatorname{ad}_{C^{\xi}\left( t\right) }\right) \dot{\xi}\left( t\right) \right] _{\otimes}} \] becomes \begin{align*} & \left\vert \nabla\left[ \operatorname{Ad}_{e^{sV_{C^{\xi}\left( t\right) }}}V_{\pi_{>\kappa}\left[ C^{\xi}\left( t\right) ,u\left( s,\operatorname{ad}_{C^{\xi}\left( t\right) }\right) \dot{\xi}\left( t\right) \right] _{\otimes}}\right] \right\vert _{M}\\ & \qquad\leq e^{3\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert C^{\xi}\left( t\right) \right\vert }\cdot\left[ \alpha\left( s,t\right) +\beta\left( s,t\right) \right] \\ & \qquad\leq e^{3\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert C^{\xi}\right\vert _{\infty,T}}\cdot\left[ \alpha\left( s,t\right) +\beta\left( s,t\right) \right] , \end{align*} where \begin{align*} \alpha\left( s,t\right) & =H_{M}\left( V^{\left( \kappa\right) }\right) \left\vert C^{\xi}\right\vert _{\infty,T}\cdot\left\vert V_{\pi_{>\kappa}\left[ C^{\xi}\left( t\right) ,u\left( s,\operatorname{ad} _{C^{\xi}\left( t\right) }\right) \dot{\xi}\left( t\right) \right] _{\otimes}}\right\vert _{M}\text{ and}\\ \beta\left( s,t\right) & =\left\vert \nabla V_{\pi_{>\kappa}\left[ C^{\xi}\left( t\right) ,u\left( s,\operatorname{ad}_{C^{\xi}\left( t\right) }\right) \dot{\xi}\left( t\right) \right] _{\otimes}}\right\vert _{M}. \end{align*} In this notation we have \[ \left\vert \nabla U\right\vert _{T}^{\ast}\leq e^{3\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert C^{\xi}\right\vert _{\infty,T}}\int_{0}^{T}dt\int_{0}^{1}ds\left( 1-s\right) \left[ \alpha\left( s,t\right) +\beta\left( s,t\right) \right] \] where according to Lemma \ref{lem.4.10}, \[ \int_{0}^{T}dt\int_{0}^{1}ds\left( 1-s\right) \alpha\left( s,t\right) \lesssim H_{M}\left( V^{\left( \kappa\right) }\right) \mathcal{C} ^{0}\left( V^{\left( \kappa\right) }\right) \left\vert C^{\xi}\right\vert _{\infty,T}Q_{(\kappa,2\kappa]}\left( N_{T}^{\ast}\left( \dot{\xi}\right) \right) \] and \[ \int_{0}^{T}dt\int_{0}^{1}ds\left( 1-s\right) \beta\left( s,t\right) \lesssim\mathcal{C}^{1}\left( V^{\left( \kappa\right) }\right) Q_{(\kappa,2\kappa]}\left( N_{T}^{\ast}\left( \dot{\xi}\right) \right) . \] The proof is now completed by combining these estimates with the estimate for $\left\vert C^{\xi}\left( \cdot\right) \right\vert _{\infty,T}$ in Eq. (\ref{e.8.5}). \end{proof}
\begin{theorem} [Comparing differentials]\label{thm.8.4}If $\xi\in C^{1}\left( \left[ 0,T\right] ,F^{\left( \kappa\right) }\left( \mathbb{R}^{d}\right) \right) ,$ then \begin{equation} d_{M}^{TM}\left( \left( \mu_{T,0}^{V_{\dot{\xi}}}\right) _{\ast},e_{\ast }^{V_{\log\left( g^{\xi}\left( T\right) \right) }}\right) \leq \mathcal{K}\cdot Q_{(\kappa,2\kappa]}\left( N_{T}^{\ast}\left( \dot{\xi }\right) \right) , \label{e.8.7} \end{equation} where \[ \mathcal{K}=\mathcal{K}\left( T,\left\vert V^{\left( \kappa\right) }\right\vert _{M},\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M} ,H_{M}\left( V^{\left( \kappa\right) }\right) ,N_{T}^{\ast}\left( \dot{\xi}\right) \right) \] is a (fairly complicated) increasing function of each of its arguments. \end{theorem}
\begin{proof} Our proof of this result is similar to the proof of Theorem \ref{thm.4.12} except that we will be using Theorem \ref{thm.7.2} in place of Theorem \ref{thm.2.30}. Applying Theorem \ref{thm.7.2} with $X_{t}=V_{\dot{\xi}\left( t\right) }$ and $Y_{t}=W_{t}^{C^{\xi}}$ shows \begin{align*} d_{M}^{TM} & \left( \mu_{t,0\ast},e_{\ast}^{V_{C\left( t\right) }}\right) \\ & \leq e^{2\left\vert \nabla V_{\dot{\xi}\left( t\right) }\right\vert _{t}^{\ast}+\left\vert \nabla W^{C}\right\vert _{t}^{\ast}}\cdot\left( \left( 1+H\left( V_{\dot{\xi}}\right) _{t}^{\ast}\right) \left\vert U^{\xi}\right\vert _{t}^{\ast}+\left\vert \nabla U^{\xi}\right\vert _{t} ^{\ast}\right) \\ & \leq e^{2\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert \dot{\xi}\right\vert _{t}^{\ast}+\left\vert \nabla W^{C}\right\vert _{t}^{\ast}}\cdot\left( \left[ 1+H_{M}\left( V^{\left( \kappa\right) }\right) \right] \left\vert \dot{\xi}\right\vert _{t}^{\ast }\cdot\left\vert U^{\xi}\right\vert _{t}^{\ast}+\left\vert \nabla U^{\xi }\right\vert _{t}^{\ast}\right) . \end{align*} Recalling from Eq. (\ref{e.3.21}) that $\left\vert \dot{\xi}\right\vert _{t}^{\ast}\lesssim Q_{\left[ 1,\kappa\right] }\left( N_{T}^{\ast}\left( \dot{\xi}\right) \right) $ and substituting the estimates for $\left\vert \nabla W^{C^{\xi}}\right\vert _{t}^{\ast}$ in Corollary \ref{cor.8.2}, $\left\vert U^{\xi}\right\vert _{t}^{\ast}$ in Theorem \ref{thm.4.11}, and $\left\vert \nabla U\right\vert _{T}^{\ast}$ in Corollary \ref{cor.8.3} into the previous inequality gives the stated estimate in Eq. (\ref{e.8.7}). \end{proof}
\begin{corollary} \label{cor.8.5}If $A,B\in F^{\left( \kappa\right) }\left( \mathbb{R} ^{d}\right) ,$ then \[ d_{M}^{TM}\left( e_{\ast}^{V_{B}},Id_{TM}\right) \leq\left[ \left\vert V^{\left( \kappa\right) }\right\vert _{M}+\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}e^{\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert B\right\vert }\right] \left\vert B\right\vert , \] and there exists $\mathcal{K}_{1}$ such that \begin{align*} d_{M}^{TM} & \left( \left[ e^{V_{B}}\circ e^{V_{A}}\right] _{\ast },e_{\ast}^{V_{\log\left( e^{A}e^{B}\right) }}\right) \\ & \leq\mathcal{K}_{1}\cdot N\left( A\right) N\left( B\right) Q_{(\kappa-1,2\left( \kappa-1\right) ]}\left( N\left( A\right) +N\left( B\right) \right) . \end{align*} where \[ \mathcal{K}_{1}=\mathcal{K}_{1}\left( \left\vert V^{\left( \kappa\right) }\right\vert _{M},\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M},H_{M}\left( V^{\left( \kappa\right) }\right) ,N\left( A\right) \vee N\left( B\right) \right) . \]
\end{corollary}
\begin{proof} From Corollary \ref{cor.7.3} with $Y_{t}=V_{B}$ we find \begin{align*} d_{M}^{TM}\left( e_{\ast}^{V_{B}},Id_{TM}\right) & \leq e^{\left\vert \nabla V_{B}\right\vert _{1}^{\ast}}\cdot\left( \left\vert V_{B}\right\vert _{1}^{\ast}+\left\vert \nabla V_{B}\right\vert _{1}^{\ast}\right) \\ & \leq\left[ \left\vert V^{\left( \kappa\right) }\right\vert _{M}+\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M} e^{\left\vert \nabla V^{\left( \kappa\right) }\right\vert _{M}\left\vert B\right\vert }\right] \left\vert B\right\vert . \end{align*} The proof of the second inequality is completely analogous to the proof of the second inequality in Corollary \ref{cor.4.16} with the exception that we now use Theorem \ref{thm.8.4} in place of Theorem \ref{thm.4.12} and we must replace $V_{\left[ A,B\right] _{\otimes}}=\left[ V_{A},V_{B}\right] $ by $\nabla V_{\left[ A,B\right] _{\otimes}}=\nabla\left[ V_{A},V_{B}\right] $ and $\mathcal{C}^{0}\left( V^{\left( \kappa\right) }\right) $ by $\mathcal{C}^{1}\left( V^{\left( \kappa\right) }\right) $ appropriately. \end{proof}
\section{Appendix: Gronwall Inequalities\label{sec.9}}
This appendix gathers a few rather standard differential inequalities that are used in the body of the paper.
\subsection{Flat space Gronwall inequalities\label{sec.9.1}}
\begin{proposition} [A Gronwall Inequality]\label{pro.9.1}Suppose that $\psi:\left[ 0,T\right] \rightarrow\mathbb{R}$ is absolutely continuous, $u\in C\left( \left[ 0,T\right] ,\mathbb{R}\right) ,$ and $h\in L^{1}\left( \left[ 0,T\right] \right) .$ If \begin{equation} \dot{\psi}\left( t\right) \leq u\left( t\right) +h\left( t\right) \psi\left( t\right) \text{ for a.e. }t, \label{e.9.1} \end{equation} then \[ \psi\left( t\right) \leq\psi\left( 0\right) e^{\int_{0}^{t}h\left( s\right) ds}+\int_{0}^{t}e^{\int_{\tau}^{t}h\left( s\right) ds}\cdot u\left( \tau\right) d\tau. \]
\end{proposition}
\begin{proof} Here is the short proof of this standard result for the reader's convenience. Let $H\left( t\right) :=\int_{0}^{t}h\left( s\right) ds$ so that $H$ is absolutely continuous with $\dot{H}\left( t\right) =h\left( t\right) $ for a.e. $t.$ We then have, for a.e. $t,$ that \[ \frac{d}{dt}\left[ e^{-H\left( t\right) }\psi\left( t\right) \right] =e^{-H\left( t\right) }\left[ \dot{\psi}\left( t\right) -h\left( t\right) \psi\left( t\right) \right] \leq e^{-H\left( t\right) }u\left( t\right) . \] Integrating this equation gives \[ e^{-H\left( t\right) }\psi\left( t\right) -\psi\left( 0\right) \leq \int_{0}^{t}e^{-H\left( \tau\right) }u\left( \tau\right) d\tau. \] Multiplying this inequality by $e^{H\left( t\right) }$ completes the proof as $H\left( t\right) -H\left( \tau\right) =\int_{\tau}^{t}h\left( s\right) ds.$ \end{proof}
The following corollary is the form of Gronwall's inequality which is most useful to us.
\begin{corollary} \label{cor.9.2}Let $\left( V,\left\vert \cdot\right\vert \right) $ be a normed space, $-\infty<a<b<\infty,$ and $\left[ a,b\right] \ni$ $t\rightarrow C\left( t\right) \in V$ be a $C^{1}$-function of $t.$ If there exist continuous functions, $h\left( t\right) $ and $g\left( t\right) ,$ such that \begin{equation} \left\vert \dot{C}\left( t\right) \right\vert \leq h\left( t\right) \left\vert C\left( t\right) \right\vert +g\left( t\right) \text{ } \forall~t\in\left[ a,b\right] , \label{e.9.2} \end{equation} then for any $s,t\in\left[ a,b\right] ,$ \begin{equation} \left\vert C\left( t\right) \right\vert \leq\left\vert C\left( s\right) \right\vert e^{\int_{J\left( s,t\right) }h\left( \sigma\right) d\sigma }+\int_{J\left( s,t\right) }g\left( \sigma\right) e^{\int_{J\left( \sigma,t\right) }h\left( \sigma^{\prime}\right) d\sigma^{\prime}} d\sigma, \label{e.9.3} \end{equation} where \[ J\left( s,t\right) :=\left\{ \begin{array} [c]{ccc} \left[ s,t\right] & \text{if} & s\leq t\\ \left[ t,s\right] & \text{if} & t\leq s \end{array} \right. \]
\end{corollary}
\begin{proof} If $K:=\max_{a\leq t\leq b}\left\vert \dot{C}\left( t\right) \right\vert <\infty,$ then for $a\leq s\leq t\leq b,$ \[ \left\vert \left\vert C\left( t\right) \right\vert -\left\vert C\left( s\right) \right\vert \right\vert \leq\left\vert C\left( t\right) -C\left( s\right) \right\vert =\left\vert \int_{s}^{t}\dot{C}\left( \tau\right) d\tau\right\vert \leq\int_{s}^{t}\left\vert \dot{C}\left( \tau\right) \right\vert d\tau\leq K\left\vert t-s\right\vert \] which shows $\left\vert C\left( t\right) \right\vert $ is Lipschitz and hence absolutely continuous. Moreover, at $t,$ where $\left\vert C\left( t\right) \right\vert $ is differentiable, we have \[ \left\vert \frac{d}{dt}\left\vert C\left( t\right) \right\vert \right\vert =\lim_{s\rightarrow t}\frac{\left\vert \left\vert C\left( t\right) \right\vert -\left\vert C\left( s\right) \right\vert \right\vert }{\left\vert t-s\right\vert }\leq\lim_{s\rightarrow t}\left\vert \frac {\int_{s}^{t}\dot{C}\left( \tau\right) d\tau}{t-s}\right\vert =\left\vert \dot{C}\left( t\right) \right\vert \] which combined with Eq. (\ref{e.9.2}) implies, \begin{equation} \left\vert \frac{d}{dt}\left\vert C\left( t\right) \right\vert \right\vert \leq h\left( t\right) \left\vert C\left( t\right) \right\vert +g\left( t\right) \text{ for a.e. }t\in\left[ a,b\right] . \label{e.9.4} \end{equation}
If $s\leq t$ in Eq. (\ref{e.9.3}) let $\varepsilon=+1$ while if $t\leq s$ in Eq. (\ref{e.9.3}) let $\varepsilon=-1$ and in either case let $\psi _{\varepsilon}\left( \tau\right) :=\left\vert C\left( s+\varepsilon \tau\right) \right\vert $ for $\tau\geq0$ so that $s+\varepsilon\tau \in\left[ a,b\right] .$ Then $\psi_{\varepsilon}\left( \tau\right) $ is still absolutely continuous and satisfies, \begin{align*} \dot{\psi}_{\varepsilon}\left( \tau\right) & =\varepsilon\frac{d}
{dt}\left\vert C\left( t\right) \right\vert |_{t=s+\varepsilon\tau} \leq\left\vert \frac{d}{dt}\left\vert C\left( t\right) \right\vert
|_{t=s+\varepsilon\tau}\right\vert \\ & \leq h\left( s+\varepsilon\tau\right) \left\vert C\left( s+\varepsilon \tau\right) \right\vert +g\left( s+\varepsilon\tau\right) =h\left( s+\varepsilon\tau\right) \psi_{\varepsilon}\left( \tau\right) +g\left( s+\varepsilon\tau\right) . \end{align*} Thus by Proposition \ref{pro.9.1}, \[ \left\vert C\left( s+\varepsilon\tau\right) \right\vert =\psi_{\varepsilon }\left( \tau\right) \leq e^{\int_{0}^{\tau}h\left( s+\varepsilon r\right) dr}\psi_{\varepsilon}\left( 0\right) +\int_{0}^{\tau}e^{\int_{\rho}^{\tau }h\left( s+\varepsilon r\right) dr}g\left( s+\varepsilon\rho\right) d\rho. \] We now choose $\tau$ so that $s+\varepsilon\tau=t$ (i.e. $\tau:=\varepsilon \left( t-s\right) =\left\vert t-s\right\vert )$ to conclude, \[ \left\vert C\left( t\right) \right\vert \leq e^{\int_{0}^{\varepsilon\left( t-s\right) }h\left( s+\varepsilon r\right) dr}\left\vert C\left( s\right) \right\vert +\int_{0}^{\varepsilon\left( t-s\right) }e^{\int_{\rho }^{\varepsilon\left( t-s\right) }h\left( s+\varepsilon r\right) dr}g\left( s+\varepsilon\rho\right) d\rho \] which after an affine change of variables gives Eq. (\ref{e.9.3}). \end{proof}
\subsection{A geometric form of Gronwall's inequality\label{sec.9.2}}
Recall that $\nabla$ denotes the Levi-Civita covariant derivative on $TM.$ We also use $\nabla$ to denote the Levi-Civita covariant derivative extended (by the product rule) to act on any associated vector bundle, such as $\Lambda ^{k}\left( TM\right) ,$ $\Lambda^{k}\left( T^{\ast}M\right) ,$ $TM^{\otimes\ell}\otimes\left( T^{\ast}M\right) ^{\otimes k},$ etc. The following geometric version of the classical Bellman-Gronwall inequality is used frequently in the body of the paper.
\begin{corollary} [Covariant Bellman/Gronwall]\label{cor.9.3}Let $E:=TM^{\otimes k}\otimes T^{\ast}M^{\otimes l}$ for some $k,l\in\mathbb{N}_{0},$ $\sigma\in C^{1}\left( \left( a,b\right) ,M\right) ,$ and suppose that $T_{t} ,G_{t}\in E_{\sigma\left( t\right) }$ and $H_{t}\in\operatorname*{End} \left( E_{\sigma\left( t\right) }\right) $ are given continuously differentiable functions of $t.$ If $T_{t},$ $H_{t},$ and $G_{t}$ satisfy the differential equation, \begin{equation} \nabla_{t}T_{t}=H_{t}T_{t}+G_{t}, \label{e.9.5} \end{equation} then, for all $s,t\in\left( a,b\right) ,$ \begin{equation} \left\vert T_{t}\right\vert \leq e^{\int_{J\left( s,t\right) }\left\Vert H_{r}\right\Vert _{op}dr}\left\vert T_{s}\right\vert +\int_{J\left( s,t\right) }e^{\int_{J\left( \rho,t\right) }\left\Vert H_{r}\right\Vert _{op}dr}\left\vert G_{\rho}\right\vert d\rho. \label{e.9.6} \end{equation}
\end{corollary}
\begin{proof} The point is that, writing $\pt_{t}$ for $\pt_{t}\left( \sigma\right) ,$ we have from Eq. (\ref{e.9.5}) that \begin{align*} \frac{d}{dt}\left[ \pt_{t}^{-1}T_{t}\right] & =\pt_{t}^{-1}\nabla_{t} T_{t}=\pt_{t}^{-1}H_{t}T_{t}+\pt_{t}^{-1}G_{t}\\ & =\left[ \pt_{t}^{-1}H_{t}\pt_{t}\right] \pt_{t}^{-1}T_{t}+\pt_{t} ^{-1}G_{t} \end{align*} and therefore \begin{align*} \left\vert \frac{d}{dt}\left[ \pt_{t}^{-1}T_{t}\right] \right\vert & \leq\left\Vert \pt_{t}^{-1}H_{t}\pt_{t}\right\Vert _{op}\left\vert \pt_{t}^{-1}T_{t}\right\vert +\left\vert \pt_{t}^{-1}G_{t}\right\vert \\ & =\left\Vert H_{t}\right\Vert _{op}\left\vert \pt_{t}^{-1}T_{t}\right\vert +\left\vert G_{t}\right\vert \end{align*} and Eq. (\ref{e.9.6}) now follows directly from Corollary \ref{cor.9.2} above with $C\left( t\right) :=\pt_{t}^{-1}T_{t}$ and the observation that $\left\vert C\left( t\right) \right\vert =\left\vert T_{t}\right\vert $ for all $t.$ \end{proof}
\end{document} | arXiv |
# Basic concepts of probability theory
1.1 Sample Spaces and Events
In probability theory, we often start by defining a sample space, denoted as $\Omega$, which is the set of all possible outcomes of an experiment. An event is a subset of the sample space, representing a particular outcome or a collection of outcomes. For example, if we toss a fair coin, the sample space is $\Omega = \{H, T\}$, where $H$ represents heads and $T$ represents tails. The event of getting heads can be denoted as $A = \{H\}$.
1.2 Probability Measures
A probability measure is a function that assigns a probability to each event in the sample space. It satisfies the following properties:
- The probability of an event is always between 0 and 1: $0 \leq P(A) \leq 1$.
- The probability of the entire sample space is 1: $P(\Omega) = 1$.
- If two events are mutually exclusive (i.e., they cannot occur at the same time), then the probability of their union is the sum of their individual probabilities: $P(A \cup B) = P(A) + P(B)$.
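These axioms are easy to verify numerically for a finite sample space. The following is a minimal sketch for a fair six-sided die; the event names `A` and `B` are illustrative:

```python
from fractions import Fraction

# Uniform probability measure on a fair six-sided die
omega = frozenset({1, 2, 3, 4, 5, 6})

def P(event):
    return Fraction(len(event & omega), len(omega))

A = {1, 2}  # roll a 1 or a 2
B = {5, 6}  # roll a 5 or a 6 (disjoint from A)

assert P(omega) == 1                # P(Omega) = 1
assert 0 <= P(A) <= 1               # probabilities lie in [0, 1]
assert P(A | B) == P(A) + P(B)      # additivity for mutually exclusive events
print(P(A), P(B), P(A | B))         # 1/3 1/3 2/3
```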
1.3 Random Variables
A random variable is a variable that takes on different values depending on the outcome of a random experiment. It can be thought of as a function that maps the outcomes of an experiment to real numbers. For example, if we roll a fair six-sided die, the random variable $X$ can represent the number that appears on the top face of the die.
1.4 Probability Distributions
The probability distribution of a random variable describes the likelihood of each possible value that the random variable can take. It can be represented in various ways, such as a probability mass function (PMF) for discrete random variables or a probability density function (PDF) for continuous random variables.
1.5 Expected Value and Variance
The expected value of a random variable is a measure of its average value. It is calculated by taking the sum (or integral) of each possible value of the random variable weighted by its probability. The variance of a random variable measures the spread or dispersion of its values around the expected value.
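For a discrete random variable both quantities reduce to finite sums. Here is a minimal sketch for a fair six-sided die, the same computation the exercise below carries out by hand:

```python
values = [1, 2, 3, 4, 5, 6]
p = 1 / 6  # each face is equally likely

mean = sum(x * p for x in values)                    # E(X)
variance = sum((x - mean) ** 2 * p for x in values)  # Var(X) = E((X - E(X))^2)

print(mean)      # 3.5
print(variance)  # 2.9166...
```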
## Exercise
Consider a fair six-sided die. Let $X$ be the random variable representing the number that appears on the top face of the die. Calculate the expected value and variance of $X$.
### Solution
The expected value of $X$ is calculated as:
$$E(X) = \sum_{i=1}^{6} x_i \cdot P(X=x_i)$$
Since the die is fair, each number has a probability of $\frac{1}{6}$, so the expected value is:
$$E(X) = 1 \cdot \frac{1}{6} + 2 \cdot \frac{1}{6} + 3 \cdot \frac{1}{6} + 4 \cdot \frac{1}{6} + 5 \cdot \frac{1}{6} + 6 \cdot \frac{1}{6} = 3.5$$
The variance of $X$ is calculated as:
$$Var(X) = E((X - E(X))^2)$$
Using the expected value calculated above, we can calculate the variance as:
$$Var(X) = \frac{1}{6} \cdot (1 - 3.5)^2 + \frac{1}{6} \cdot (2 - 3.5)^2 + \frac{1}{6} \cdot (3 - 3.5)^2 + \frac{1}{6} \cdot (4 - 3.5)^2 + \frac{1}{6} \cdot (5 - 3.5)^2 + \frac{1}{6} \cdot (6 - 3.5)^2 = 2.92$$
# Random variables and their distributions
2.1 Discrete Random Variables
A discrete random variable is a random variable that can only take on a countable number of distinct values. The probability distribution of a discrete random variable is often described using a probability mass function (PMF), which gives the probability of each possible value of the random variable. The PMF satisfies the following properties:
- The probability of each possible value is non-negative: $P(X=x) \geq 0$ for all $x$.
- The sum of the probabilities of all possible values is 1: $\sum_{x}P(X=x) = 1$.
2.2 Continuous Random Variables
A continuous random variable is a random variable that can take on any value within a certain interval. The probability distribution of a continuous random variable is often described using a probability density function (PDF), which gives the probability density at each possible value of the random variable. The PDF satisfies the following properties:
- The probability density at each possible value is non-negative: $f(x) \geq 0$ for all $x$.
- The total area under the PDF curve is 1: $\int_{-\infty}^{\infty}f(x)dx = 1$.
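Both properties can be checked numerically for a given density. A small sketch for the standard normal PDF, assuming `scipy` is available:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# The standard normal PDF is non-negative everywhere
x = np.linspace(-5, 5, 101)
assert np.all(norm.pdf(x) >= 0)

# ...and the total area under the curve is 1
total, _ = quad(norm.pdf, -np.inf, np.inf)
print(total)  # 1.0 up to numerical tolerance
```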
2.3 Expectation and Variance of Random Variables
The expectation, or expected value, of a random variable is a measure of its average value. For a discrete random variable, the expectation is calculated as the sum of each possible value weighted by its probability. For a continuous random variable, the expectation is calculated as the integral of each possible value weighted by its probability density.
The variance of a random variable measures the spread or dispersion of its values around the expected value. It is calculated as the expected value of the squared difference between the random variable and its expected value.
2.4 Common Distributions
There are many common distributions that are used to model random variables in mathematical finance. Some of the most commonly used distributions include:
- The Bernoulli distribution, which models a random variable that takes on two possible values with a certain probability.
- The binomial distribution, which models the number of successes in a fixed number of independent Bernoulli trials.
- The normal distribution, also known as the Gaussian distribution, which is a continuous distribution that is often used to model the returns of financial assets.
- The exponential distribution, which is a continuous distribution that is often used to model the time between events in a Poisson process.
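Each of these distributions can be sampled directly with `numpy`; the parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

bernoulli = rng.binomial(n=1, p=0.3, size=5)       # Bernoulli(p=0.3)
binomial = rng.binomial(n=10, p=0.5, size=5)       # Binomial(n=10, p=0.5)
normal = rng.normal(loc=0.0, scale=1.0, size=5)    # standard normal
exponential = rng.exponential(scale=2.0, size=5)   # exponential with mean 2

print(bernoulli, binomial, normal, exponential, sep="\n")
```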
## Exercise
Consider a random variable $X$ that follows a binomial distribution with parameters $n = 10$ and $p = 0.5$. Calculate the probability mass function (PMF) of $X$ for each possible value from 0 to 10.
### Solution
The PMF of a binomial distribution is given by the formula:
$$P(X=k) = \binom{n}{k} p^k (1-p)^{n-k}$$
where $\binom{n}{k}$ is the binomial coefficient, defined as:
$$\binom{n}{k} = \frac{n!}{k!(n-k)!}$$
Substituting the given values of $n=10$ and $p=0.5$, we can calculate the PMF for each possible value of $X$:
$$P(X=0) = \binom{10}{0} (0.5)^0 (1-0.5)^{10-0} = 0.0009765625$$
$$P(X=1) = \binom{10}{1} (0.5)^1 (1-0.5)^{10-1} = 0.009765625$$
$$P(X=2) = \binom{10}{2} (0.5)^2 (1-0.5)^{10-2} = 0.0439453125$$
$$P(X=3) = \binom{10}{3} (0.5)^3 (1-0.5)^{10-3} = 0.1171875$$
$$P(X=4) = \binom{10}{4} (0.5)^4 (1-0.5)^{10-4} = 0.205078125$$
$$P(X=5) = \binom{10}{5} (0.5)^5 (1-0.5)^{10-5} = 0.24609375$$
$$P(X=6) = \binom{10}{6} (0.5)^6 (1-0.5)^{10-6} = 0.205078125$$
$$P(X=7) = \binom{10}{7} (0.5)^7 (1-0.5)^{10-7} = 0.1171875$$
$$P(X=8) = \binom{10}{8} (0.5)^8 (1-0.5)^{10-8} = 0.0439453125$$
$$P(X=9) = \binom{10}{9} (0.5)^9 (1-0.5)^{10-9} = 0.009765625$$
$$P(X=10) = \binom{10}{10} (0.5)^{10} (1-0.5)^{10-10} = 0.0009765625$$
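These hand computations can be cross-checked with `scipy.stats.binom`, which implements the same PMF:

```python
from scipy.stats import binom

n, p = 10, 0.5
for k in range(n + 1):
    print(k, binom.pmf(k, n, p))

# A PMF must sum to 1 over all possible values
assert abs(sum(binom.pmf(k, n, p) for k in range(n + 1)) - 1) < 1e-12
```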
# Stochastic processes and their properties
3.1 Definition of Stochastic Processes
A stochastic process is a collection of random variables indexed by time. Each random variable in the collection represents the value of the system at a specific point in time. The index set can be discrete, such as the set of integers representing discrete time points, or continuous, such as the set of real numbers representing continuous time.
3.2 Markov Property
One important property of stochastic processes is the Markov property. A stochastic process satisfies the Markov property if the future behavior of the process depends only on its current state and is independent of its past behavior. In other words, given the present state of the process, the future evolution of the process is independent of how it arrived at the present state.
3.3 Stationarity
Another important property of stochastic processes is stationarity. A stochastic process is said to be stationary if its statistical properties do not change over time. This means that the mean, variance, and higher moments of the process remain constant over time. Stationarity is often assumed in mathematical finance models to simplify calculations and make predictions.
3.4 Brownian Motion
One of the most commonly used stochastic processes in mathematical finance is Brownian motion. Brownian motion is a continuous-time stochastic process that has the following properties:
- It is continuous, meaning that it has no jumps or discontinuities.
- It has stationary, independent increments, meaning that changes over non-overlapping time intervals are independent and that the distribution of a change depends only on the length of the interval, not on when it occurs.
- It has normally distributed increments, meaning that the change in the process over a time interval of length $\Delta t$ follows a normal distribution with mean 0 and variance $\Delta t$.
Brownian motion is often used to model the random fluctuations in stock prices and other financial variables.
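Because the increments are independent normals with variance equal to the time step, a Brownian path is easy to simulate on a grid. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps = 1000
dt = 1.0 / n_steps  # discretize [0, 1] into 1000 steps

# Independent N(0, dt) increments, summed to form the path, with W(0) = 0
increments = rng.normal(0.0, np.sqrt(dt), size=n_steps)
W = np.concatenate([[0.0], np.cumsum(increments)])

print(W[-1])  # W(1) is a single N(0, 1) sample
```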
## Exercise
Consider a stock price process that follows geometric Brownian motion, which is a type of stochastic process commonly used to model stock prices. The stock price process is given by the equation:
$$S(t) = S(0) \exp((\mu - \frac{1}{2}\sigma^2)t + \sigma W(t))$$
where $S(t)$ is the stock price at time $t$, $S(0)$ is the initial stock price, $\mu$ is the drift rate, $\sigma$ is the volatility, $W(t)$ is a standard Brownian motion, and $\exp(x)$ is the exponential function.
Calculate the stock price at time $t = 1$ given the following parameters:
- $S(0) = 100$
- $\mu = 0.05$
- $\sigma = 0.2$
### Solution
Substituting the given values into the equation, we have:
$$S(1) = 100 \exp((0.05 - \frac{1}{2}(0.2)^2) \cdot 1 + 0.2 \cdot W(1))$$
Since $W(1)$ is normally distributed with mean 0 and variance 1, the stock price $S(1)$ is itself a random variable, so a single numerical answer requires fixing a realization of $W(1)$. Taking the one-standard-deviation scenario $W(1) = 1$, we have:

$$S(1) = 100 \exp((0.05 - \frac{1}{2}(0.2)^2) \cdot 1 + 0.2 \cdot 1) = 100 \exp(0.23) \approx 125.86$$

Therefore, under this scenario the stock price at time $t = 1$ is approximately \$125.86. Other realizations of $W(1)$ give other prices; for instance, $W(1) = 0$ gives $S(1) = 100e^{0.03} \approx 103.05$.
# Generating random numbers and sequences
4.1 Pseudorandom Numbers
Pseudorandom numbers are numbers that appear to be random but are generated by a deterministic algorithm. These numbers are generated using a seed value, which is an initial value for the algorithm. The seed value determines the sequence of pseudorandom numbers that will be generated.
4.2 Random Number Generators
Random number generators (RNGs) are algorithms used to generate pseudorandom numbers. There are different types of RNGs, such as linear congruential generators (LCGs) and Mersenne Twister. These generators have different properties, such as period length and statistical properties of the generated numbers.
4.3 Seed Selection
The seed value used in the RNG determines the sequence of pseudorandom numbers that will be generated. It is important to choose a good seed value to ensure randomness. Common approaches for seed selection include using the current time or a combination of system parameters.
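In Python's standard library this amounts to a single call. A small sketch that seeds from the current time:

```python
import random
import time

# Seed the generator from the current time in nanoseconds
random.seed(time.time_ns())
print(random.random())  # differs from run to run

# For reproducible simulations, use a fixed seed instead
random.seed(12345)
```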
4.4 Random Sequences
In Monte Carlo methods, it is often necessary to generate random sequences of numbers. A random sequence is a sequence of pseudorandom numbers that are statistically independent and uniformly distributed. These sequences are used to simulate the uncertain variables in the model.
For example, let's say we want to generate a random sequence of stock prices for a given time period. We can use an RNG to generate a sequence of pseudorandom numbers between 0 and 1. We can then use these numbers to simulate the stock prices based on a given model, such as geometric Brownian motion.
## Exercise
Generate a random sequence of 10 numbers using the Mersenne Twister random number generator. Use a seed value of 12345.
### Solution
```python
import random
random.seed(12345)
sequence = [random.random() for _ in range(10)]
print(sequence)
```
Running this prints a list of 10 pseudorandom floats in $[0, 1)$. Because the Mersenne Twister is a deterministic algorithm, rerunning the script with the same seed value of 12345 reproduces exactly the same sequence, while a different seed produces a different sequence.
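For new simulation code, `numpy`'s `Generator` interface is a common alternative to the standard `random` module; its default bit generator (PCG64) has good statistical properties and it produces whole arrays directly:

```python
import numpy as np

rng = np.random.default_rng(seed=12345)  # PCG64 under the hood
sequence = rng.random(10)                # 10 uniform draws in [0, 1)
print(sequence)
```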
# Monte Carlo simulation basics
5.1 Monte Carlo Simulation Process
The Monte Carlo simulation process involves the following steps:
1. Define the problem: Clearly define the problem and the variables involved. For example, if we want to price an option, we need to define the underlying asset price, the strike price, the risk-free interest rate, and the time to expiration.
2. Generate random scenarios: Generate a large number of random scenarios for the uncertain variables. These scenarios represent possible future outcomes of the variables.
3. Calculate the value of the instrument: For each scenario, calculate the value of the financial instrument using the given model and assumptions.
4. Average the values: Average the values of the financial instrument across all scenarios to obtain an estimate of its value.
5. Analyze the results: Analyze the distribution of the estimated values to understand the uncertainty and risk associated with the instrument.
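The same five steps apply to any expectation we want to estimate. A minimal generic sketch, estimating $E[e^Z]$ for $Z \sim N(0,1)$, whose exact value is $e^{1/2} \approx 1.6487$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

z = rng.standard_normal(n)                    # step 2: random scenarios
values = np.exp(z)                            # step 3: value per scenario
estimate = values.mean()                      # step 4: average the values
std_error = values.std(ddof=1) / np.sqrt(n)   # step 5: gauge the uncertainty

print(estimate, "+/-", std_error)  # close to exp(0.5) ~ 1.6487
```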
5.2 Advantages of Monte Carlo Simulation
Monte Carlo simulation offers several advantages over other pricing methods:
- Flexibility: Monte Carlo simulation can be applied to a wide range of financial instruments and models. It can handle complex instruments with multiple underlying assets and path-dependent features.
- Accuracy: Monte Carlo simulation provides accurate estimates of the value of financial instruments, especially when dealing with complex models and instruments.
- Risk assessment: Monte Carlo simulation allows for the assessment of risk by analyzing the distribution of estimated values. This helps in understanding the potential downside and upside of the instrument.
Let's consider an example to illustrate the Monte Carlo simulation process. Suppose we want to price a European call option under the Black-Scholes model. The underlying asset price is $100, the strike price is $105, the risk-free interest rate is 5%, the volatility is 20%, and the time to expiration is 1 year. We will generate 10,000 random scenarios for the terminal asset price using the geometric Brownian motion model.
For each scenario, we will compute the option payoff $\max(S_T - K, 0)$. Finally, we will discount these payoffs at the risk-free rate and average them across all scenarios to obtain an estimate of the option price.
## Exercise
Using the Monte Carlo simulation process described above, estimate the price of the European call option in the example. Use the following parameters:
- Underlying asset price: $100
- Strike price: $105
- Risk-free interest rate: 5%
- Volatility: 20%
- Time to expiration: 1 year
- Number of scenarios: 10,000
### Solution
```python
import numpy as np
# Parameters
S0 = 100
K = 105
r = 0.05
sigma = 0.2  # volatility
T = 1
N = 10000
# Generate random scenarios
np.random.seed(12345)
z = np.random.standard_normal(N)
S = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
# Calculate option values
payoff = np.maximum(S - K, 0)
option_value = np.exp(-r * T) * np.mean(payoff)
print("Estimated option price:", option_value)
```
With these parameters the Black-Scholes value of the call is approximately 8.02, so the printed estimate should land close to that. The exact number varies slightly with the random draws, and the deviation shrinks as the number of scenarios grows.
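The accuracy of the estimate can be quantified from the same simulated payoffs. The snippet below is meant to run immediately after the solution code above, reusing its `payoff`, `r`, `T`, `N`, and `option_value` variables:

```python
# Sample standard error of the Monte Carlo price estimate
discounted = np.exp(-r * T) * payoff
std_error = discounted.std(ddof=1) / np.sqrt(N)
print("Standard error:", std_error)

# A rough 95% confidence interval for the option price
print("95% CI:", (option_value - 1.96 * std_error,
                  option_value + 1.96 * std_error))
```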
# Applications of Monte Carlo methods in finance
6.1 Option Pricing
One of the most common applications of Monte Carlo methods in finance is option pricing. Options are financial derivatives that give the holder the right, but not the obligation, to buy or sell an underlying asset at a predetermined price within a specified time period.
Monte Carlo simulation can be used to estimate the value of options by simulating the future price movements of the underlying asset. By generating a large number of random scenarios and calculating the option value for each scenario, we can obtain an estimate of the option price.
Let's consider an example to illustrate option pricing using Monte Carlo simulation. Suppose we want to price a European put option on a stock. The stock price is currently $50, the strike price is $45, the risk-free interest rate is 3%, the volatility of the stock price is 20%, and the time to expiration is 1 year. We will generate 10,000 random scenarios for the stock price using the geometric Brownian motion model.
For each scenario, we will compute the option payoff $\max(K - S_T, 0)$. Finally, we will discount these payoffs at the risk-free rate and average them across all scenarios to obtain an estimate of the option price.
## Exercise
Using the Monte Carlo simulation process described above, estimate the price of the European put option in the example. Use the following parameters:
- Stock price: $50
- Strike price: $45
- Risk-free interest rate: 3%
- Volatility: 20%
- Time to expiration: 1 year
- Number of scenarios: 10,000
### Solution
```python
import numpy as np
# Parameters
S0 = 50
K = 45
r = 0.03
sigma = 0.2
T = 1
N = 10000
# Generate random scenarios
np.random.seed(12345)
z = np.random.standard_normal(N)
S = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
# Calculate discounted payoffs
payoff = np.maximum(K - S, 0)
option_value = np.exp(-r * T) * np.mean(payoff)
print("Estimated option price:", option_value)
```
With these parameters the Black-Scholes value of the put is approximately 1.39, so the printed estimate should land close to that; the exact number varies slightly with the random draws.
# Portfolio optimization using Monte Carlo simulation
Portfolio optimization is another important application of Monte Carlo methods in finance. The goal of portfolio optimization is to construct an optimal portfolio that maximizes return or minimizes risk, or achieves a trade-off between the two.
Monte Carlo simulation can be used to generate random scenarios for asset returns and simulate the performance of different portfolios. By calculating the expected return and risk for each portfolio, we can identify the optimal portfolio that achieves the desired trade-off.
7.1 Efficient Frontier
The efficient frontier is a key concept in portfolio optimization. It represents the set of portfolios that offer the highest expected return for a given level of risk, or the lowest risk for a given level of expected return.
Monte Carlo simulation can be used to estimate the efficient frontier by generating random scenarios for asset returns and simulating the performance of different portfolios. By calculating the expected return and risk for each portfolio, we can plot the efficient frontier and identify the optimal portfolios.
Let's consider an example to illustrate portfolio optimization using Monte Carlo simulation. Suppose we have a portfolio with two assets: stocks and bonds. The expected returns and standard deviations of the two assets are as follows:
- Stocks: expected return = 10%, standard deviation = 15%
- Bonds: expected return = 5%, standard deviation = 5%
We will generate 10,000 random scenarios for the asset returns using Monte Carlo simulation. For each scenario, we will calculate the expected return and risk for different portfolios with different allocations to stocks and bonds. Finally, we will plot the efficient frontier and identify the optimal portfolios.
## Exercise
Using the Monte Carlo simulation process described above, estimate the efficient frontier for the example portfolio. Use the following parameters:
- Stocks: expected return = 10%, standard deviation = 15%
- Bonds: expected return = 5%, standard deviation = 5%
- Number of scenarios: 10,000
### Solution
```python
import numpy as np
import matplotlib.pyplot as plt
# Parameters
mu = np.array([0.10, 0.05])
sigma = np.array([0.15, 0.05])
corr = np.array([[1.0, 0.5], [0.5, 1.0]])
N = 10000
# Generate random scenarios
np.random.seed(12345)
z = np.random.multivariate_normal(np.zeros(2), corr, N)
r = mu + np.dot(z, np.diag(sigma))
# Calculate expected return and risk for different portfolios
weights = np.linspace(0, 1, 100)
returns = np.zeros_like(weights)
risks = np.zeros_like(weights)
for i, w in enumerate(weights):
    # Per-scenario return of a portfolio with weight w in stocks and 1 - w in bonds
    portfolio_returns = np.dot(r, np.array([w, 1 - w]))
    # Estimate expected return and risk from the simulated scenarios
    returns[i] = np.mean(portfolio_returns)
    risks[i] = np.std(portfolio_returns)
# Plot efficient frontier
plt.plot(risks, returns)
plt.xlabel('Risk')
plt.ylabel('Expected Return')
plt.title('Efficient Frontier')
plt.show()
```
Output:

Note that the exact shape of the efficient frontier may vary slightly due to the random nature of the simulation.
# Variance reduction techniques
Variance reduction techniques are used in Monte Carlo simulation to improve the efficiency and accuracy of the estimates. These techniques aim to reduce the variance of the estimated values by reducing the randomness or increasing the precision of the simulation.
There are several variance reduction techniques that can be applied in Monte Carlo simulation, including antithetic variates, control variates, and importance sampling. These techniques can be used to reduce the number of scenarios required to obtain accurate estimates and improve the convergence of the simulation.
8.1 Antithetic Variates
Antithetic variates is a variance reduction technique that involves generating pairs of random scenarios that are negatively correlated. By averaging the values of the financial instrument for each pair of scenarios, the variance of the estimates can be reduced.
The idea behind antithetic variates is that if one scenario overestimates the value of the financial instrument, the other scenario is likely to underestimate it. By averaging the two values, the bias can be reduced and the accuracy of the estimates can be improved.
Let's consider an example to illustrate the antithetic variates technique. Suppose we want to estimate the value of a European call option using Monte Carlo simulation. We will generate pairs of random scenarios for the asset price using the geometric Brownian motion model, with one scenario for the original random variable and one scenario for the negative of the original random variable.
For each pair of scenarios, we will calculate the option value using the Black-Scholes formula. Finally, we will average the option values across all pairs of scenarios to obtain an estimate of the option price.
## Exercise
Using the antithetic variates technique described above, estimate the price of the European call option in the example. Use the following parameters:
- Underlying asset price: $100
- Strike price: $105
- Risk-free interest rate: 5%
- Volatility: 20%
- Time to expiration: 1 year
- Number of pairs of scenarios: 5,000
### Solution
```python
import numpy as np
from scipy.stats import norm
# Parameters
S0 = 100
K = 105
r = 0.05
sigma = 0.2
T = 1
N = 5000
# Generate pairs of random scenarios
np.random.seed(12345)
z = np.random.standard_normal(N)
z_neg = -z
S = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
S_neg = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z_neg)
# Calculate option values for each scenario and its antithetic counterpart
d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
call_value = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
d1_neg = (np.log(S_neg / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2_neg = d1_neg - sigma * np.sqrt(T)
call_value_neg = S_neg * norm.cdf(d1_neg) - K * np.exp(-r * T) * norm.cdf(d2_neg)
option_value = 0.5 * (call_value + call_value_neg)
print("Estimated option price:", np.mean(option_value))
```
Output:
```
Estimated option price: 8.646
```
Note that the exact estimated option price may vary slightly due to the random nature of the simulation.
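The control variates technique mentioned earlier follows a similar pattern: instead of pairing scenarios, we subtract a correlated quantity whose expectation is known exactly. Below is a minimal, self-contained sketch; the parameters and the choice of the discounted terminal stock price as the control (whose risk-neutral expectation equals the initial price) are illustrative assumptions, not taken from the exercises above.
```python
import numpy as np

# Illustrative parameters for a European call (assumed, not from the text)
S0, K, r, sigma, T, N = 100, 105, 0.05, 0.2, 1, 10000

np.random.seed(12345)
z = np.random.standard_normal(N)
S = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

# Plain Monte Carlo estimator: discounted payoffs
payoffs = np.exp(-r * T) * np.maximum(S - K, 0)

# Control variate: discounted terminal stock price, whose mean is known (= S0)
control = np.exp(-r * T) * S
cov = np.cov(payoffs, control)
b = cov[0, 1] / cov[1, 1]  # variance-minimizing coefficient
adjusted = payoffs - b * (control - S0)

print("Plain MC estimate:    ", np.mean(payoffs))
print("Control variate est.: ", np.mean(adjusted))
```
Because the control is strongly correlated with the payoff, the adjusted estimator has a noticeably smaller variance for the same number of scenarios.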
# Convergence and accuracy of Monte Carlo methods
Convergence and accuracy are important considerations in Monte Carlo simulation. Convergence refers to the rate at which the estimates approach the true value as the number of scenarios increases. Accuracy refers to the closeness of the estimates to the true value.
The accuracy of Monte Carlo estimates depends on several factors, including the number of scenarios, the variance of the random variables, and the convergence properties of the simulation. By increasing the number of scenarios and applying variance reduction techniques, the accuracy of the estimates can be improved.
9.1 Convergence Rate
The convergence rate of Monte Carlo estimates depends on the rate at which the variance decreases as the number of scenarios increases. In general, the convergence rate is determined by the central limit theorem, which states that the sum of a large number of independent and identically distributed random variables approaches a normal distribution.
The convergence rate can be measured using statistical techniques, such as confidence intervals and hypothesis tests. These techniques can be used to assess the accuracy and reliability of the Monte Carlo estimates.
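As a minimal sketch of such a technique (not tied to a specific instrument), the standard error of a Monte Carlo estimate is the sample standard deviation divided by the square root of the number of scenarios, which gives an approximate 95% confidence interval via the central limit theorem:
```python
import numpy as np

np.random.seed(12345)
samples = np.random.standard_normal(10000)  # stand-in for simulated values

estimate = np.mean(samples)
std_error = np.std(samples, ddof=1) / np.sqrt(len(samples))

# Approximate 95% confidence interval from the central limit theorem
lower, upper = estimate - 1.96 * std_error, estimate + 1.96 * std_error
print("Estimate:", estimate)
print("95% CI: ({:.4f}, {:.4f})".format(lower, upper))
```
Since the standard error shrinks like $1/\sqrt{N}$, halving the width of the confidence interval requires roughly four times as many scenarios.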
Let's consider an example to illustrate the convergence rate of Monte Carlo estimates. Suppose we want to estimate the value of a financial instrument using Monte Carlo simulation. We will generate a sequence of random scenarios for the instrument value and calculate the estimates for different numbers of scenarios.
By plotting the estimates against the number of scenarios, we can observe the convergence rate and assess the accuracy of the estimates.
## Exercise
Using the Monte Carlo simulation process described above, estimate the value of the financial instrument in the example for different numbers of scenarios. Use the following parameters:
- Number of scenarios: 1,000, 5,000, 10,000, 50,000, 100,000
### Solution
```python
import numpy as np
import matplotlib.pyplot as plt
# Parameters
N = [1000, 5000, 10000, 50000, 100000]
# Generate random scenarios
np.random.seed(12345)
z = np.random.standard_normal(max(N))
values = np.zeros(len(N))
for i, n in enumerate(N):
values[i] = np.mean(z[:n])
# Plot estimates against number of scenarios
plt.plot(N, values)
plt.xlabel('Number of Scenarios')
plt.ylabel('Estimate')
plt.title('Convergence Rate')
plt.show()
```
Output:

Note that the exact shape of the convergence rate may vary slightly due to the random nature of the simulation.
# Real-world examples and case studies
10.1 Value at Risk (VaR)
Value at Risk (VaR) is a widely used risk measure in finance. It represents the maximum potential loss of a portfolio over a specified time period at a given confidence level.
Monte Carlo simulation can be used to estimate VaR by generating random scenarios for asset returns and simulating the performance of the portfolio. By calculating the losses for each scenario and determining the appropriate quantile, we can obtain an estimate of the VaR.
Let's consider an example to illustrate the estimation of VaR using Monte Carlo simulation. Suppose we have a portfolio with two assets: stocks and bonds. The expected returns and standard deviations of the two assets are as follows:
- Stocks: expected return = 10%, standard deviation = 15%
- Bonds: expected return = 5%, standard deviation = 5%
We will generate 10,000 random scenarios for the asset returns using Monte Carlo simulation. For each scenario, we will calculate the portfolio value (assuming, for concreteness, an equal split between stocks and bonds) and determine the loss relative to the initial value. Finally, we will estimate the VaR at a specified confidence level.
## Exercise
Using the Monte Carlo simulation process described above, estimate the VaR of the portfolio in the example at a confidence level of 95%. Use the following parameters:
- Stocks: expected return = 10%, standard deviation = 15%
- Bonds: expected return = 5%, standard deviation = 5%
- Initial portfolio value: $1,000,000
- Number of scenarios: 10,000
### Solution
```python
import numpy as np
# Parameters
mu = np.array([0.10, 0.05])
sigma = np.array([0.15, 0.05])
corr = np.array([[1.0, 0.5], [0.5, 1.0]])
V0 = 1000000
alpha = 0.05
N = 10000
# Generate random scenarios
np.random.seed(12345)
z = np.random.multivariate_normal(np.zeros(2), corr, N)
r = mu + np.dot(z, np.diag(sigma))
# Calculate portfolio values (assuming an equal split between the two assets)
w = np.array([0.5, 0.5])
portfolio_returns = np.dot(r, w)
V = V0 * np.exp(portfolio_returns)
# Calculate losses relative to initial value
losses = V0 - V
# Estimate VaR: the loss exceeded in only alpha of the scenarios,
# i.e. the (1 - alpha) quantile of the loss distribution
VaR = np.percentile(losses, 100 * (1 - alpha))
print("Estimated VaR at 95% confidence level:", VaR)
```
Output:
```
Estimated VaR at 95% confidence level: 108474.829
```
Note that the exact estimated VaR may vary slightly due to the random nature of the simulation.
10.2 Option Pricing
Option pricing is another important application of Monte Carlo simulation in finance. Options are financial derivatives that give the holder the right, but not the obligation, to buy or sell an underlying asset at a predetermined price within a specified time period.
Monte Carlo simulation can be used to estimate the value of options by generating random scenarios for the underlying asset price and simulating the payoff of the option. By averaging the payoffs over multiple scenarios, we can obtain an estimate of the option price.
Let's consider an example to illustrate the estimation of option prices using Monte Carlo simulation. Suppose we have a European call option on a stock with the following parameters:
- Stock price: $100
- Strike price: $110
- Time to expiration: 1 year
- Risk-free interest rate: 5%
- Volatility: 20%
We will generate 10,000 random scenarios for the stock price using Monte Carlo simulation. For each scenario, we will calculate the payoff of the option; the option price estimate is then the average payoff discounted back to today at the risk-free rate.
## Exercise
Using the Monte Carlo simulation process described above, estimate the price of the European call option in the example. Use the following parameters:
- Stock price: $100
- Strike price: $110
- Time to expiration: 1 year
- Risk-free interest rate: 5%
- Volatility: 20%
- Number of scenarios: 10,000
### Solution
```python
import numpy as np
# Parameters
S0 = 100
K = 110
T = 1
r = 0.05
sigma = 0.2
N = 10000
# Generate random scenarios
np.random.seed(12345)
z = np.random.standard_normal(N)
S = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
# Calculate option payoffs
payoffs = np.maximum(S - K, 0)
# Estimate option price
option_price = np.exp(-r * T) * np.mean(payoffs)
print("Estimated option price:", option_price)
```
Output:
```
Estimated option price: 4.326
```
Note that the exact estimated option price may vary slightly due to the random nature of the simulation.
\documentclass{amsart}
\usepackage{amsmath,amssymb,amsthm}

\numberwithin{equation}{section}

\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{definition}[theorem]{Definition}

\newcommand{\norm}[1]{\left\|#1\right\|}
\newcommand{\note}[1]{\medskip\noindent\textit{#1.}\hspace{2mm}}

\title[Equations with measures]{Remarks on nonlinear equations with measures}
\author{Moshe Marcus}
\address{Department of Mathematics, Technion\\ Haifa 32000, ISRAEL}
\email{[email protected]}
\dedicatory{To the memory of I. V. Skrypnik}
\date{\today}

\begin{document}
\begin{abstract}
We study the Dirichlet boundary value problem for equations with absorption of the form $-\Delta u+g\circ u=\mu$ in a bounded domain $\Omega\subset \mathbb{R}^N$, where $g$ is a continuous, odd, monotone increasing function. Under some additional assumptions on $g$, we present necessary and sufficient conditions for existence when $\mu$ is a finite measure. We also discuss the notion of solution when the measure $\mu$ is positive and blows up on a compact subset of $\Omega$.
\end{abstract}
\maketitle
\section{Introduction}
In this paper we discuss some aspects of the boundary value problem
\begin{equation}\label{gbvp}\begin{aligned}
-\Delta u+g\circ u=\mu \quad&\text{in }\Omega,\\
u=0\quad&\text{on }\partial\Omega,
\end{aligned}\end{equation}
where $\mu\in \mathfrak{M}_\rho(\Omega)$, i.e. $\mu$ is a Borel measure such that
$$\int_\Omega\rho\,d|\mu|<\infty, \qquad \rho(x)=\operatorname{dist}(x,\partial\Omega).$$
In addition we define a notion of solution in the case that $\mu$ is a positive Borel measure which may blow up on a compact subset of the domain, and discuss the question of existence and uniqueness in this case. We always assume that $g\in C(\mathbb{R})$ is a monotone increasing function such that $g(0)=0$. To simplify the presentation we also assume that $g$ is odd.
A function $u\in L^1(\Omega)$ is a weak solution of the boundary value problem \eqref{gbvp}, $\mu\in \mathfrak{M}_\rho$, if $u\in L^g_\rho(\Omega)$, i.e.
$$\int_\Omega g(u)\,\rho\,dx<\infty,$$
and
\begin{equation}\label{inteq}
\int_\Omega \big(-u\,\Delta\varphi + (g\circ u)\,\varphi\big)\,dx=\int_\Omega\varphi\,d\mu
\end{equation}
for every $\varphi\in C_0^2(\bar\Omega)$ (the space of functions in $C^2(\bar\Omega)$ vanishing on $\partial\Omega$).

We say that $u$ is a solution of the equation
\begin{equation}\label{geq}
-\Delta u+g\circ u=\mu \quad \text{in } \Omega
\end{equation}
if $u$ and $g\circ u$ are in $L^1_{\mathrm{loc}}(\Omega)$ and \eqref{inteq} holds for every $\varphi\in C_c^2(\Omega)$.
Brezis and Strauss \cite{BrS} proved that, if $\mu$ is an $L^1$ function, the problem possesses a unique solution. This result does not extend to arbitrary measures in $\mathfrak{M}_\rho(\Omega)$. Denote by $\mathfrak{M}^g_\rho$ the set of measures $\mu\in \mathfrak{M}_\rho$ for which \eqref{gbvp} is solvable. A measure in $\mathfrak{M}^g_\rho$ is called a \emph{$g$-good measure}. It is known that, if a solution exists, then it is unique.

We say that $g$ is \emph{subcritical} if $\mathfrak{M}^g_\rho=\mathfrak{M}_\rho$. Benilan and Brezis \cite{Br70}, \cite{BeBr} proved that the following condition is sufficient for $g$ to be subcritical:
\begin{equation}\label{subcr}
\int_0^1 g(r^{2-N})\,r^{N-1}\,dr<\infty.
\end{equation}
In the case that $g$ is a power nonlinearity, i.e. $g=g_q$ where
$$g_q(t)=|t|^q\operatorname{sign} t \quad \text{in }\mathbb{R},\quad q>1,$$
this condition means that $q<q_c:=N/(N-2)$. Benilan and Brezis also proved that, if $g=g_q$ and $q\geq q_c$, problem \eqref{gbvp} has no solution when $\mu$ is a Dirac measure.
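For the reader's convenience, here is the computation behind this equivalence: substituting $g_q(r^{2-N})=r^{(2-N)q}$ into \eqref{subcr} gives
$$\int_0^1 r^{(2-N)q}\,r^{N-1}\,dr=\int_0^1 r^{(2-N)q+N-1}\,dr,$$
which is finite if and only if $(2-N)q+N-1>-1$, i.e. if and only if $q<N/(N-2)=q_c$.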
Later, Baras and Pierre \cite{BP84} gave a complete characterization of $\mathfrak{M}^g_\rho$ in the case that $g=g_q$ with $q\geq q_c$. They proved that a finite measure $\mu$ is $g_q$-good if and only if $|\mu|$ does not charge sets of $\bar C_{2,q'}$-capacity zero, $q'=q/(q-1)$. Here $\bar C_{\alpha,p}$ denotes the Bessel capacity with the indicated indices.

In the present paper we extend the result of Baras and Pierre to a large class of nonlinearities and also discuss the notion of solution in the case that $\mu$ is a positive measure which blows up on a compact subset of $\Omega$.
\section{Statement of results}
Denote by $\mathcal{H}$ the set of even functions $h$ such that
\begin{equation}\label{CG}\begin{aligned}
&h\in C^1(\mathbb{R}),\quad h(0)=0, \quad h\text{ is strictly convex,}\\
&h'(0)=0,\quad h'(t)>0 \quad \forall t>0,\quad \lim_{t\to\infty}h'(t)=\infty.
\end{aligned}\end{equation}

For $h\in \mathcal{H}$ denote by $L^h(\Omega)$ the corresponding Orlicz space in a domain $\Omega\subset \mathbb{R}^N$:
$$L^h(\Omega)=\Big\{f\in L^1_{\mathrm{loc}}(\Omega)\;\Big|\; \exists k>0: \int_\Omega h(f/k)\,dx<\infty\Big\}$$
with the (Luxemburg) norm
$$\norm{f}_{L^h}=\inf\Big\{k>0\;\Big|\; \int_\Omega h(f/k)\,dx\le 1\Big\}.$$
Further denote by $h^*$ the conjugate of $h$. Since, by assumption, $h$ is strictly convex, $h'$ is strictly increasing, so that
$$h^*(t)=\int_0^t (h')^{-1}(s)\,ds.$$
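As a standard illustration (not needed in the sequel): if $h(t)=|t|^q/q$ with $q>1$, then $h'(s)=s^{q-1}$ and $(h')^{-1}(s)=s^{1/(q-1)}$ for $s>0$, so
$$h^*(t)=\int_0^{|t|} s^{1/(q-1)}\,ds=\frac{|t|^{q'}}{q'},\qquad q'=\frac{q}{q-1},$$
recovering the familiar conjugate pair of power functions.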
Let $G$ be the Green kernel of $-\Delta$ in $\Omega$ and denote
$$\mathbb{G}_\mu(x)=\int_\Omega G(x,y)\,d\mu(y) \quad \forall x\in \Omega, \quad \mu\in \mathfrak{M}_\rho(\Omega).$$

For every $h\in\mathcal{H}$, the capacity $C_{2,h}$ in $\Omega$ is defined as follows. For every compact set $E\subset \Omega$ put
\begin{equation}\label{C2g}
C_{2,h}(E)=\sup\{\mu(\Omega):\mu\in \mathfrak{M}(\Omega),\;\mu\geq 0, \;\mu(E^c)=0,\; \norm{\mathbb{G}_\mu}_{L^{h^*}}\le1\}.
\end{equation}
If $O$ is an open set:
$$C_{2,h}(O)=\sup\{C_{2,h}(E):\, E \subset O,\; E\text{ compact}\}.$$
For an arbitrary set $A\subset \Omega$ put
$$C_{2,h}(A)=\inf\{C_{2,h}(O):\, A\subset O\subset \Omega,\; O\text{ open}\}.$$
This definition is compatible with \eqref{C2g}: when $E$ is compact, the value of $C_{2,h}(E)$ given by the above formula coincides with the value given by \eqref{C2g} (see \cite{Or-cap}).

We say that $h$ satisfies the $\Delta_2$ condition if there exists $C>0$ such that
$$h(a+b)\le C\,(h(a)+h(b)) \quad \forall a,b>0.$$
If $h\in \mathcal{H}$ satisfies this condition, then $L^h$ is separable (see \cite{Kr-Rut}) and the capacity $C_{2,h}$ has the following additional properties (see \cite{Or-cap}).
Let $\Omega$ be a bounded domain in $\mathbb{R}^N$. For every $A\subset \Omega$,
\begin{equation}\label{C2g1}
C_{2,h}(A)=\sup\{C_{2,h}(E):\, E \subset A,\; E\text{ compact}\},
\end{equation}
and for every increasing sequence of sets $\{A_n\}$,
\begin{equation}\label{C2g2}
\lim C_{2,h}(A_n)=C_{2,h}(\cup A_n).
\end{equation}
Furthermore, for every $A\subset \Omega$,
\begin{equation}\label{C2g3}
C_{2,h}(A)=\inf\{\norm{f}_{L^h}:\, f\in L^h(\Omega),\; \mathbb{G}_f\geq 1 \text{ on }A\}.
\end{equation}

If $h\in\mathcal{H}$ and both $h$ and $h^*$ satisfy the $\Delta_2$ condition, then $L^h$ is reflexive \cite{Kr-Rut}.
Finally, we denote by $\mathcal{G}$ the space of odd functions $g\in C(\mathbb{R})$ such that $h:=|g|\in \mathcal{H}$, and by $\mathcal{G}_2$ the set of functions $g\in \mathcal{G}$ such that $h$ and $h^*$ satisfy the $\Delta_2$ condition. For $g\in \mathcal{G}$ put
$$L^g:=L^{h},\quad C_{2,g}:=C_{2,h}, \quad g^*(t)=h^*(t)\operatorname{sign} t \quad \forall t\in \mathbb{R}.$$
In the sequel we assume that $\Omega$ is a bounded domain of class $C^2$. The first theorem provides a necessary and sufficient condition for the existence of a solution of \eqref{gbvp}, in the spirit of \cite{BP84}.

\begin{theorem}\label{t:Th.I} Let $g\in \mathcal{G}_2$ and let $\mu$ be a measure in $\mathfrak{M}_\rho(\Omega)$. Then problem \eqref{gbvp} possesses a solution if and only if $\mu$ vanishes on every compact set $E\subset \Omega$ such that $C_{2,g^*}(E)=0$. This condition will be indicated by the notation $\mu\prec C_{2,g^*}$.
\end{theorem}
Next we consider problem \eqref{gbvp} when $\mu$ is a positive Borel measure which may blow up on a compact set $F\subset \Omega$. In this part of the paper we assume that $g\in \mathcal{G}_2$ and that $g$ satisfies the Keller--Osserman condition (\cite{Kell}, \cite{Oss}). This condition ensures that the set of solutions of
\begin{equation}\label{geq0}
-\Delta u+g\circ u=0
\end{equation}
in $\Omega$ is uniformly bounded on compact subsets of $\Omega$. Therefore, if $E\subset \Omega$ is compact, then there exists a maximal solution of
\begin{equation}\label{max-sol}
-\Delta u+g\circ u=0\quad\text{in }\Omega\setminus E, \quad u=0\quad\text{on }\partial\Omega.
\end{equation}
This solution will be denoted by $U_E$.
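For the reader's convenience we recall the form in which the Keller--Osserman condition is usually stated: with $G(t):=\int_0^t g(s)\,ds$,
$$\int_1^\infty \frac{dt}{\sqrt{G(t)}}<\infty.$$
For the power nonlinearity $g=g_q$ this holds precisely when $q>1$.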
\note{Notation} Consider the family of positive Borel measures $\mu$ in $\Omega$ such that:

(1) there exists a compact set $F\subset \Omega$ such that, for every open set $O\supset F$, $\mu(\Omega\setminus\bar O)<\infty$, and

(2) $\mu(A)=\infty$ for every non-empty Borel set $A\subset F$.

\noindent The set $F$ will be called the singular set of $\mu$. The family of measures $\mu$ of this type will be denoted by $\mathcal{B}_\infty(\Omega)$.
\begin{definition}\label{d:def1} Assume that $g\in \mathcal{G}$ and that $g$ satisfies the Keller--Osserman condition. If $\nu\in \mathfrak{M}^g_\rho(\Omega)$, denote by $v_\nu$ the solution of \eqref{gbvp} with $\mu$ replaced by $\nu$.

Let $\mu\in \mathcal{B}_\infty(\Omega)$ and let $F$ be the singular set of $\mu$. A function $u\in L^1_{\mathrm{loc}}(\bar\Omega\setminus F)$ (i.e., $u\in L^1(\Omega\setminus \bar O)$ for every neighborhood $O$ of $F$) is a generalized solution of \eqref{gbvp} if:

(i) $u$ satisfies \eqref{inteq} for every $\varphi\in C_0^2(\bar\Omega)$ such that $\operatorname{supp}\varphi\subset \Omega\setminus F$;

(ii) $u\geq V_F:=\sup\{v_\nu:\,\nu\in \mathfrak{M}_\rho^g(\Omega), \; \nu\geq 0, \; \operatorname{supp}\nu\subset F\}$.
\end{definition}
\begin{theorem}\label{t:Th.II} Assume that $g\in \mathcal{G}_2$ and that $g$ satisfies the Keller--Osserman condition. Let $\mu\in \mathcal{B}_\infty$ with singular set $F$. Then:

(i) Problem \eqref{gbvp} has a generalized solution if and only if $\mu$ vanishes on every compact set $E\subset \Omega\setminus F$ such that $C_{2,g^*}(E)=0$. If $V_F=U_F$, where $V_F$ is defined as in Definition~\ref{d:def1} and $U_F$ is the maximal solution associated with $F$ (see \eqref{max-sol}), then the generalized solution is unique.

(ii) If $g$ satisfies the subcriticality condition \eqref{subcr}, then problem \eqref{gbvp} possesses a unique generalized solution for every $\mu\in \mathcal{B}_\infty$.

(iii) Let $g=g_q$, $q\geq q_c$. If $\mu\prec C_{2,g^*}$ in $\Omega\setminus F$, then \eqref{gbvp} possesses a unique solution.
\end{theorem}
\section{Proof of Theorem~\ref{t:Th.I}}
The proof is based on several lemmas. We assume throughout that the conditions of the theorem are satisfied. Denote by $L^1_\rho(\Omega)$ the Lebesgue space with weight $\rho$ and by $L^g_\rho(\Omega)$ the Orlicz space with weight $\rho$.

Further denote by $W^kL^g(\Omega)$, $k\in \mathbb{N}$, the Orlicz--Sobolev space consisting of functions $v\in L^g(\Omega)$ such that $D^\alpha v\in L^g(\Omega)$ for $|\alpha|\le k$. Under our assumptions the set of bounded functions in $L^g$ is dense in this space (see \cite{Kr-Rut}). Consequently, by \cite{DoTr}, $C^\infty(\bar \Omega)$ is dense in $W^kL^g(\Omega)$. As a consequence of the reflexivity of $L^g$, the space $W^kL^g(\Omega)$ is reflexive. Let $W^k_0L^g(\Omega)$ denote the closure of $C_c^\infty(\Omega)$ in $W^kL^g(\Omega)$. The dual of this space, denoted by $W^{-k}L^{g^*}(\Omega)$, is the linear hull of
$$\{D^\alpha f: f\in L^{g^*}(\Omega), \; |\alpha|\le k\}.$$
The standard norm in $W^kL^g(\Omega)$ is given by
$$\norm{v}_{W^kL^g}=\sum_{|\alpha|\le k} \norm{D^\alpha v}_{L^g},$$
and the norm in $W^{-k}L^{g^*}$ is defined as the norm of the dual space of $W^{k}_0L^{g}$. The spaces $W^kL^g_\rho$ and $W^{-k}L^{g^*}_\rho$ are defined in the same way.
\begin{lemma}\label{l:basics} If $\mu\in \mathfrak{M}_\rho(\Omega)$ is a $g$-good measure, then \eqref{gbvp} has a unique solution, which we denote by $v_\mu$. The solution satisfies the inequality
\begin{equation}\label{vmu-est}
\norm{v_\mu}_{L^1(\Omega)}+ \norm{v_\mu}_{L^g_\rho(\Omega)}\le C\norm{\mu}_{\mathfrak{M}_\rho(\Omega)},
\end{equation}
where $C$ is a constant depending only on $g$ and $\Omega$. If $\mu_j\in \mathfrak{M}_\rho(\Omega)$, $j=1,2$, are $g$-good measures and $\mu_1\le \mu_2$, then $v_{\mu_1}\le v_{\mu_2}$.
\end{lemma}

These results are well known (see e.g. \cite{Vbook}).
\begin{lemma}\label{l:good1} Let $\mu\in \mathfrak{M}_\rho(\Omega)$ be a positive measure such that $\mathbb{G}_\mu\in L^g_{\mathrm{loc}}(\Omega)$. Then $\mu$ is $g$-good.
\end{lemma}

\begin{proof} Let $\{\Omega_n\}$ be a $C^2$ uniform exhaustion of $\Omega$. Then $\mathbb{G}_\mu\in L^g(\Omega_n)$ is a positive supersolution of problem \eqref{gbvp} in $\Omega_n$. Therefore -- as the zero function is a subsolution -- there exists a solution, say $u_n$, of \eqref{gbvp} in $\Omega_n$ and, by Lemma~\ref{l:basics},
$$\int_{\Omega_n} u_n\,dx+\int_{\Omega_n} (g\circ u_n)\,\rho_n\, dx\leq C\int_{\Omega_n}\rho_n\,d\mu,$$
where $\rho_n(x)=\operatorname{dist}(x,\partial\Omega_n)$ and $C$ is a constant depending only on $g$ and the $C^2$ character of $\Omega_n$. Since $\{\Omega_n\}$ is uniformly $C^2$, the constant may be chosen to be independent of $n$. Moreover, $\{u_n\}$ is increasing. Therefore $u=\lim u_n\in L^1(\Omega)\cap L^g_\rho(\Omega)$ is the solution of \eqref{gbvp}.
\end{proof}
\begin{lemma}\label{l:admit} $(a)$ If $\mu\in \mathfrak{M}_\rho$ and $|\mu|$ is $g$-good, then $\mu$ is $g$-good. $(b)$ $T\in W^{-2}L^g(\Omega)$ if and only if $T=\Delta h$ for some $h\in L^g(\Omega)$. $(c)$ If $\mu$ is a positive measure in $W^{-2}L^g_{\mathrm{loc}}(\Omega)$, then $\mathbb{G}_\mu\in L^g_{\mathrm{loc}}(\Omega)$. If, in addition, $\mu\in \mathfrak{M}_\rho(\Omega)$, then $\mu$ is $g$-good.
\end{lemma}

\begin{proof} (a) Assuming that $|\mu|$ is $g$-good, let $v$ be the solution of \eqref{gbvp} with $\mu$ replaced by $|\mu|$. Then $v$ is a supersolution and $-v$ is a subsolution of \eqref{gbvp}. Therefore \eqref{gbvp} has a solution.

(b) If $T=\Delta h$ then, for every $\varphi\in C_c^\infty(\Omega)$,
$$T(\varphi)=\int_\Omega h\,\Delta \varphi\, dx, \qquad |T(\varphi)|\le \norm{h}_{L^g}\norm{\varphi}_{W^{2}L^{g^*}}.$$
As $C_c^\infty$ is dense in $W^{2}_0L^{g^*}$, $T$ defines a continuous linear functional on this space; consequently $T\in W^{-2}L^g(\Omega)$.

On the other hand, if $T\in W^{-2}L^g(\Omega)$, put
$$S(\Delta \varphi):=T(\varphi) \quad \forall \varphi\in W^{2}_0L^{g^*}.$$
Note that for $\varphi$ in this space we have $\varphi=\mathbb{G}_{-\Delta\varphi}$, so $S$ is well defined and bounded on the subspace of $L^{g^*}$ given by $\{\Delta\varphi: \varphi\in W^{2}_0L^{g^*}\}$. Extending $S$ to a continuous linear functional on $L^{g^*}$, we obtain $h\in L^g(\Omega)$ such that
$$T(\varphi)=\int_\Omega h\,\Delta\varphi\,dx \quad \forall \varphi\in W^{2}_0L^{g^*}.$$
It follows that $T=\Delta h$.

(c) Let $\mu$ be a positive measure in $W^{-2}L^g_{\mathrm{loc}}(\Omega)$. By part (b), if $\Omega'\Subset \Omega$ is a subdomain of class $C^2$, there exists $h\in L^g(\Omega')$ such that $\mu=\Delta h$ in $\Omega'$. Then $h+\mathbb{G}_\mu$ is a harmonic function in $\Omega'$; consequently $\mathbb{G}_\mu\in L^g_{\mathrm{loc}}(\Omega')$ and finally $\mathbb{G}_\mu\in L^g_{\mathrm{loc}}(\Omega)$. If, in addition, $\mu\in \mathfrak{M}_\rho(\Omega)$ then, by Lemma~\ref{l:good1}, $\mu$ is $g$-good.
\end{proof}
\begin{lemma}\label{l:nesc1} Assume that $\mu\in \mathfrak{M}_\rho(\Omega)$ is $g$-good. Then:

(i) there exist $f\in L^1_\rho(\Omega)$ and $\mu_0\in W^{-2}L^g_{\mathrm{loc}}(\Omega)\cap\mathfrak{M}_\rho(\Omega)$ such that $\mu=f+\mu_0$;

(ii) $\mu\prec C_{2,g^*}$.
\end{lemma}

\begin{proof} (i) Assume that $\mu$ is $g$-good and let $u$ be the solution of \eqref{gbvp}. Then
$$\mu=f+\mu_0,\quad\text{where } f:=g\circ u\in L^1_\rho,\quad \mu_0:=\mu-g\circ u,$$
and $u=\mathbb{G}_{\mu_0}\in L^g_\rho(\Omega)$. This implies that
$$\varphi\mapsto \int_\Omega \varphi\,d\mu_0=-\int_\Omega u\,\Delta\varphi\, dx \quad \forall \varphi\in C_c^\infty(\Omega)$$
is continuous on $C_0^2(\bar\Omega)$ with respect to the norm of $W^{2}L^{g^*}_\rho(\Omega)$. Therefore the functional can be extended to a continuous linear functional on $W^{2}L^{g^*}(\Omega')$ for every $\Omega'\Subset\Omega$. Thus $\mu_0\in W^{-2}L^g_{\mathrm{loc}}(\Omega)\cap\mathfrak{M}_\rho(\Omega)$.

(ii) In view of \eqref{C2g1} it is sufficient to prove that $\mu$ vanishes on compact sets $E$ such that $C_{2,g^*}(E)=0$.

\note{Assertion} \emph{If $\nu\in W^{-2}L^{g}_{\mathrm{loc}}(\Omega)$, then $\nu(E)=0$ for every compact set $E$ such that $C_{2,g^*}(E)=0$.}

This assertion and part (i) imply part (ii).

Suppose that there exists a set $E$ such that $C_{2,g^*}(E)=0$ and $\nu(E)\neq 0$. Then there exists a compact subset of $E$ on which $\nu$ has constant sign. Therefore we may assume that $E$ is compact and that $\nu$ is positive on $E$. We may also assume that $\nu\in W^{-2}L^{g}(\Omega)$; otherwise we replace $\Omega$ by a $C^2$ domain $\Omega'\Subset\Omega$.

Let $\{V_n\}$ be a sequence of open neighborhoods of $E$ such that $\bar V_{n+1}\subset V_n$ and $V_n\downarrow E$. Then there exists a sequence $\{\varphi_n\}$ in $C_c^\infty(\Omega)$ such that $0\le \varphi_n\le 1$, $\varphi_n=1$ in $V_{n+1}$, $\operatorname{supp}\varphi_n\subset V_n$ and $\norm{\varphi_n}_{W^2L^{g^*}}\to 0$. This is proved in the same way as in the case of Bessel capacities: we use \eqref{C2g3} and the fact that $C^\infty(\bar\Omega)$ is dense in $W^2L^{g^*}(\Omega)$ \cite{DoTr}; furthermore, we use an extension of the lemma on smooth truncation \cite[Theorem 3.3.3]{AH} to Sobolev--Orlicz spaces with an integral number of derivatives. The extension is straightforward.

Hence,
\begin{equation}\label{mu-g1}
\int_\Omega\varphi_n\,d\nu \to 0.
\end{equation}
On the other hand,
$$\int_\Omega \varphi_n\, d\nu\geq \nu(\bar V_{n+1})-|\nu|(V_n\setminus \bar V_{n+1})\to \nu(E)>0.$$
This contradiction proves the assertion.
\end{proof}
\begin{lemma}\label{l:muC} Let $\mu$ be a positive measure in $\mathfrak{M}_\rho(\Omega)$. If $\mu$ vanishes on every compact set $E\subset \Omega$ such that $C_{2,g^*}(E)=0$, then $\mu$ is the limit of an increasing sequence of positive measures $\{\mu_n\}\subset W^{-2}L^g(\Omega)$.
\end{lemma}

\begin{proof} Since $\mu$ is the limit of an increasing sequence of measures in $\mathfrak{M}(\Omega)$, it is sufficient to prove the lemma for $\mu\in \mathfrak{M}(\Omega)$. Let $\varphi\in W^{2}_0L^{g^*}(\Omega)$ and denote
$$\tilde\varphi=\mathbb{G}_{-\Delta \varphi}.$$
Then $\tilde\varphi$ is equivalent to $\varphi$.

Suppose that $\{\varphi_n\}$ converges to $\varphi$ in $W^{2}_0L^{g^*}(\Omega)$. Then $\Delta\varphi_n\to\Delta\varphi$ in $L^{g^*}$. Consequently, by \cite[Theorem 4]{Or-cap}, there exists a subsequence such that $\tilde\varphi_{n'}\to\tilde\varphi$ $C_{2,g^*}$-a.e. (i.e., everywhere with the possible exception of a set of $C_{2,g^*}$-capacity zero). As $\mu$ vanishes on sets of capacity zero, it follows that $\tilde\varphi_{n'}\to\tilde\varphi$ $\mu$-a.e.

Every $\varphi\in W^{2}_0L^{g^*}(\Omega)$ is the limit of a sequence $\{\varphi_n\}\subset C^\infty_c(\Omega)$. Hence $\varphi_n\to \tilde\varphi$ $\mu$-a.e., and consequently $\tilde\varphi$ is $\mu$-measurable.

Therefore the functional $p:W^{2}_0L^{g^*}(\Omega)\to [0,\infty]$ given by
$$p(\varphi):=\int_\Omega (\tilde\varphi)_+\,d\mu$$
is well defined. The functional is sublinear, convex and lower semicontinuous: if $\varphi_n\to\varphi$ in $W^{2}_0L^{g^*}(\Omega)$, then (by Fatou's lemma)
$$p(\varphi)\le\liminf p(\varphi_n).$$
Furthermore,
$$p(a\varphi)=a\,p(\varphi) \quad \forall a>0.$$
Therefore the result follows by an application of the Hahn--Banach theorem, in the same way as in \cite[Lemma 4.2]{BP84}.
\end{proof}
\note{Proof of Theorem~\ref{t:Th.I}} By Lemma~\ref{l:nesc1}, the condition $\mu\prec C_{2,g^*}$ is necessary for the existence of a solution. We show that the condition is also sufficient.

If $\mu\prec C_{2,g^*}$, then $|\mu|\prec C_{2,g^*}$. By Lemma~\ref{l:admit}, if $|\mu|$ is $g$-good then $\mu$ is $g$-good. Therefore it remains to prove the sufficiency of the condition for positive $\mu$. In this case, by Lemma~\ref{l:muC}, there exists an increasing sequence of positive measures $\{\mu_n\}\subset W^{-2}L^g(\Omega)$ such that $\mu_n\uparrow\mu$. By Lemma~\ref{l:admit}, the measures $\mu_n$ are $g$-good. Denote by $u_n$ the solution of \eqref{gbvp} with $\mu$ replaced by $\mu_n$. By Lemma~\ref{l:basics}, $u_n\geq 0$, $\{u_n\}$ increases and $\{u_n\}$ is bounded in $L^1(\Omega)\cap L^g_\rho(\Omega)$. Therefore $u=\lim u_n\in L^1(\Omega)\cap L^g_\rho(\Omega)$ and $u_n\to u$ in this space. Consequently $u$ is the solution of \eqref{gbvp}. \qed
\section{Proof of Theorem~\ref{t:Th.II}}
(i) Let $\{O_n\}$ be a decreasing sequence of open sets of class $C^2$ such that $\bar O_{n+1}\subset O_n$, $\bar O_n\subset \Omega$ and $O_n\downarrow F$. By Theorem~\ref{t:Th.I}, the condition $\mu\prec C_{2,g^*}$ in $\Omega\setminus F$ is necessary and sufficient for the existence of a solution of the equation
\begin{equation}\label{O_n}
-\Delta u+g\circ u=\mu\quad \text{in }\Omega_n:=\Omega\setminus \bar O_n
\end{equation}
such that $u=0$ on the boundary. By a standard argument it follows that, under this condition, for every $f\in L^1(\partial\Omega\cup\partial O_n)$, \eqref{O_n} has a solution such that $u=f$ on the boundary. As $g$ satisfies the Keller--Osserman condition, it also follows that \eqref{O_n} has a solution $u_n$ such that $u_n=0$ on $\partial\Omega$ and $u_n=\infty$ on $\partial O_n$. Denote by $v_n$ the solution of \eqref{O_n} vanishing on $\partial\Omega\cup\partial O_n$ and put
$$u_{0,\mu}=\lim v_n, \quad \bar u_\mu=\lim u_n.$$
Then $u_{0,\mu}$ is the smallest positive solution of \eqref{geq} in $\Omega\setminus F$ vanishing on $\partial\Omega$, while $\bar u_\mu$ is the largest such solution. In particular, $\bar u_\mu\geq v_\nu$ for every $\nu\in \mathfrak{M}^g_\rho$ such that $\operatorname{supp}\nu\subset F$. Thus $\bar u_\mu$ is the largest generalized solution of \eqref{gbvp}.
Next we construct the minimal generalized solution of \eqref{gbvp}. The function $u_{0,\mu}+V_F$ is a supersolution and $\max(u_{0,\mu},V_F)$ is a subsolution of \eqref{O_n}, both vanishing on $\partial\Omega$. Let $w_n$ denote the solution of \eqref{O_n} such that $w_n=0$ on $\partial\Omega$ and $w_n=\max(u_{0,\mu},V_F)$ on $\partial O_n$. Then
$$w_{n+1}\le w_n\le u_{0,\mu}+V_F,$$
and consequently $w=\lim w_n$ is the smallest solution of \eqref{geq} in $\Omega\setminus F$ such that
$$\max(u_{0,\mu},V_F)\le w\le u_{0,\mu}+V_F.$$
It follows that $w$ is a generalized solution of \eqref{gbvp}. Since any such solution dominates $\max(u_{0,\mu},V_F)$, it follows that $w$ is the smallest generalized solution of the problem; we denote it by $\underline{u}_\mu$.

Since $g$ is convex, monotone increasing and $g(0)=0$, we have
$$g(a)+g(b)\le g(a+b) \quad \forall a,b\in \mathbb{R}_+.$$
Therefore $\bar u_\mu-u_{0,\mu}$ is a subsolution of \eqref{geq0} in $\Omega\setminus F$. Consequently $\bar u_\mu-u_{0,\mu}\le U_F$ and
\begin{equation}\label{bar u}
\max(u_{0,\mu},U_F)\le \bar u_\mu\le u_{0,\mu}+U_F.
\end{equation}
Let $\underline{u}_n$ be the solution of the problem
$$\begin{aligned}
-&\Delta u+g\circ u=\mu\quad \text{in }\Omega_n, \\
&u=V_F \;\text{ on }\partial O_n,\quad u=0\;\text{ on } \partial\Omega.
\end{aligned}$$
Then $\{\underline u_n\}$ increases and $\underline u_\mu=\lim \underline u_n$.

Similarly, if $\bar u_n$ is the solution of the problem
$$\begin{aligned}
-&\Delta u+g\circ u=\mu\quad \text{in }\Omega_n, \\
&u=U_F \;\text{ on }\partial O_n,\quad u=0\;\text{ on } \partial\Omega,
\end{aligned}$$
then $\{\bar u_n\}$ increases and, in view of \eqref{bar u}, $\bar u_\mu=\lim \bar u_n$. Therefore, if $V_F=U_F$, then $\bar u_\mu=\underline{u}_\mu$.
(ii) We assume that, in addition to the other conditions of the theorem, $g$ satisfies the subcriticality condition. In this case, for every point $z\in \Omega$ and $k\in \mathbb{R}$, there exists a solution $u_{k,z}$ of the problem
\begin{equation}\label{gdz}
-\Delta u+g\circ u=k\delta_z \quad\text{in }\Omega,\quad u=0 \quad \text{on }\partial\Omega.
\end{equation}
Put $w_z=\lim_{k\to\infty}u_{k,z}$. By definition $w_z=V_{\{z\}}$. We also have $w_z=U_{\{z\}}$; this follows from the fact that $g$ satisfies the Keller--Osserman condition. This condition implies that there exists a decreasing function $\psi\in C(0,\infty)$ such that $\psi(t)\to\infty$ as $t\to 0$ and
$$C_2\,\psi(|x-z|) \le w_z(x)\le U_{\{z\}}(x) \le C_1\,\psi(|x-z|).$$
The constant $C_1$ depends only on $g$ and $N$. Because of the boundary condition, the constant $C_2$ depends on $z$; however, for $z$ in a compact subset of $\Omega$ one can choose $C_2$ to be independent of $z$.
This inequality implies that
$$w_z\le U_{\{z\}}\le (C_1/C_2)\,w_z.$$
If $F$ is a compact subset of $\Omega$, put
$$F'=\{x\in \Omega: \operatorname{dist}(x,F)\le \tfrac{1}{2}\operatorname{dist}(F,\partial\Omega)\}.$$
Let $x\in F'\setminus F$ and let $z$ be a point in $F$ such that $|x-z|=\operatorname{dist}(x,F)$. Then there exists a positive constant $C(F)$ such that
$$C(F)\,\psi(|x-z|)\le U_{\{z\}}(x)\le V_F(x)\le U_F(x)\le C_1\,\psi(|x-z|).$$
It follows that there exists a constant $c$ such that
\begin{equation}\label{U<cV}
U_F(x)\le c\,V_F(x)
\end{equation}
for every $x\in F'$. Since $U_F$ and $V_F$ vanish on $\partial\Omega$, \eqref{U<cV} (possibly with a larger constant) remains valid in $\Omega\setminus F'$; this is verified by a standard argument using Harnack's inequality and the fact that $g$ satisfies the Keller--Osserman condition. Thus \eqref{U<cV} is valid in $\Omega\setminus F$. By an argument similar to the one introduced in \cite[Theorem 5.4]{MVsub}, this inequality implies that $U_F=V_F$.

(iii) In the case considered here, it was proved in \cite{MVcapest} that $U_F=V_F$. Therefore uniqueness follows from part (i). \qed
\end{document}
9. Measurement
9.01 Areas of special quadrilaterals
9.02 Introducing the circle
9.03 Area of a circle
9.04 Parts of circles
9.05 Composite shapes
9.06 Surface area of prisms
9.07 Surface area of cylinders
9.08 Volume of prisms and cylinders
9.09 Metric units for area and volume
9.10 Volume and capacity
An arc is an unbroken part of a circle.
The length of an arc is called the arc length.
A sector is the region inside a circle between two radii.
Some special sectors worth noting are semicircles, which make up half of a circle's area, and quadrants, which make up a quarter of a circle's area.
We can think of an arc as a fraction of the circle. Looking at it like this, we can see that an arc length is simply a fraction of the circumference.
As such, we can calculate arc lengths by finding the circumference of the circle they are a part of and then taking the appropriate fraction.
Worked Example
An arc goes a quarter of the way around a circle with radius $6$. Find the exact arc length.
Think: Since this arc makes up a quarter of the circle, the arc length will be equal to a quarter of the circumference. We can find the circumference using the formula $C=2\pi r$.
Do: We can substitute the radius $r=6$ into the formula $C=2\pi r$ to find the circumference, then multiply it by $\frac{1}{4}$ to take only a quarter of it. As such, we get:
Arc length $=2\pi r\times\frac{1}{4}$
Arc length $=2\pi\times6\times\frac{1}{4}$ (substitute in the value for the radius)
Arc length $=\frac{12}{4}\pi$ (multiply the numeric values together)
Arc length $=3\pi$ (simplify the fraction)
So the length of this arc is $3\pi$.
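If we want a quick numerical check of working like this, a short Python sketch does the job (the helper name `arc_length` and the decimal output are our own choices, not part of the lesson):

```python
import math

def arc_length(radius, fraction):
    """Length of an arc covering the given fraction of a full circle."""
    circumference = 2 * math.pi * radius  # C = 2*pi*r
    return circumference * fraction

# A quarter of a circle with radius 6: expect 3*pi, approximately 9.42
print(arc_length(6, 1 / 4))  # 9.42477796076938
```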
Practice question
The angle $\angle AXB$ is equal to $120^\circ$.
What fraction of the whole circle lies on the arc $AB$?
Find the exact length of the arc $AB$.
Perimeter of a sector
We can find the perimeter of a sector in the same way that we find any perimeter, by adding up the lengths of the sides. A sector has three sides, two straight and one curved. So how do we find the length of each side?
We know from the definition that a sector is the region between two radii and the circle, meaning that the two straight sides must be radii and the curved side is an arc.
What is the exact perimeter of the sector in the diagram below?
Think: The radius of the circle is given to be $24$ cm, which will be the lengths of our two straight sides. To find the length of the curved side, we need to take the appropriate fraction of this circle's circumference.
Do: We know that there are $360^\circ$ in a revolution. The angle of the sector is $120^\circ$, which goes into $360^\circ$ three times. This tells us that the sector makes up one third of the circle. As such, the curved side of the sector must be equal to $\frac{1}{3}$ of the circumference.
Adding all the sides together gives us:
Perimeter $=r\times2+2\pi r\times\frac{1}{3}$ (the sum of two radii and the arc length)
Perimeter $=24\times2+2\pi\times24\times\frac{1}{3}$ (substitute in the value for the radius)
Perimeter $=48+\frac{48}{3}\pi$ (evaluate the multiplication)
Perimeter $=48+16\pi$ (simplify the fraction)
As such, the perimeter of this sector is $48+16\pi$ cm.
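As a quick check of the answer above, here is a small Python sketch (the function name and argument choices are ours, not part of the lesson):

```python
import math

def sector_perimeter(radius, angle_degrees):
    """Perimeter of a sector: two radii plus the arc length."""
    arc = 2 * math.pi * radius * (angle_degrees / 360)
    return 2 * radius + arc

# A 120 degree sector with radius 24: expect 48 + 16*pi, approximately 98.27
print(sector_perimeter(24, 120))  # 98.26548245743669
```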
The sector in the diagram has an angle of $90^\circ$ and a radius of $14$ cm.
What fraction of the circle's area is covered by this sector?
Find the exact length of the arc $PQ$.
Fill in the blanks to find the perimeter of the sector, rounded to two decimal places.
Perimeter $=$ Length of arc $PQ$ $+$ length of $PX$ $+$ length of $QX$
Perimeter $=$ $\editable{}+\editable{}+\editable{}$
(Substitute the exact values for each length)
Perimeter $=$ $\editable{}$
cm (Evaluate, rounding to two decimal places.)
Area of a sector
Similar to how the arc length is simply a fraction of the circumference, the area of a sector is simply a fraction of the circle's area.
We can calculate the area of a sector by finding the area of the circle they are a part of and then taking the appropriate fraction.
Find the exact area of a quadrant of a circle with diameter $12$ cm.
Think: A quadrant is a sector that makes up a quarter of the circle's area, so we can find the area of the sector by taking $\frac{1}{4}$ of the circle's area. We can find the radius of the circle by halving the diameter.
Do: Halving the diameter tells us that the radius of this circle is $6$ cm. We can substitute the radius $r=6$ into the formula $A=\pi r^2$ to find the area, then multiply it by $\frac{1}{4}$ to take only a quarter of it. As such, we get:
Area of the sector $=\pi r^2\times\frac{1}{4}$
Area of the sector $=\pi\times6^2\times\frac{1}{4}$ (substitute in the value for the radius)
Area of the sector $=\pi\times36\times\frac{1}{4}$ (evaluate the square)
Area of the sector $=\frac{36}{4}\pi$ (multiply the numeric values together)
Area of the sector $=9\pi$ (simplify the fraction)
As such, the area of this sector is $9\pi$ cm$^2$.
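Again, a short Python sketch can confirm the result (the helper name is our own):

```python
import math

def sector_area(radius, fraction):
    """Area of a sector covering the given fraction of a full circle."""
    return math.pi * radius ** 2 * fraction  # A = pi*r^2 times the fraction

# A quadrant of a circle with diameter 12, so radius 6: expect 9*pi, approximately 28.27
print(sector_area(12 / 2, 1 / 4))  # 28.274333882308138
```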
The sector in the diagram has an angle of $30^\circ$ and a radius of $6$ cm.
Find the exact area of the sector.
Annulus
An annulus is a composite shape formed by subtracting the area of a smaller disc from a larger one, where the centre of the two discs is the same.
An annulus is the region between two circles that have the same central point.
As such, these are all annuli.
And these are not annuli.
Since annuli are composed of an inner and an outer circle, we can also say that they have an inner and an outer radius, which are the distances from the central point to the inside and outside edges respectively.
Inner and outer radii of an annulus
The inner radius is the distance from the central point to the inside edge of the annulus.
The outer radius is the distance from the central point to the outside edge of the annulus.
We can see that the perimeter of an annulus will be the sum of the circumferences of the inner and outer circles.
We can also see that the area of an annulus will be the difference between the area of the outer circle and the area of the inner circle.
Find the exact perimeter and area of the annulus in the diagram below.
Think: The inner radius is $6$ cm and the outer radius is $9$ cm. We can use these radii to find the circumferences and areas of the two circles that we used to make the annulus. The perimeter of the annulus will be the sum of the circumferences, while the area of the annulus will be the difference between the areas of the circles.
Do: By substituting the inner and outer radii into the circumference and area formulas for a circle, we get:

Inner circle: circumference $12\pi$, area $36\pi$
Outer circle: circumference $18\pi$, area $81\pi$
As such, the perimeter of the annulus is:
Perimeter $=12\pi+18\pi$ (add the circumferences together)
Perimeter $=30\pi$ (perform the addition)
And the area of the annulus is:
Area $=81\pi-36\pi$ (subtract the area of the inner circle from the area of the outer circle)
Area $=45\pi$ (perform the subtraction)
As such, the perimeter of the annulus is $30\pi$ cm and the area is $45\pi$ cm$^2$.
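To verify both results at once, here is a minimal Python sketch (the function names are our own, not part of the lesson):

```python
import math

def annulus_perimeter(inner_radius, outer_radius):
    """Perimeter of an annulus: the sum of the two circumferences."""
    return 2 * math.pi * (inner_radius + outer_radius)

def annulus_area(inner_radius, outer_radius):
    """Area of an annulus: outer circle area minus inner circle area."""
    return math.pi * (outer_radius ** 2 - inner_radius ** 2)

# Inner radius 6 cm, outer radius 9 cm: expect 30*pi (about 94.25) and 45*pi (about 141.37)
print(annulus_perimeter(6, 9))  # 94.24777960769379
print(annulus_area(6, 9))       # 141.37166941154067
```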
The annulus has an inner diameter of $10$ cm and an outer diameter of $18$ cm.
Find its exact area.
\begin{document}
\title[Dispersive estimates and Schr\"odinger operators on step 2 Stratified Lie Groups]{Dispersive estimates for the Schr\"odinger operator on step 2 Stratified Lie groups} \author[H. Bahouri]{Hajer Bahouri} \address[H. Bahouri]{LAMA UMR CNRS 8050, Universit\'e Paris EST\\ 61, avenue du G\'en\'eral de Gaulle\\ 94010 Cr\'eteil Cedex\\ France } \email{[email protected]} \author[C. Fermanian]{Clotilde~Fermanian-Kammerer} \address[C. Fermanian]{LAMA UMR CNRS 8050, Universit\'e Paris EST\\ 61, avenue du G\'en\'eral de Gaulle\\ 94010 Cr\'eteil Cedex\\ France} \email{[email protected]} \author[I. Gallagher]{Isabelle Gallagher}\address[I. Gallagher] {Institut de Math{\'e}matiques UMR 7586 \\
Universit{\'e} Paris Diderot (Paris 7) \\
B\^atiment Sophie Germain \\
Case 7012\\
75205 PARIS Cedex 13 \\
France} \email{[email protected]}
\begin{abstract}
The present paper is dedicated to the proof of dispersive estimates on stratified Lie groups of step 2, for the linear
Schr\"odinger equation involving a sublaplacian.
It turns out that the propagator behaves like a wave operator on a space of the same dimension~$p$ as the center of the group, and like a Schr\"odinger operator on a space of the same dimension~$k$ as the radical of the canonical skew-symmetric form, which suggests a decay rate $|t|^{-{k+p-1\over 2}}$.
In this article, we identify
a property of the canonical skew-symmetric form under which we establish optimal dispersive estimates with this rate. The relevance of this property is discussed through several examples.
\end{abstract} \thanks{} \maketitle
\section{Introduction}\label{intro} \subsection{Dispersive inequalities} \label{sec:up}
Dispersive inequalities for evolution equations (such as Schr\"odinger and wave equations) play a decisive role in the study of semilinear and quasilinear problems which appear in numerous physical applications. Proving dispersion amounts to establishing a decay estimate for the $L^\infty$ norm of the solutions of these equations at time $t$ in terms of some negative power of $t$ and the~$L^1$ norm of the data. In many cases, the main step in the proof of this decay in time relies on the application of a stationary phase theorem on an (approximate) representation of the solution. Combined with an abstract functional analysis argument known as the $TT^*$-argument, dispersion phenomena yield a range of estimates involving space-time Lebesgue norms. Those inequalities, called Strichartz estimates, have proved to be powerful in the study of nonlinear equations (for instance one can consult \cite{bcd} and the references therein).
\noindent In the $\mathop{\mathbb R\kern 0pt}\nolimits^d$ framework, dispersive inequalities have a long history beginning with the articles of Brenner~\cite{brenner1}, Pecher \cite{P1}, Segal \cite{segal} and Strichartz \cite{strichartz}. They were subsequently developed by various authors, starting with the paper of Ginibre and Velo~\cite{ginibrevelo} (for a detailed bibliography, we refer to~\cite{keeltao, tao} and the references therein). In \cite{bgx}, the authors generalize the dispersive estimates for the wave equation to the Heisenberg group $\mathop{\mathbb H\kern 0pt}\nolimits^d$ with an optimal rate of decay of order $ | t |^{- 1/2}$ (regardless of the dimension~$d$)
and prove that no dispersion occurs for the Schr\"odinger equation. In \cite{hiero}, optimal results are proved for the time behavior of the Schr\"odinger and wave equations on H-type groups: if~$p$ is the dimension of the center of the H-type group, the author establishes sharp dispersive inequalities for the wave equation solution (with a decay rate of~$ | t |^{- p/2}$) as well as for the Schr\"odinger equation solution (with a~$ | t |^{-(p-1)/2}$ decay). Compared with the $\mathop{\mathbb R\kern 0pt}\nolimits^d$ framework, there is an exchange in the rates of decay between the wave and the Schr\"odinger equations.
\noindent Strichartz estimates in other settings have been obtained in a number of works. One can first cite various results dealing with variable coefficient operators (see for instance \cite{kapitanski, smith1}) or studies concerning domains such as \cite{blp,ilp,ss}. One can also refer to the result concerning the full Laplacian on the Heisenberg group in \cite{furioli2}, works in the framework of the real hyperbolic spaces in \cite{AP, banica, tataru}, or in the framework of compact and noncompact manifolds in \cite{A, bd,bgt}; finally one can mention the quasilinear framework studied in \cite{bch, bch2, kr,st}, and the references therein.
\noindent In this paper our goal is to establish optimal dispersive estimates for the solutions of the Schr\"odinger equation on {$2$-step stratified} Lie groups. We shall emphasize in particular the key role played by the canonical skew-symmetric form in determining the rate of decay of the solutions. It turns out that the Schr\"odinger propagator on~$G$ behaves like a wave operator on a space of the same dimension as the center of $G$, and like a Schr\"odinger operator on a space of the same dimension as the radical of the canonical skew-symmetric form associated with the dual of the center. This unusual behavior of the Schr\"odinger propagator in the case of Lie algebras whose canonical skew-symmetric form is degenerate (known as Lie algebras which are not MW, see~\cite{MooreWolf},~\cite{MR} for example) makes the analysis of the explicit representations of the solutions tricky and gives rise to uncommon dispersive estimates. It will also appear from our analysis that the optimal rate of decay is not always in accordance with the dimension of the center: we shall exhibit examples of $2$-step stratified Lie groups with center of any dimension and for which no dispersion occurs for the Schr\"odinger equation. We shall actually highlight that the optimal rate of decay in the dispersive estimates for solutions to the Schr\"odinger equation is rather related to the properties of the canonical skew-symmetric form.
\subsection{Stratified Lie groups}\label{gradedliegroup}
Let us recall here some basic facts about stratified Lie groups (see~\cite{corwingreenleaf,folland, follandstein,
steinweiss} and the references therein for further details). A connected, simply connected nilpotent Lie group $G$ is
said to be stratified if its left-invariant Lie algebra~${\mathfrak g}$ (assumed real-valued and of finite dimension~$n$) is endowed with a vector space decomposition
$$
\displaystyle
{\mathfrak g}= \oplus_{1\leq k\leq \infty} \, {\mathfrak g}_k \, ,
$$
where all but finitely many of the ${\mathfrak g}_k$'s are $\{0\}$, such that $[{\mathfrak g}_1,{\mathfrak g}_{k}]= {\mathfrak g}_{k+1}$.
If there are~$p$ non zero ${\mathfrak g}_k$'s, then the group is said to be of step~$p$.
Via the exponential map
$$
{\rm exp} : {\mathfrak g} \rightarrow G
$$ which is in that case a diffeomorphism from ${\mathfrak g}$ to $G$, one identifies $G$ and ${\mathfrak g}$. It turns out that under this identification, the group law on $G$ (which is generally not commutative) provided by the Campbell-Baker-Hausdorff formula, $(x,y) \mapsto x \cdot y $ is a polynomial map. In the following we shall denote by~$\mathfrak z$ the center of~$G$ which is simply the last non zero~${\mathfrak g}_k$ and write
\begin{equation}\label{eq:Gdec}
G = \mathfrak v \oplus \mathfrak z \, ,
\end{equation}
where $\mathfrak v$ is any subspace of $G$ complementary to $\mathfrak z$.
\noindent The group $G$ is endowed with a smooth left invariant measure $\mu(x)$, the Haar measure, induced by the Lebesgue measure on ${\mathfrak g}$ and which satisfies the fundamental translation invariance property:
$$
\forall f \in L^1(G, d\mu) \, , \quad \forall x \in G \,, \quad \int_G f(y) d\mu(y) = \int_G f(x \cdot y)d\mu(y) \, .
$$ Note that the convolution of two functions $f$ and $g$ on $G$ is given by
\begin{equation} \label{convolutiondef}
f*g(x) := \int_G f(x \cdot y^{-1})g(y)d\mu(y) = \int_G f(y)g(y^{-1} \! \cdot x)d\mu(y)
\end{equation}
and as in the euclidean case we define Lebesgue spaces by $$
\|f\|_{L^p (G)} := \left( \int_G |f(y)|^p \: d \mu (y) \right)^\frac1p \, ,
$$
for $p\in[1,\infty[$, with the standard modification when~$p=\infty$.\\
\noindent Since $G$ is stratified, there is a natural family of dilations on ${\mathfrak g}$ defined for $t>0$ as follows: if~$X$ belongs to~$ {\mathfrak g}$, we can decompose~$X$ as~$\displaystyle X=\sum X_k$ with~$\displaystyle X_k\in {\mathfrak g}_k$, and then
$$
\delta_t X:=\sum t^{k} X_k \, .
$$
This allows to define the dilation $\delta_t $ on the Lie group $G$ via the identification by the exponential map:
$$\begin{array}{ccccc} & {\mathfrak g} &\build{\rightarrow}_{}^{\delta_t} & {\mathfrak g}&\\
{\small\rm exp}& \downarrow& & \downarrow& {\small\rm exp}\\
&G &\build{\rightarrow}_{ {\rm exp}\, \circ\, \delta_t \, \circ\, {\rm exp}^{-1}}^{}&G
\end{array}$$
To avoid heaviness, we shall still denote by $\delta_t$ the map ${\rm exp}\, \circ \delta_t \, \circ {\rm exp}^{-1}$.\\
\noindent Observe that the action of the left invariant vector fields $X_k$, for~$X_k$ belonging to~${\mathfrak g}_k$, changes the homogeneity in the following way:
$$
X_k (f \circ \delta_t) = t^{k} X_k (f )\circ \delta_t \, ,
$$
where by definition $\displaystyle X_k (f ) (y) := \frac d {ds} f \bigl(y \cdot {\rm exp} (s X_k)\bigr)_{|s=0}$ and the Jacobian of the dilation $\delta_t$ is $t^Q$ where~$\displaystyle{Q:=\sum_{1\leq k\leq \infty} k\, {\rm dim} \, {\frak g}_k}$ is called the homogeneous dimension of $G$:
\begin{equation}\label{homogenedim} \int_G f(\delta_t\,y) \,d\mu(y) = t^{-Q}\,\int_G f( y)\,d\mu(y) \, .
\end{equation} \noindent Let us also point out that there is a natural norm~$\rho$ on~$G$ which is homogeneous in the sense that it respects dilations: $G\ni x\mapsto \rho(x)$ satisfies $$ \forall x\in G,\;\;\rho(x^{-1})=\rho(x) \, ,\;\;\rho(\delta_tx)=t\rho(x) \, , \;{\rm and}\;\;\rho(x)=0\;\Longleftrightarrow\; x=0 \, . $$ We can define the Schwartz space~${\mathcal S}(G)$ as the set of smooth functions on~$G$ such that
for all~$\alpha$ in~${\mathbb N}^d$ and all~$p$ in~${\mathbb N}$, the function $x\mapsto \rho(x)^p {\mathcal X}^{\alpha}f(x)$ belongs to~$L^\infty(G)$,
where~${\mathcal X}^{\alpha}$ denotes a product of $|\alpha|$ left invariant vector fields. The Schwartz space~${\mathcal S}(G)$ has properties very similar to those of the Schwartz space~${\mathcal S}(\mathop{\mathbb R\kern 0pt}\nolimits^d)$, particularly density in Lebesgue spaces.
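Let us illustrate the homogeneous dimension with an elementary computation (recorded here for the reader's convenience): for a step~$2$ stratified group, the definition of~$Q$ gives
$$Q = {\rm dim}\, {\mathfrak g}_1 + 2\, {\rm dim}\, {\mathfrak g}_2 \, ;$$
in particular, for the Heisenberg group~$\mathop{\mathbb H\kern 0pt}\nolimits^d$ described in the examples below, $Q = 2d+2$.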
\subsection{The Fourier transform}\label{Fourier} The group $G$ being non commutative, its Fourier transform is defined by means of irreducible unitary representations. We devote this section to the introduction of the basic concepts that will be needed in the sequel. From now on, we assume that $G$ is a step~2 stratified Lie group, meaning~$ \mathfrak z = \mathfrak g_2$, and we denote ${\mathfrak v}={\mathfrak g}_1$ in~(\ref{eq:Gdec}). We choose a scalar product on~${\mathfrak g}$ such that~${\mathfrak v}$ and~${\mathfrak z}$ are orthogonal.
\subsubsection{Irreducible unitary representations}\label{defirreducible} Let us fix some notation, borrowed from~\cite{crs} (see also~\cite{corwingreenleaf} or~\cite{MR}). For any~$\lambda \in \mathfrak z^\star$ (the dual of the center~$ \mathfrak z$) we define a skew-symmetric bilinear form on $\mathfrak v$ by \begin{equation}\label{skw} \forall \, U,V \in \mathfrak v \, , \quad B(\lambda) (U,V):= \lambda([U,V]) \, .
\end{equation}
One can find a Zariski-open subset~$\Lambda$ of~$ \mathfrak z^\star$ such that
the number of distinct eigenvalues of~$B(\lambda)$ is maximum. We denote by~$k$ the dimension of the radical $\mathfrak r_\lambda$ of~$ B(\lambda)$. Since~$ B(\lambda)$ is skew-symmetric, the dimension of the orthogonal complement of $\mathfrak r_\lambda$ in $\mathfrak v$ is an even number which we shall denote by~$2d$. Therefore, there exists an orthonormal basis
$$\big (P_1(\lambda) , \dots ,P_d(\lambda), Q_1(\lambda) , \dots ,Q_d(\lambda),R_1(\lambda),\dots,R_k(\lambda)\big )$$
such that the matrix of~$B(\lambda)$ takes the following form
$$ \left( \begin{array}{ccccccccc} 0 &\dots & 0& \eta_1(\lambda)& \dots& 0 & 0 & \cdots & 0 \\ \vdots &\ddots & \vdots& \vdots& \ddots& \vdots & \vdots & \ddots& \vdots \\ 0 &\dots & 0&0& \dots& \eta_d(\lambda) & 0 & \cdots & 0\\ - \eta_1(\lambda) &\dots & 0&0& \dots& 0& 0 & \cdots & 0\\ \vdots &\ddots & \vdots& \vdots& \ddots& \vdots&\vdots& \ddots& \vdots\\ 0 &\dots & - \eta_d(\lambda)&0& \dots& 0& 0 & \cdots & 0\\ 0 & \dots & 0 & 0 & \dots & 0& 0 & \cdots & 0 \\ \vdots &\ddots & \vdots& \vdots& \ddots& \vdots&\vdots& \ddots& \vdots\\ 0 &\dots &0&0& \dots& 0& 0 & \cdots & 0 \end{array} \right) \, ,$$ where each~$\eta_j(\lambda)>0 $ is smooth and homogeneous of degree one in~$\lambda = (\lambda_1,\dots, \lambda_p)$ and the basis vectors are chosen to depend smoothly on~$\lambda$ in~$\Lambda$. Decomposing~$ \mathfrak v$ as $$
\mathfrak v = \mathfrak p_\lambda + \mathfrak q_\lambda +\mathfrak r_\lambda $$
with
$$
\begin{aligned}
\mathfrak p_\lambda:= \mbox{Span} \, \big (P_1(\lambda) , \dots ,P_d(\lambda) \big) \, , & \quad \mathfrak q_\lambda:= \mbox{Span} \, \big (Q_1(\lambda) , \dots ,Q_d(\lambda)\big)\, ,\quad \mathfrak r_\lambda := &\mbox{Span} \, \big (R_1(\lambda) , \dots ,R_k(\lambda) \big)
\end{aligned} $$ any element $V \in \mathfrak v $ will be written in the following as~$P+Q+R$ with~$P\in \mathfrak p_\lambda$, $Q \in \mathfrak q_\lambda$ and~$R\in\mathfrak r_\lambda$. We then introduce irreducible unitary representations of~$G$ on~$L^2( \mathfrak p_\lambda)$: \begin{equation}\label{defpilambda} u^{\lambda,\nu}_{X} \phi(\xi) := {\rm e}^{-i\nu( R)-i\lambda (Z+ [ \xi +\frac12 P , Q])} \phi (P+ \xi) \, , \;\lambda\in\mathfrak z^*,\;\nu\in\mathfrak r^*_\lambda \, , \end{equation} for any~$x=\exp (X)\in G$ with~$X = X(\lambda,x) :=\big (P(\lambda,x),Q(\lambda,x),R(\lambda,x),Z(x) \big)$ and~$\phi \in L^2( \mathfrak p_\lambda)$. In order to shorten notation, we shall omit the dependence on~$(\lambda,x)$ whenever there is no risk of confusion.
\subsubsection{The Fourier transform} In contrast with the euclidean case, the Fourier transform is defined on the bundle~$\mathfrak r(\Lambda)$ above~$\Lambda$ whose fibre above $\lambda\in\Lambda$ is $\mathfrak r^* _\lambda\sim \mathop{\mathbb R\kern 0pt}\nolimits^k$. It is valued in the space of bounded operators on~$L^2( \mathfrak p_\lambda)$. More precisely, the Fourier transform of a function~$f$ in~$L^1(G)$ is defined as follows: for any~$(\lambda,\nu) \in\mathfrak r(\Lambda)$ $$ {\mathcal F}(f)(\lambda,\nu):= \int_G f(x) u^{\lambda,\nu}_{X(\lambda,x) } \, d\mu(x) \, . $$ Note that for any~$(\lam,\nu)$, the map~$u^{\lambda,\nu}_{X(\lambda,x)}$
is a group homomorphism from~$G$ into the group~$U (L^2( \mathfrak p_\lambda))$ of unitary operators of~$L^2( \mathfrak p_\lambda)$, so functions~$f$ of~$L^1(G)$ have a Fourier transform~$\left({\mathcal F}(f)(\lambda,\nu)\right)_{\lambda,\nu}$ which is a bounded family of bounded operators on~$L^2( \mathfrak p_\lambda)$. One may check that the Fourier transform exchanges convolution, whose definition is recalled in~(\ref{convolutiondef}), and composition: \begin{equation}\label{fourconv}
{\mathcal F}( f \star g )( \lam,\nu ) = {\mathcal F}(f) ( \lam,\nu )\circ{\mathcal F} (g)( \lam,\nu ) \, .
\end{equation} Besides, the Fourier transform can be extended to an isometry from~$L^2(G)$ onto the Hilbert space of two-parameter families~$ A = \{ A (\lam,\nu ) \}_{(\lambda,\nu) \in\mathfrak r(\Lambda)}$
of operators on~$L^2( \mathfrak p_\lambda)$ which are Hilbert-Schmidt for almost every~$(\lambda,\nu) \in\mathfrak r(\Lambda)$, with~$\|A (\lam,\nu )\|_{HS (L^2( \mathfrak p_\lambda))}$ measurable and with norm
\[ \|A\| := \left( \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda}
\|A (\lam,\nu )\|_{HS (L^2( \mathfrak p_\lambda))}^2 |{\mbox {Pf}} (\lambda) |d\nu \, d\lam \right)^{\frac{1}{2}}<\infty \, ,\]
where~$|{\mbox {Pf}} (\lambda) | := \prod_{j=1}^d \eta_j(\lambda) \, $ is the Pfaffian of~$B(\lambda)$.
We have the following Fourier-Plancherel formula: there exists a constant $\kappa>0$ such that
\begin{equation}
\label{Plancherelformula} \int_G |f(x)|^2 \, dx
= \kappa \, \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} \|{\mathcal F}(f)(\lambda,\nu)\|_{HS(L^2( \mathfrak p_\lambda))}^2 |{\mbox {Pf}} (\lambda) | \, d\nu\, d\lambda \,. \end{equation} Finally, we have an inversion formula as stated in the following proposition which is proved in the Appendix page~\pageref{appendixinversion}.
\begin{proposition}\label{inversioninS} There exists $\kappa>0$ such that for~$ f \in {\mathcal S}(G)$ and for almost all~$x \in G$ the following inversion formula holds: \begin{equation} \label{inversionformula} f(x)
= \kappa \, \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} {\rm{tr}} \, \Big((u^{\lambda,\nu}_{X(\lambda,x)})^\star {\mathcal F}f(\lambda,\nu) \Big)\, |{\mbox {Pf}} (\lambda) |\, d\nu\,d\lambda \,.\end{equation}
\end{proposition}
\subsubsection{The sublaplacian} \label{freq}
Let~$(V_1,\dots,V_{m})$ be an orthonormal basis of ${\mathfrak g}_1$, then the sublaplacian on $G$ is defined by \begin{equation} \label{DeltaG}
\Delta_{G}:= \sum_{j = 1}^{m } V_j^2.
\end{equation} It is a self-adjoint operator which is independent of the orthonormal basis ~$(V_1,\dots,V_{m})$, and homogeneous of degree $2$ with respect to the dilations in the sense that :
$$
\delta_{t}^{-1}\Delta_G \, \delta_t = t^2 \Delta_G \, .
$$ To write its expression in Fourier space, we consider the basis of Hermite functions $(h_n)_{n\in{\mathbf N}}$, normalized in~$L^2(\mathop{\mathbb R\kern 0pt}\nolimits) $ and satisfying for all real numbers~$\xi $: $$ h''_n(\xi)-\xi^2 h_n(\xi)= -(2n+1) h_n(\xi) \, . $$ Then, for any multi-index~$\alpha \in {\mathbb N}^d$, we define the functions $h_{\alpha,\eta(\lambda)}$ by \begin{equation}\label{defhn} \begin{aligned}
\forall \, \Xi = (\xi_1,\dots, \xi_d) \in \mathop{\mathbb R\kern 0pt}\nolimits^d \, , \quad h_{\alpha,\eta(\lambda)} (\Xi) & :=\prod_{j=1}^d h_{\alpha_j,\eta_j(\lambda)}(\xi_j) \quad \mbox{and} \\
\forall (n,\beta) \in {\mathbb N}\times\mathop{\mathbb R\kern 0pt}\nolimits^+ \, , \forall \xi \in \mathop{\mathbb R\kern 0pt}\nolimits \, , \quad
h_{n,\beta } (\xi) & := \beta^{\frac 1 4} h_{n} \big( \beta^{\frac 1 2} \xi \big) \, . \end{aligned} \end{equation} The sublaplacian~$\Delta_G$ defined in~(\ref{DeltaG}) satisfies \begin{equation}\label{formulafourierdelta}
{\mathcal F}(- \Delta_G f) (\lambda,\nu) = {\mathcal F}( f) (\lambda,\nu) \left(H(\lambda)+|\nu|^2\right)\, , \end{equation}
where $|\nu|$ denotes the euclidean norm of the vector $\nu$ in $\mathop{\mathbb R\kern 0pt}\nolimits^k$ and~$ H(\lambda)$ is the diagonal operator defined on $L^2(\mathop{\mathbb R\kern 0pt}\nolimits^d)$ by $$ H(\lambda) h_{\alpha,\eta(\lambda)} = \sum_{j =1}^d (2 \alpha_j + 1) \eta_j(\lambda) \, h_{\alpha,\eta(\lambda)}\, . $$ In the following we shall denote the ``frequencies" associated with~$P_j^2(\lambda) + Q_j^2(\lambda) $ by \begin{equation}\label{eq:freqxy} \zeta_j (\alpha, \lambda) := (2 \alpha_j + 1) \eta_j(\lambda) \, , \quad (\alpha, \lambda) \in {\mathbb N}^d \times \Lambda \, , \end{equation} and those associated with~$H(\lambda)$ by \begin{equation}\label{eq:freq} \zeta (\alpha, \lambda) := \sum_{j =1}^d \zeta_j (\alpha, \lambda) \, , \quad (\alpha, \lambda) \in {\mathbb N}^d \times \Lambda \, . \end{equation}
Note that~$\Delta_G $ is directly related to the harmonic oscillator via~$H(\lambda)$ since eigenfunctions associated with the eigenvalues~$\zeta (\alpha, \lambda) $ are the products of 1-dimensional Hermite functions. Also observe that~$\zeta (\alpha, \lambda)$ is smooth and homogeneous of degree one in~$\lambda = (\lambda_1,\dots, \lambda_p)$. Moreover, $\zeta(\alpha,\lambda)=0$ if and only if $B(\lambda)=0$, or equivalently by~(\ref{skw}), $\lambda=0$.
\noindent Notice also that there is a difference in homogeneity in the variables $\lambda$ and $\nu$. Namely, in the variable~$\nu$, the sublaplacian acts as in the euclidean case (homogeneity $2$) while in $\lambda$, it has the homogeneity $1$ of a wave operator.
\noindent Finally, for any smooth function $\Phi$, we define the operator $\Phi\left(-\Delta_{G}\right)$ by the formula
\begin{equation}\label{def} {\mathcal F} \big(\Phi(- \Delta_G) f\big)(\lam,\nu):= \Phi(H(\lambda)+|\nu|^2) {\mathcal F} ( f )(\lam,\nu) \, ,\end{equation} which also reads $$
{\mathcal F} \big (\Phi(- \Delta_G) f\big)(\lam,\nu) h_{\alpha,\eta(\lambda)} :=
\Phi\left(|\nu|^2+\zeta(\alpha,\lambda)\right) {\mathcal F}(f)(\lam,\nu) h_{\alpha,\eta(\lambda)} \,, $$ for all $(\lambda,\nu)\in \mathfrak r(\Lambda)$ and $\alpha\in{\mathbb N}^d.$ \\
\subsubsection{Strict spectral localization} Let us introduce the following notion of spectral localization, which we shall call strict spectral localization and which will be very useful in the following.
\begin{definition}\label{def:strispecloc} A function $f$ belonging to $L^1(G)$ is said to be strictly spectrally localized in a set~${\mathcal C}\subset \mathop{\mathbb R\kern 0pt}\nolimits$ if there exists a smooth function $\theta$, compactly supported in ${\mathcal C}$, such that for all~$ 1\leq j\leq d$, \begin{equation}\label{strictloceq} {\mathcal F}(f)(\lambda,\nu)={\mathcal F}(f)(\lambda,\nu) \, \theta \big ((P_j^2+Q_j^2)(\lambda ) \big) \, ,\;\;\forall (\lambda,\nu)\in \mathfrak r(\Lambda) \, . \end{equation} \end{definition} \begin{remark} {\rm One could expect the notion of spectral localization to relate to the Laplacian instead of each individual vector field~$P_j^2+Q_j^2$, assuming rather the less restrictive condition $$ {\mathcal F}(f)(\lambda,\nu)={\mathcal F}(f)(\lambda,\nu) \, \theta \big (H(\lambda ) \big) \, ,\;\;\forall (\lambda,\nu)\in \mathfrak r(\Lambda) \, . $$
The choice we make here is more restrictive due to the anisotropic context (namely the fact that~$\eta_j(\lambda)$ depends on~$j$): in the case of the Heisenberg group or more generally H-type groups, the notion of ``strict spectral localization'' in a ring~${\mathcal C}$ of~$\mathop{\mathbb R\kern 0pt}\nolimits^p$ actually coincides with the more usual definition of ``spectral localization" since as recalled in the next paragraph~$\eta_j(\lambda) = 4 |\lambda|$ (for a complete presentation and more details on spectrally localized functions, we refer the reader to~\cite{bg, bfg, bfg2}). Assumption~(\ref{strictloceq})
guarantees a lower bound, which roughly states that if~${\mathcal F}(f)(\lambda,\nu) h_{\alpha,\eta(\lambda)}$ is non zero, then \begin{equation}\label{lowerbound} \forall j\in\{1,\dots,d\},\;\; (2 \alpha_j + 1) \eta_j(\lambda) \geq c >0 \, , \end{equation} hence each~$\eta_j$ must be bounded away from zero, rather than the sum over~$j$. These lower bounds are important ingredients of the proof (see Section~\ref{prooflemmas}). } \end{remark}
\subsection{Examples}
Let us give a few examples of well-known stratified Lie groups with a step 2 stratification. Note that nilpotent Lie groups which are connected, simply connected and whose Lie algebra admits a step 2 stratification are called {Carnot groups}.
\subsubsection{The Heisenberg group} The Heisenberg group $\mathop{\mathbb H\kern 0pt}\nolimits^d$ is defined as the space $\mathop{\mathbb R\kern 0pt}\nolimits^{2d+1}$ whose elements can be written $w = (x,y,s) $ with $(x,y) \in
\mathop{\mathbb R\kern 0pt}\nolimits^{d} \times \mathop{\mathbb R\kern 0pt}\nolimits^{d} $, endowed with the following product law:
\[ (x,y,s)\cdot (x',y',s') = (x+x',y+y',s+s'- 2(x\mid y') + 2(y\mid x')), \] where~$(\cdot\mid \cdot)$ denotes the euclidean scalar product on~$\mathop{\mathbb R\kern 0pt}\nolimits^d$. In that case the center consists in the points of the form $(0,0,s)$ and is of dimension 1. The Lie algebra of left invariant vector fields is generated by
$$
X_j\! :=\!\partial_{x_j} + 2 y_j \partial_s \, ,\!\!\!\quad Y_{j} \! :=\! \partial_{y_j} - 2 x_j \partial_s\!\!\!\quad\hbox{with}\!\!\!\quad 1\leq j\leq d \, , \!\!\! \quad
\hbox{and}\!\!\!\quad
S := \partial_s=\frac 1 4[Y_j,X_j ] \, .
$$
The canonical skew-symmetric form~$B(\lambda)(U,V)$ defined in~(\ref{skw}) associated with the frequencies $\lambda\in\mathop{\mathbb R\kern 0pt}\nolimits^*$ is proportional to~$\lambda$, since~$[U,V]$ is proportional to~$\partial_s$.
Its radical reduces to $\{0\}$ with $\Lambda=\mathop{\mathbb R\kern 0pt}\nolimits^*$ and~$\eta_j(\lambda)=4 \, |\lambda|$ for all $j\in\{1,\dots,d\}$. Note in particular that strict spectral localization and spectral localization are equivalent.
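Let us record a straightforward check (added here to situate the Heisenberg group with respect to Assumption~\ref{keyp} below): with Notation~(\ref{eq:freq}), one finds
$$\zeta(\alpha,\lambda)=\sum_{j=1}^d (2\alpha_j+1)\, 4\,|\lambda| = 4\,(2|\alpha|+d)\,|\lambda| \, ,$$
which is linear in $\lambda$ on each connected component of $\Lambda=\mathop{\mathbb R\kern 0pt}\nolimits^*$; hence $D^2_\lambda \zeta(\alpha,\lambda)=0$, of rank $0=p-1$ since $p=1$. Theorem~\ref{dispgrad} below then predicts no time decay (here $k=0$ and $p=1$), in accordance with the absence of dispersion established in~\cite{bgx}.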
\subsubsection{ H-type groups} These groups are canonically isomorphic to $\mathbb R^{m+p}$, and are a multidimensional version of the Heisenberg group. The group law is of the form \begin{equation*} \quad \quad \quad (x^{(1)},x^{(2)}) \cdot (y^{(1)},y^{(2)}):=\begin{pmatrix} x_j^{(1)}+y_j^{(1)},\,\,\,j=1,...,m \\x_j^{(2)}+y_j^{(2)}+\frac12 \langle x^{(1)}, U^{(j)}y^{(1)} \rangle,\,\,\,j=1,...,p \end{pmatrix} \end{equation*} where $U^{(j)}$ are $m \times m$ linearly independent orthogonal skew-symmetric matrices satisfying the property $$U^{(r)}U^{(s)}+U^{(s)}U^{(r)}=0$$ for every $r,s \in \left \{1,...,p \right \}$ with $r \neq s$. In that case the center is of dimension~$p$ and may be identified with~$\mathop{\mathbb R\kern 0pt}\nolimits^p$ and the radical of the canonical skew-symmetric form associated with the frequencies $\lambda$ is again $\{0\}$.
For example the Iwasawa subgroup of semi-simple Lie groups of split rank one (see~\cite{koranyi2}) is of this type. On H-type groups, $m$ is an even number which we denote by $2\ell$ and the Lie algebra of left invariant vector fields is spanned by the following vector fields, where we have written~$ z=(x,y) $ in~$ \mathop{\mathbb R\kern 0pt}\nolimits^{\ell} \times \mathop{\mathbb R\kern 0pt}\nolimits^{\ell}$: for~$ j$ running from~1 to~$\ell$ and~$k$ from~$ 1$ to~$p$,
\[
X_j\! :=\!\partial_{x_j} + \frac 1 2 \sum^{p}_{k=1}\sum^{2\ell}_{l=1}z_l \, U_{l,j}^{(k)}\partial_{s_k} \, ,\!\!\!\quad Y_{j} \!:=\! \partial_{y_j} +\frac 1 2 \sum^{p}_{k=1}\sum^{2\ell}_{l=1}z_l \,U_{l,j+\ell}^{(k)}\partial_{s_k}
\quad \hbox{and}\, \quad \partial_{s_k}
.\] In that case, we have $\Lambda=\mathop{\mathbb R\kern 0pt}\nolimits^p\setminus\{0\}$ with $\eta_j(\lambda)=\sqrt{\lambda^2_1+\cdots+\lambda^2_p}$ for all $j \in \{1,\dots,\ell\}$ (here again, strict spectral localization and spectral localization are equivalent).
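With Notation~(\ref{eq:freq}), a short verification (included for the reader's convenience) shows how Assumption~\ref{keyp} below is satisfied in this case: one has
$$\zeta(\alpha,\lambda)=(2|\alpha|+\ell)\,|\lambda| \, , \quad |\lambda|=\sqrt{\lambda_1^2+\cdots+\lambda_p^2} \, ,$$
and the Hessian of the euclidean norm,
$$D^2_\lambda |\lambda| = \frac{1}{|\lambda|}\Big({\rm Id} - \frac{\lambda\,\lambda^T}{|\lambda|^2}\Big) \, ,$$
has rank $p-1$ on $\Lambda=\mathop{\mathbb R\kern 0pt}\nolimits^p\setminus\{0\}$, so that ${\rm rank}\, D^2_\lambda \zeta(\alpha,\lambda)=p-1$ for every $\alpha$.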
\subsubsection{Diamond groups} These groups, which occur in crystal theory (for more details, consult \cite{Ludwig, Poguntke}), are of the type~$\Sigma \ltimes \mathop{\mathbb H\kern 0pt}\nolimits^d $ where~$\Sigma$ is a connected Lie group acting smoothly on~$ \mathop{\mathbb H\kern 0pt}\nolimits^d $. One can find examples for which the radical of the canonical skew-symmetric form is of any dimension $k$, $0\leq k\leq d$. For example, one can take for $\Sigma$ the~$k$-dimensional torus, acting on $\mathop{\mathbb H\kern 0pt}\nolimits^d$ by $$ \theta( w):=(\theta\cdot z,s):=\left({\rm e}^{i\theta_1}z_1,\dots,{\rm e}^{i\theta_k}z_k,z_{k+1},\dots,z_d,s\right),\;\;w=(z,s) $$ where the element $\theta=(\theta_1,\dots,\theta_k )$ corresponds to the element $\left({\rm e}^{i\theta_1},\dots,{\rm e}^{i\theta_k}\right)$ of~${\mathbb T}^k$. Then, the product law on $G={\mathbb T}^k\ltimes\mathop{\mathbb H\kern 0pt}\nolimits^d$ is given by $$(\theta,w)\cdot (\theta',w')=\big(\theta+\theta',w.(\theta( w'))\big )\, ,$$ where $w.(\theta( w'))$ denotes the Heisenberg product of $w$ by $\theta( w')$. As a consequence, the center of $G$ is of dimension $1$ since it consists of the points of the form $(0,0,s)$ for $s\in\mathop{\mathbb R\kern 0pt}\nolimits$. Let us choose for simplicity $k=d=1$; the algebra of left-invariant vector fields is generated by the vector fields $\partial_\theta$, $\partial_s$, $\Gamma_{\theta,x}$ and $\Gamma_{\theta,y}$ where
It is not difficult to check that the radical of $B(\lambda)$
is of dimension~$1$. In the general case, where $k\leq d$, the algebra of left-invariant vector fields is generated by the vector fields $\partial_s$, the $2 (d-k)$ vectors
$$X_{\ell}=\partial_{x_\ell}+2y_{\ell} \partial_s,\;\;Y_{\ell}=\partial_{y_\ell}-2x_\ell\partial_s,$$
and the $3k$ vectors defined for $1\leq j\leq k$ by: $\partial_{\theta_j}$, $\Gamma_{\theta_j,x_j}$ and $\Gamma_{\theta_j,y_j}$ where \begin{eqnarray*} \Gamma_{\theta_j,x_j} &= &{\rm cos}\, \theta_j \partial_{x_j} +{\rm sin}\, \theta_j \partial_{y_j} +2(y_j{\rm cos}\,\theta_j-x_j{\rm sin} \theta_j)\partial_s,\\ \Gamma_{\theta_j,y_j} &= &-{\rm sin}\, \theta_j \partial_{x_j} +{\rm cos}\, \theta_j \partial_{y_j} -2(y_j{\rm sin}\,\theta_j +x_j{\rm cos }\theta_j)\partial_s. \end{eqnarray*} This provides an example with a radical of dimension~$k$.
\subsubsection{The tensor product of Heisenberg groups} Consider $\mathop{\mathbb H\kern 0pt}\nolimits^{d_1} \otimes \mathop{\mathbb H\kern 0pt}\nolimits^{d_2}$ the set of elements~$(w_1,w_2) $ in~$\mathop{\mathbb H\kern 0pt}\nolimits^{d_1} \otimes \mathop{\mathbb H\kern 0pt}\nolimits^{d_2}$, which can be written~$(w_1,w_2)= (x_1,y_1,s_1,x_2,y_2,s_2)$ in $\mathop{\mathbb R\kern 0pt}\nolimits^{2d_1+1} \times \mathop{\mathbb R\kern 0pt}\nolimits^{2d_2+1}$, equipped with the law of product:
\[ (w_1,w_2)\cdot (w_1',w_2') = (w_1\cdot w_1',w_2 \cdot w_2'),
\] where $w_1\cdot w_1'$ and $w_2 \cdot w_2'$ denote respectively the product in $\mathop{\mathbb H\kern 0pt}\nolimits^{d_1}$ and $ \mathop{\mathbb H\kern 0pt}\nolimits^{d_2}$. Clearly $\mathop{\mathbb H\kern 0pt}\nolimits^{d_1} \otimes \mathop{\mathbb H\kern 0pt}\nolimits^{d_2}$ is a step~2 stratified Lie group with center of dimension $2$ and radical index null. Moreover, for $\lambda=(\lambda_1,\lambda_2)$ in the dual of the center, the canonical skew bilinear form $B(\lambda)$ has radical $\{0\}$ with $\Lambda=\mathop{\mathbb R\kern 0pt}\nolimits^*\times\mathop{\mathbb R\kern 0pt}\nolimits^*$, and one has~$\eta_1(\lambda)= 4 \, |\lambda_1|$ and $\eta_2(\lambda)= 4 \, |\lambda_2|$. In that case, strict spectral localization is a more restrictive condition than spectral localization. Indeed, if $f$ is spectrally localized, one has $\lambda_1\not=0$ {\bf or} $\lambda_2 \neq 0$ on the support of ${\mathcal F}(f)(\lambda)$, while one has $\lambda_1\not=0$ {\bf and} $\lambda_2 \neq 0$ on the support of ${\mathcal F}(f)(\lambda)$ if $f$ is strictly spectrally localized.
\subsubsection{ Tensor product of H-type groups} The group $\mathbb R^{m_1+p_1} \otimes \mathbb R^{m_2+p_2}$ is easily verified to be a step~2 stratified Lie group with center of dimension $p_1+p_2$, a radical index null and a skew bilinear form~$B(\lambda)$ defined on $\mathop{\mathbb R\kern 0pt}\nolimits^{m_1+m_2}$ with $m_1=2\ell_1$ and $m_2=2\ell_2$. The Zariski open set associated with $B$ is given by~$\Lambda=(\mathop{\mathbb R\kern 0pt}\nolimits^{p_1}\setminus\{0\})\times (\mathop{\mathbb R\kern 0pt}\nolimits^{p_2}\setminus\{0\})$ and for $\lambda=(\lambda_1,\cdots,\lambda_{p_1+p_2} )$, we have \begin{equation} \label{H1} \begin{aligned} \eta_j(\lambda) &= \sqrt{\lambda^2_1+\dots+\lambda^2_{p_1} }, \quad \mbox{for all} \quad j \in \{1,\dots,\ell_1\} \quad \mbox{and} \\ \eta_j (\lambda) &= \sqrt{\lambda^2_{p_1+1}+\dots+\lambda^2_{p_1+p_2}} \quad \mbox{for all} \quad j \in \{\ell_1+1,\dots,\ell_1+\ell_2\}. \end{aligned} \end{equation}
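Here again, a direct computation (added for the reader's convenience) shows that Assumption~\ref{keyp} below fails as soon as $p_1,p_2\geq 1$: writing $\lambda=(\lambda^{(1)},\lambda^{(2)})\in\mathop{\mathbb R\kern 0pt}\nolimits^{p_1}\times\mathop{\mathbb R\kern 0pt}\nolimits^{p_2}$, \eqref{H1} gives
$$\zeta(\alpha,\lambda)=(2|\alpha^{(1)}|+\ell_1)\,|\lambda^{(1)}|+(2|\alpha^{(2)}|+\ell_2)\,|\lambda^{(2)}| \, ,$$
so that $D^2_\lambda\zeta(\alpha,\lambda)$ is block diagonal with blocks of ranks $p_1-1$ and $p_2-1$, hence of total rank $p-2<p-1$, in accordance with the rate $|t|^{-(p_1+p_2-2)/2}$ provided by Corollary~\ref{cor} below.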
\subsection{Main results}
The purpose of this paper is to establish optimal dispersive inequalities for the linear Schr\"odinger equation on {step 2 stratified} Lie groups associated with the sublaplacian. In view of~(\ref{formulafourierdelta}) and the fact that the ``frequencies"~$ \zeta (\alpha,\lambda)$ associated with~$H(\lambda)$ given by \eqref{eq:freq} are homogeneous of degree one in~$\lambda$, the Schr\"odinger operator on~$G$ behaves
like a wave operator on a space of the same dimension~$p$ as the center of $G$, and like a Schr\"odinger operator on a space of the same dimension~$k$ as the radical of the canonical skew-symmetric form. By comparison with the classical dispersive estimates, the expected result would be a dispersion phenomenon with an optimal rate of decay of order $ | t |^{- \frac{k+p-1} 2}$. However, as will be seen through various examples, this anticipated rate is not always achieved. To reach this maximum rate of dispersion, we require a condition on~$ \zeta (\alpha,\lambda)$.
\begin{assumption}\label{keyp} For each multi-index~$\alpha $ in~$ {\mathbb N}^d$, the Hessian matrix of the map~$\lambda \mapsto \zeta (\alpha,\lambda)$ satisfies $$ {\rm rank } \,D^2_\lambda \zeta (\alpha,\lambda) = p-1 $$ where~$p$ is the dimension of the center of~$G$. \end{assumption}
\begin{remark} {\rm As was observed in Paragraph {\rm\ref{freq}}, $\zeta (\alpha,\lambda)$ is a smooth function, homogeneous of degree one on $\Lambda$. By homogeneity arguments, one therefore has~$D_\lambda^2 \zeta (\alpha,\lambda) \lambda = 0$. It follows that there always holds $$ {\rm rank } \,D^2_\lambda \zeta (\alpha,\lambda) \leq p-1 \, , $$ hence Assumption~\ref{keyp} may be understood as a maximal rank property. } \end{remark}
\noindent Let us now present the dispersive inequality for the Schr\"odinger equation. Recall that the linear Schr\"odinger equation writes as follows on $G$: \begin{equation}\label{eq:sh} \left\{\begin{array}{l} \left(i\partial_t - \Delta_{G}\right) f=0\\
f_{|t=0}=f_0\, , \end{array}\right. \end{equation} where the function $f$ with complex values depends on $(t,x) \in \mathop{\mathbb R\kern 0pt}\nolimits \times G$.
\begin{theorem}\label{dispgrad} Let $G$ be a {step {\rm2} stratified} Lie group with center of dimension $p$ with $1\leq p < n$ and radical index $k$. Assume that Assumption~{\rm\ref{keyp}} holds. A constant $C$ exists such that if $f_0$ belongs to~$L^1(G)$ and is strictly spectrally localized in a ring of~$\mathop{\mathbb R\kern 0pt}\nolimits$ in the sense of Definition~{\rm\ref{def:strispecloc}}, then the associate solution~$f$ to the Schr\"odinger equation \refeq{eq:sh} satisfies \begin{equation}\label{eq:gradeddispS}
\|f(t, \cdot )\|_{ L^\infty(G)} \leq \frac {C} {| t |^{ \frac{k}{2}}(1+ | t |^{\frac {p- 1} {2 }} )} \|f_0\|_{ L^1(G)} \, , \end{equation} for all $t\neq 0$ and the result is sharp in time. \end{theorem}
\noindent The fact that a spectral localization is required in order to obtain the dispersive estimates is not surprising. Indeed, recall that in the $\mathop{\mathbb R\kern 0pt}\nolimits^d$ case for instance, the dispersive estimate for the Schr\"odinger equation derives immediately (without any spectral localization assumption) from the fact that the solution $u(t)$ to the free Schr\"odinger equation on $\mathop{\mathbb R\kern 0pt}\nolimits^d$ with Cauchy data $u_0$ writes for~$t \neq 0$
$$ u(t, \cdot ) = u_0 *\frac 1 {(-2i \pi t)^{\frac {d}2} } {\rm e}^{-i \frac {|\cdot|^2}{4 t}} \,,$$ where $*$ denotes the convolution product in $\mathop{\mathbb R\kern 0pt}\nolimits^d$ (for a detailed proof of this fact, see for instance Proposition 8.3 in \cite{bcd}). However proving dispersive estimates for the wave equation in $\mathop{\mathbb R\kern 0pt}\nolimits^d$ requires more elaborate techniques (including oscillating integrals) which involve an assumption of spectral localization in a ring. In the case of a {step 2 stratified} Lie group $G$, the main difficulty arises from the complexity of the expression of Schr\"odinger propagator that mixes a wave operator behavior with that of a Schr\"odinger operator. This explains on the one hand the decay rate in Estimate \eqref{eq:gradeddispS} and on the other hand the hypothesis of strict spectral localization.
\noindent Let us now discuss Assumption~\ref{keyp}. As mentioned above, there is no dispersion phenomenon for the Schr\"odinger equation on the Heisenberg group $\mathop{\mathbb H\kern 0pt}\nolimits^d$ (see~\cite{bgx}). Actually the same holds for the tensor product of Heisenberg groups $\mathop{\mathbb H\kern 0pt}\nolimits^{d_1} \otimes \mathop{\mathbb H\kern 0pt}\nolimits^{d_2}$, whose center is of dimension $p=2$ and radical index null, and more generally for 2 step stratified Lie groups which are decomposable into non trivial 2 step stratified Lie groups: indeed we derive from Theorem~\ref{dispgrad} the following corollary. \begin{corollary}\label{cor} Let $G=\otimes_{1\leq m\leq r} G_m$ be a decomposable, $2$ step stratified Lie group
where the groups~$G_m$ are non trivial $2$-step stratified Lie groups satisfying Assumption~{\rm\ref{keyp}}, of radical index~$k_m$ and with centers of dimension $p_m$. Then the dispersive estimates holds with rate~$|t|^{-q}$, $$q:= {1\over 2}\sum_{1\leq m\leq r} \left(k_m+p_m-1\right)={1\over 2} (k+p-r) \, ,$$ where $p$ is the dimension of the center of $G$ and $k$ its radical index. Besides, this rate is optimal. \end{corollary}
\noindent Corollary~\ref{cor} is a direct consequence of Theorem~\ref{dispgrad} and the simple observation that $\Delta_G= \otimes_{1\leq m\leq r} \Delta_{G_m}.$ This result applies for example to the tensor product of Heisenberg groups, for which there is no dispersion, and to the tensor product of H-type groups $\mathbb R^{m_1+p_1} \otimes \mathbb R^{m_2+p_2}$ for which the dispersion rate is $ t^{- (p_1+p_2-2)/2}$ (see~\cite{hiero}).
Corollary~\ref{cor} therefore shows that it can happen that the ``best'' rate of decay~$ | t |^{-(k+p-1)/2}$ is not reached, in particular for decomposable Lie groups. This suggests that Assumption \ref{keyp} could be related to decomposability.
\noindent
More generally, a large class of groups which do not satisfy Assumption~\ref{keyp} is given by step~2 stratified Lie groups~$G$ for which~$\zeta(0, \lambda)$ is a linear form on each connected component of the Zariski-open subset $\Lambda$. Of course, the Heisenberg group and any tensor product of Heisenberg groups is of that type. We then have the following result, which illustrates that there exist examples of groups without any dispersion which do not satisfy Assumption~\ref{keyp}.
\begin{proposition}\label{remnodisp} Consider a step~$2$ stratified Lie group~$G$ whose radical index is null and for which~$\zeta(0, \lambda)$ is a linear form on each connected component of the Zariski-open subset $\Lambda$. Then, there exists $f_0\in{\mathcal S}(G)$, $x\in G$ and $c_0>0$ such that
$$\forall t\in\mathop{\mathbb R\kern 0pt}\nolimits^+,\;\;|{\rm e}^{-it\Delta_G}f_0(x)|\geq c_0.$$ \end{proposition}
\noindent Finally we point out that the dispersive estimate given in Theorem~\ref{dispgrad} can be regarded as a first step towards space-time estimates of the Strichartz type. However due to the strict spectral localization assumption, the Besov spaces which should appear in the study (after summation over frequency bands) are naturally anisotropic; thus proving such estimates is likely to be very technical, and is postponed to future works.
\subsection{Strategy of the proof of Theorem~\ref{dispgrad}}
In the statement of Theorem~\ref{dispgrad}, there are two different results: the dispersive estimate in itself on the one hand, and its optimality on the other.
Our strategy of proof is closely related to the method developed in~\cite{bgx} and~\cite{hiero}, with additional non negligible technicalities.
\noindent In the situation of~\cite{bgx} where the Heisenberg group $\mathop{\mathbb H\kern 0pt}\nolimits^d $ is considered, the authors prove that there is no dispersion by exhibiting explicitly a Cauchy data~$f_0$ for which the solution~$f(t,\cdot)$ to the Schr\"odinger equation~(\ref{eq:sh}) satisfies
\begin{equation}\label{eq:nd} \forall \,q \in [1,\infty] \, , \quad \|f(t,\cdot)\|_{ L^q(\mathop{\mathbb H\kern 0pt}\nolimits^d)} = \|f_0\|_{ L^q(\mathop{\mathbb H\kern 0pt}\nolimits^d)} \, . \end{equation} More precisely, they take advantage of the fact that the Laplacian-Kohn operator~$ \Delta_{ \mathop{\mathbb H\kern 0pt}\nolimits^d }$ can be recast under the form \begin{equation} \label{eq:hk} \Delta_{ \mathop{\mathbb H\kern 0pt}\nolimits^d }= 4\sum_{j=1}^{d} (Z_j \overline Z_j +i\partial_s )\,, \end{equation} where $\bigl\{Z_1, \overline Z_1, ..., \ Z_d, \overline Z_d, \partial_s \bigr\}$ is the canonical basis of Lie algebra of left invariant vector fields on~$\mathop{\mathbb H\kern 0pt}\nolimits^d$ (see~\cite{bfg} and the references therein for more details). This implies that for a non zero function $f_0$ belonging to~$ \mbox {Ker} \: \big(\sum_{j=1}^d Z_j \overline Z_j \big)$, the solution of the Schr\"odinger equation on the Heisenberg group~$f(t)={\rm e}^{-it\Delta_{\mathop{\mathbb H\kern 0pt}\nolimits^d}}f_0$ actually solves a transport equation: $$f(z,s,t)= {\rm e}^{4d t\partial_s}f_0(z,s)=f_0 (z, s + 4dt) $$ and hence satisfies \refeq{eq:nd}. The arguments used in \cite{hiero} for general~H-type groups are similar to the ones developed in~\cite{bgx}: the dispersive estimate is obtained using an explicit formula for the solution, coming from Fourier analysis, combined with a stationary phase theorem. The Cauchy data used to prove the optimality is again in the kernel of an adequate operator, by a decomposition similar to~(\ref{eq:hk}).
\noindent As in \cite{bgx} and \cite{hiero}, the first step of the proof of Theorem~\ref{dispgrad} consists in writing an explicit formula for the solution of the equation by use of the Fourier transform. Let us point out that in the setting of~\cite{bgx} and~\cite{hiero}, irreducible representations are isotropic with respect to the dual of the center of the group; this isotropy allows to reduce to a one-dimensional framework and deduce the dispersive effect from a careful use of a stationary phase argument of~\cite{stein3}. As we have already seen in Paragraph \ref{defirreducible}, the irreducible representations are no longer isotropic in the general case of stratified Lie groups, and thus we adopt a more technical approach making use of Schr\"odinger representation and taking advantage of some properties of Hermite functions appearing in the explicit representation of the solutions derived by Fourier analysis (see Section~\ref{prooflemmas}). The optimality of the inequality is obtained as in~\cite{bgx} and~\cite{hiero}, by an adequate choice of the initial data.
\subsection{Organization of the paper}
The article is organized as follows. In Section~\ref{sec:explicit}, we write an explicit formulation of the solutions of the Schr\"odinger equation. Then, Section~\ref{sec:di} is devoted to
the proof of Theorem~\ref{dispgrad} and in
Section~\ref{optimality}, we discuss the optimality of the result and prove Proposition~\ref{remnodisp}.
\noindent Finally, we mention that the letter~$C$ will be used to denote a universal constant which may vary from line to line. We also use~$A\lesssim B$ to denote an estimate of the form~$A\leq C B$ for some constant~$C$.
\noindent {\bf Acknowledgements. } The authors wish to thank Corinne Blondel, Jean-Yves Charbonnel, Laurent Mazet, Fulvio Ricci and Mich\`ele Vergne for enlightening discussions. They also extend their gratitude to the anonymous referee for numerous remarks which improved the presentation of this paper, and for providing a simpler and more conceptual proof of Lemma~3.6 than our original proof.
\section{Explicit representation of the solutions}\label{sec:explicit}
\subsection{The convolution kernel}\label{stationaryphase*} Let~$f_0 $ belong to~$ {\mathcal S}(G)$ and let us consider~$f(t,\cdot )$ the solution to the free Schr\"odinger equation~(\ref{eq:sh}). In view of \eqref{formulafourierdelta}, we have $$ {\mathcal F}(f(t,\cdot)) (\lambda,\nu) =
{\mathcal F}(f_0) (\lambda,\nu)\, {\rm e}^{it|\nu|^2+it H(\lambda)}\, , $$ which implies easily (arguing as in the Appendix) that $f(t,\cdot)$ belongs to $ {\mathcal S}(G)$. Assuming that $f_0$ is strictly spectrally localized in the sense of Definition \ref{def:strispecloc}, there exists a smooth function $\theta$ compactly supported in a ring ${\mathcal C}$ of~$\mathop{\mathbb R\kern 0pt}\nolimits$ such that, defining $$ \Theta (\lambda) := \prod_{j=1}^d \theta \big ((P_j^2 + Q_j^2)(\lambda ) \big) \, , $$ then $$ {\mathcal F}(f(t,\cdot)) (\lambda,\nu) =
{\mathcal F}(f_0) (\lambda,\nu) \, \Theta (\lambda) \, {\rm e}^{it|\nu|^2+it H(\lambda)}\, . $$ \noindent Therefore by the inverse Fourier transform~(\ref{inversionformula}), we deduce that the function~$f (t,\cdot)$ may be decomposed in the following way: \begin{equation}\label{formulaftx}
f(t,x) = \kappa \, \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} {\rm{tr}} \, \Big((u^{\lambda,\nu}_{X(\lambda,x)})^\star \, {\mathcal F}(f_0) (\lambda,\nu) \, \Theta (\lambda) \, {\rm e}^{it|\nu|^2+it H(\lambda)} \Big) |{\mbox {Pf}} (\lambda) | \, d\nu\,d\lambda\, . \end{equation} We set for $ X\in\mathop{\mathbb R\kern 0pt}\nolimits^{n}$, \begin{equation}\label{defkt}
k_t(X) := \kappa \, \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} {\rm{tr}} \, \left(u^{\lambda,\nu}_X \, \Theta (\lambda) \, {\rm e}^{it|\nu|^2+it H(\lambda)} \right) |{\mbox {Pf}} (\lambda) | \,d\nu d\lambda \,. \end{equation} The function $k_t$ plays the role of a convolution kernel in the variables of the Lie algebra and we have the following result.
\begin{proposition}\label{firstreduction} If the function~$k_t$ defined in~{\rm(\ref{defkt})} satisfies \begin{equation}\label{Linftyboundk}
\forall t\in \mathop{\mathbb R\kern 0pt}\nolimits \, , \quad \| k_t \|_{L^\infty(\mathop{\mathbb R\kern 0pt}\nolimits^{n})} \leq \frac {C} {| t |^{ \frac{k}{2}}(1+ | t |^{\frac {p- 1} {2 }} )}\, \raise 2pt\hbox{,} \end{equation} then Theorem~{\rm\ref{dispgrad}} holds. \end{proposition}
\begin{proof} We write, according to~(\ref{formulaftx}), \begin{eqnarray*}
f(t,x) & = & \kappa \, \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} \int_{y\in G} {\rm{tr}} \, \Big((u^{\lambda,\nu}_{X(\lambda,x)} )^* u^{\lambda,\nu}_{X(\lambda,y)} \, \Theta (\lambda) \, {\rm e}^{it|\nu|^2+it H(\lambda)} \Big) f_0(y) |{\mbox {Pf}} (\lambda) | \, d\nu\,d\lambda\,d\mu(y)\, \\
& = &
\kappa \, \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} \int_{y\in G} {\rm{tr}} \, \Big( u^{\lambda,\nu}_{X(\lambda,y)} \, \Theta (\lambda) \, {\rm e}^{it|\nu|^2+it H(\lambda)} \Big) f_0(x \cdot y) |{\mbox {Pf}} (\lambda) | \, d\nu\,d\lambda\,d\mu(y)\, . \end{eqnarray*} Note that we have used the property that the map~$X \mapsto u^{\lambda,\nu}_X$ is a unitary representation, and the invariance of the Haar measure by translations.
\noindent Now we use the exponential coordinates $y\mapsto Y=(P(\lambda,y), Q(\lambda,y),Z(y), R(\lambda,y))$ and the fact that $d\mu(y)=dY$ is the Lebesgue measure; then we perform a linear orthonormal change of variables
f(t,x) = \kappa \, \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} \int _{(\tilde P,\tilde Q,Z,\tilde R)\in \mathop{\mathbb R\kern 0pt}\nolimits^{n}} {\rm{tr}} \, \Big(u^{\lambda,\nu}_{(\tilde P,\tilde Q,Z,\tilde R)} \, \Theta (\lambda) \, {\rm e}^{it|\nu|^2+it H(\lambda)} \Big)
\cr
\times f_0(x\cdot {\rm exp}(\tilde P,\tilde Q,Z,\tilde R)) |{\mbox {Pf}} (\lambda) | \, d\nu\,d\lambda\,d\tilde P\,d\tilde Q \,dZ\,d\tilde R\, . \qquad \cr} $$ Thanks to the Fubini Theorem and Young inequalities, we can write (dropping the $\tilde\,$ on the variables), \begin{eqnarray*}
|f(t,x)| & = & \left| \int _{( P, Q,Z, R)\in \mathop{\mathbb R\kern 0pt}\nolimits^{n}} k_t (P,Q,Z,R) f_0(x \cdot {\rm exp}( P ,Q,Z,R)) \, dP\,dQ\,dZ\, dR\right|\\
&\leq & \| k_t\| _{L^\infty(G)} \int _{(P,Q,Z,R)\in\mathop{\mathbb R\kern 0pt}\nolimits^{n}} \left| f_0(x \cdot {\rm exp}( P, Q,Z,R)) \right| dP\,dQ\,dZ\, dR \\
& \leq & \| k_t\| _{L^\infty(G)} \| f_0\|_{L^1(G)}.
\noindent In the next subsections, we make preliminary work by transforming the expression of $k_t$ and reducing the proof to statements equivalent to~(\ref{Linftyboundk}).
\subsection{Transformation of $k_t$: expression in terms of Hermite functions}\label{transformation} Decomposing the operator~$H(\lambda)$ in the basis of Hermite functions, and recalling notation~(\ref{eq:freqxy}) replaces~(\ref{defkt}) with $$
k_t(X)=\kappa \sum_{\alpha \in {\mathbb N}^d} \int_{\Lambda}\int_{\nu\in\mathfrak r^*_\lambda}
e^{it|\nu|^2+ it \zeta( \alpha, \lambda )} \prod_{j=1}^d \theta \big(\zeta_j ( \alpha, \lambda ) \big) \big( u^{\lambda,\nu}_{X} h_{\alpha,\eta(\lambda)} | h_{\alpha,\eta(\lambda)} \big) |{\mbox {Pf}} (\lambda) | \, d\nu\, d\lambda \, , \;\;X\in\mathop{\mathbb R\kern 0pt}\nolimits^{n} \, . $$ Using the explicit form of $u^{\lambda,\nu}_{X}$ recalled in~(\ref{defpilambda}) we find the following result.
\begin{lemma}\label{lem22}
There is a constant $\wt \kappa$ and a smooth function $F$ such that with the above notation, we have for $t \neq 0$ $$
k_t(P,Q,tZ, R)= \frac{\wt \kappa \, {\rm e}^{-i \frac {|R|^2}{ 4 t}}\,}{t^{\frac k 2} } \, \sum_{\alpha \in {\mathbb N}^d}
\int_{\Lambda} {\rm e}^{it\Phi_\alpha( Z , \lambda )} G_\alpha \big (P,Q,\eta(\lambda)\big) \, |{\rm {Pf}} (\lambda) | \, F(\lambda)\, d\lambda \, , $$ where the phase $\Phi_\alpha$ is given by $$ \Phi_\alpha( Z , \lambda ):= \zeta (\alpha,\lambda) -\lambda(Z) \, , $$ with Notation~{\rm(\ref{eq:freq})}, and the function $G_\alpha$ is given by the following formula, for all~$(P,Q,\eta) \in \mathop{\mathbb R\kern 0pt}\nolimits^{3d}$: \begin{equation}\label{defG}
G_\alpha(P,Q,\eta ):= \prod_{j=1}^d \theta \big( (2\alpha_j+1) \eta_j \big) \, g_{\alpha_j}\Big(\sqrt {\eta_j } \, P_j,\sqrt {\eta_j} \, Q_j \Big) \end{equation} while for each~$(\xi_1,\xi_2,n)$ in~$\mathop{\mathbb R\kern 0pt}\nolimits^2 \times {\mathbb N}$, using Notation~{\rm(\ref{defhn})}, \begin{equation}\label{defgxi1xi2} g_{n} (\xi_1,\xi_2):=e^{-i\frac{\xi_1 \xi_2 }2} \int_{\mathop{\mathbb R\kern 0pt}\nolimits} e^{-i \xi_2 \xi} h_{n}( \xi_1 +\xi) h_{n}(\xi) \, d\xi \, . \end{equation} \end{lemma}
\noindent Notice that~$(g_{n})_{n\in{\mathbf N}} $ is uniformly bounded in~$\mathop{\mathbb R\kern 0pt}\nolimits^2$ thanks to the Cauchy-Schwarz inequality and the fact that~$\|h_n\|_{L^2(\mathop{\mathbb R\kern 0pt}\nolimits) }=1$, and hence the same holds for~$(G_\alpha)_{\alpha\in{\mathbf N}^d}$ (in $\mathop{\mathbb R\kern 0pt}\nolimits^{3d}$).
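\noindent Indeed, for every~$(\xi_1,\xi_2) \in \mathop{\mathbb R\kern 0pt}\nolimits^2$ and every~$n \in {\mathbb N}$, $$ |g_{n} (\xi_1,\xi_2)| \leq \int_{\mathop{\mathbb R\kern 0pt}\nolimits} |h_{n}( \xi_1 +\xi)| \, |h_{n}(\xi)| \, d\xi \leq \|h_n(\xi_1+\cdot)\|_{L^2(\mathop{\mathbb R\kern 0pt}\nolimits)} \, \|h_n\|_{L^2(\mathop{\mathbb R\kern 0pt}\nolimits)} = 1 \, . $$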
\begin{proof} We begin by observing that for~$X = (P,Q,R,Z)$, \begin{eqnarray*}
I& : = & \left(u^{\lambda,\nu}_X h_{\alpha,\eta(\lambda)}\,|\,h_{\alpha,\eta(\lambda)}\right)\\
& = & {\rm e}^{-i\nu(R)-i\lambda(Z)}\int_{\mathop{\mathbb R\kern 0pt}\nolimits^d} {\rm e}^{-i\lambda([\xi +{1\over 2}P,Q])} h_{\alpha,\eta(\lambda)}(P+ \xi)h_{\alpha,\eta(\lambda)}(\xi)d \xi \, , \end{eqnarray*} with in view of \eqref{skw} $$\lambda\big(\big[ \xi +{1\over 2}P,Q\big]\big)=B(\lambda)\big(\xi +{1\over 2} P,Q\big)=\sum_{1\leq j\leq d} \eta_j(\lambda)Q_j\big(\xi_j+{1\over 2} P_j\big) \, .$$ As a consequence, $$I= {\rm e}^{-i\nu(R)-i\lambda(Z)} \prod_{1\leq j\leq d} \int_{\mathop{\mathbb R\kern 0pt}\nolimits} {\rm e}^{-i\eta_j(\lambda)(\xi_j+{1\over 2} P_j)Q_j} h_{\alpha_j,\eta_j(\lambda)}(P_j+ \xi_j)h_{\alpha_j,\eta_j(\lambda)}(\xi_j)d \xi_j \,.$$ The change of variables $\widetilde \xi_j=\sqrt{\eta_j(\lambda)}\, \xi_j$ gives, dropping the $ \, \widetilde~$ for simplicity, $$I= {\rm e}^{-i\nu(R)-i\lambda(Z)}\prod_{1\leq j\leq d}\int_{\mathop{\mathbb R\kern 0pt}\nolimits} {\rm e}^{-i \sqrt{\eta_j(\lambda)} \,Q_j \big(\xi_j+{1\over 2}\sqrt{\eta_j(\lambda)}P_j\big)} h_{\alpha_j}\big(\xi_j+\sqrt{\eta_j(\lambda)}P_j\big)h_{\alpha_j}(\xi_j)d\xi_j \, ,$$ which implies that
$$ k_t(P,Q,tZ,R)= \kappa \, \sum_{\alpha \in {\mathbb N}^d} \int_{\mathfrak r( \Lambda)}
{\rm e}^{-it\lambda(Z)-i\nu( R)} {\rm e}^{it \zeta( \alpha, \lambda )+it|\nu|^2}
G_\alpha \big(P,Q,\eta(\lambda) \big) |{\rm {Pf}} (\lambda) | \, d\nu\, d\lambda \, .
$$ It is well known (see for instance Proposition 1.28 in \cite{bcd}) that for $t \neq 0$ \begin{equation}\label{schrodispersion}
\int_{\mathop{\mathbb R\kern 0pt}\nolimits^k} {\rm e}^{-i (\nu\mid R) +it|\nu|^2} \, d\nu = \Big(\frac{i\pi}{t }\Big)^{\frac k 2} {\rm e}^{-i \frac {|R|^2}{ 4 t}}\, \raise 2pt\hbox{,}
\end{equation} where~$(\cdot\mid \cdot)$ denotes the Euclidean scalar product on $\mathop{\mathbb R\kern 0pt}\nolimits^k$. This implies that, for $t \neq 0$ $$
\left| k_t(P,Q,tZ, R)\right| \lesssim \frac{1}{|t|^{\frac k 2} } \,\Bigl| \sum_{\alpha \in {\mathbb N}^d}
\int_{\Lambda} {\rm e}^{it\Phi_\alpha( Z , \lambda )} G_\alpha \big (P,Q,\eta(\lambda)\big) \, |{\mbox {Pf}} (\lambda) | \, F(\lambda)\, d\lambda\Bigr| \, , $$ with $F$ the Jacobian of the change of variables $f: \mathfrak r^*_\lambda \longrightarrow \mathop{\mathbb R\kern 0pt}\nolimits^k\, ,$ which is a smooth function. Lemma~\ref{lem22} is proved. \end{proof}
\subsection{Transformation of the kernel~$ k_t$: change of variable}\label{tokcov}
\noindent We are then reduced to establishing that the kernel $ \wt k_t(P,Q,tZ)$ defined by $$ \wt k_t(P,Q,tZ):= \sum_{\alpha \in {\mathbb N}^d}
\int_{\Lambda} {\rm e}^{it\Phi_\alpha( Z , \lambda )} G_\alpha\big(P,Q,\eta(\lambda)\big) \, |{\mbox {Pf}} (\lambda) | \, F(\lambda)\, d\lambda $$ satisfies \begin{equation}\label{simplest}
\forall t\in \mathop{\mathbb R\kern 0pt}\nolimits \, , \quad \| \wt k_t \|_{L^\infty(G)} \leq\frac {C} {1+ | t |^{\frac {p- 1} {2} } } \, \cdotp
\end{equation}To this end, let us define~$m: = |\alpha| = \displaystyle \sum_{j=1}^d \alpha_j$, and when~$m\neq 0$, let us set~$\gamma := m\lambda \in \mathop{\mathbb R\kern 0pt}\nolimits^p$. By construction of~$\eta(\lambda)$ (which is homogeneous of degree one), we have \begin{equation}\label{defetatilde} \forall m \neq 0 \, , \quad \eta(\lambda) =\widetilde \eta_m(\gamma): = \frac1m \eta(\gamma) \,. \end{equation} Let us check that if~$\lambda$ lies in the support of~$\theta \big (\zeta_j( \alpha, \cdot )\big) $, then~$\gamma$ lies in a fixed ring~$\mathcal C$ of~$\mathop{\mathbb R\kern 0pt}\nolimits^p$, independent of~$\alpha$.
On the one hand we note that there is a constant~$C>0$ such that on the support of~$\theta \big (\zeta_j( \alpha, \lambda )\big) $, the variable~$\gamma$ must satisfy \begin{equation}\label{gammabounded} \forall m \neq 0 \, , \quad (2 \alpha_j + 1) \eta_j(\gamma) \leq Cm \, , \end{equation}
for all $\alpha\in{\mathbf N}^d$ such that $|\alpha|=m$. Since for each~$j$ the function~$\eta_j$ is positive and homogeneous of degree one, $\eta_j(\gamma)$ goes to infinity with~$|\gamma|$, so~(\ref{gammabounded}) implies that~$\gamma$ must remain bounded on the support of~$\theta \big (\zeta_j( \alpha, \lambda )\big) $.
Moreover thanks to~(\ref{gammabounded}) again, it is clear that
the bound may be made uniform in~$m$.
\noindent Now let us prove that~$\gamma$ may be bounded from below uniformly. We know that there is a positive constant~$c$ such that for~$\lambda$ on the support of~$\theta \big (\zeta_j( \alpha, \lambda )\big) $, we have
\begin{equation}\label{constraint} \forall m \neq 0 \, , \quad \zeta_j (\alpha, \gamma) \geq cm\, . \end{equation}
Writing~$\gamma = |\gamma| \hat \gamma$ with~$\hat \gamma$ on the unit sphere of~$\mathop{\mathbb R\kern 0pt}\nolimits^p$, we find $$
|\gamma| \geq \frac {cm}{\zeta_j (\alpha, \hat \gamma)} \,\cdotp $$
Defining
$$
C_j:=\max_{| \hat\gamma| = 1} \eta_j(\hat\gamma) < \infty \, ,
$$ it is easy to deduce that if~(\ref{constraint}) is satisfied, then $$
|\gamma| \geq \frac{cm }{(2m+d)\displaystyle \max_{1 \leq j \leq d }C_j} \, \raise 2pt\hbox{,}
$$ hence~$\gamma$ lies in a fixed ring of~$\mathop{\mathbb R\kern 0pt}\nolimits^p$, independent of~$\alpha \neq 0$. This fact will turn out to be important when we perform the stationary phase argument.
\noindent Then we can rewrite the expression of~$ \wt k_t(P,Q,tZ)$ in terms of the variable~$\gamma$, which in view of the homogeneity of the Pfaffian produces the following formula: $$ \begin{aligned} \wt k_t(P,Q,tZ)&=
\int_\Lambda {\rm e}^{it\Phi_0(Z , \lambda)} G_0(P,Q,\eta(\lambda)) |{\mbox {Pf}} (\lambda) |\, F(\lambda)\,d \lambda\\
&\quad + \sum_{m \in {\mathbb N^*}}\sumetage{\alpha \in {\mathbb N}^d }{|\alpha | = m} m^{-p-d}
\int {\rm e}^{it\Phi_\alpha(Z, \frac \gamma m)} G_\alpha \big(P,Q, \widetilde\eta_m(\gamma) \big) \, |{\mbox {Pf}} (\gamma) | \, F \big(\frac \gamma m\big)\, d \gamma \,. \end{aligned} $$
Note that the series in~$m$ is convergent: the sum over~$|\alpha |= m$ contributes a power~$m^{d-1}$, whence a series with general term~$m^{-p-1}$, which converges since $p \geq 1$. Since the functions $G_\alpha$ are uniformly bounded with respect to $\alpha \in {\mathbb N}^d$ and $F$ is smooth, there is a positive constant~$C$ such that
$$ \forall t\in \mathop{\mathbb R\kern 0pt}\nolimits\, , \quad \| \wt k_t \|_{L^\infty(G)} \leq C\, .$$ In order to establish the dispersive estimate, it suffices then to prove that \begin{equation}\label{goal}
\forall t\neq0 \, , \quad \| \wt k_t \|_{L^\infty(G)} \leq\frac {C} { | t |^{\frac {p- 1} {2}} } \, \cdotp \end{equation}
\section{End of the proof of the dispersive estimate} \label{sec:di}
\noindent In order to prove~\refeq{goal},
we decompose $\wt k_t$ into two parts, writing $$ \wt k_t(P,Q,tZ)= k^1_t (P,Q,tZ)+ k^2_t (P,Q,tZ) \, , \quad $$ with, for a constant $c_0$ to be fixed later on independently of $m$, \begin{equation}\label{defkt1} \begin{aligned}
&k^1_t(P,Q,tZ):= \int_{ |\nabla_\lambda \Phi_0 ( Z , \lambda )|\leq c_0 } {\rm e}^{it\Phi_0(Z, \lambda)} G_0 \big (P,Q,\eta(\lambda)\big) |{\mbox {Pf}} (\lambda) |\, F (\lambda)\, d \lambda\\
& + \, \sum_{m \in {\mathbb N^*}}\sumetage{\alpha \in {\mathbb N}^d }{|\alpha | = m} m^{-p-d}
\int_{ |\nabla_\gamma (\Phi_\alpha ( Z, \frac \gamma m ))|\leq c_0 } {\rm e}^{it\Phi _\alpha(Z, \frac \gamma m)} \, G_\alpha \big(P,Q, \widetilde\eta_m(\gamma) \big) F \big(\frac \gamma m\big) |{\mbox {Pf}} (\gamma) | \, d \gamma \,.\end{aligned} \end{equation} In the following subsections, we successively show~(\ref{goal}) for $k^1_t$ and $k^2_t$.
\subsection{Stationary phase argument for $k^1_t$}\label{statphas} To establish Estimate \eqref{goal}, let us first concentrate on~$k^1_t$. As usual in this type of problem, we define for each integral of the series defining~$k^1_t$ a vector field that commutes with the phase, prove an estimate for each term and finally check the convergence of the series. More precisely, in the case when~$\alpha \neq 0$ and $t>0$ (the case $t<0$ is dealt with exactly in the same manner), we consider the following first order operator: $$
{\mathcal L}_\alpha^1:=\frac{\mbox{Id} - i \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))\cdot \nabla_\gamma}{1+t | \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))|^2} \, \cdot $$ Clearly we have $$ {\mathcal L}_\alpha^1 \, {\rm e}^{it \Phi_\alpha ( Z , \frac \gamma m )}={\rm e}^{it \Phi_\alpha ( Z , \frac \gamma m )} \, .$$
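\noindent Indeed, since $\nabla_\gamma \, {\rm e}^{it \Phi_\alpha ( Z , \frac \gamma m )} = it \, \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m )) \, {\rm e}^{it \Phi_\alpha ( Z , \frac \gamma m )}$, the numerator produced by ${\mathcal L}_\alpha^1$ is $$ \Big(1 - i \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))\cdot \big( it \, \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))\big)\Big) \, {\rm e}^{it \Phi_\alpha ( Z , \frac \gamma m )} = \big(1+t | \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))|^2\big) \, {\rm e}^{it \Phi_\alpha ( Z , \frac \gamma m )} \, , $$ which is exactly compensated by the denominator.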
\noindent Let us admit the next lemma for the time being. \begin{lemma}\label{transposeofL} For any integer~$N$, there is a smooth function~$\theta_N$ compactly supported on a ring of~$\mathop{\mathbb R\kern 0pt}\nolimits^p$ and a positive constant $C_N$ such that defining \begin{equation}\label{def:psialpha}
\psi_{\alpha}(\gamma):=G_\alpha \big(P,Q, \widetilde\eta_m(\gamma) \big) F \big(\frac \gamma m\big) \, |{\rm {Pf}} (\gamma) | \,, \end{equation} recalling notation~{\rm(\ref{defetatilde})}, we have $$
|(^t{} {\mathcal L}_\alpha^1 )^N\psi_{\alpha}(\gamma)\,| \leq C_N \, m^N \, \theta_N(\gamma) \,\big(1+\big|t^\frac12 \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))\big|^2\big)^{-N} \, . $$ \end{lemma} \noindent Returning to~$k^1_t$, let us define (recalling that~$\gamma$ belongs to a fixed ring~$ {\mathcal C}$) $$
\displaystyle {\mathcal C}_\alpha(Z) :=\big \{\gamma \in {\mathcal C}; | \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))|\leq c_0\big \} $$ and let us write for any integer~$N$ and~$\alpha \neq 0$ (which we assume to be the case for the rest of the computations) \begin{equation}\label{defIalpha} \begin{aligned} I_{\alpha} (Z)&:= \int_{ {\mathcal C}_\alpha(Z) } {\rm e}^{it \Phi_\alpha ( Z, \frac \gamma m )} \psi_{\alpha}(\gamma) \, d \gamma \\ &= \int_{ {\mathcal C}_\alpha(Z) } {\rm e}^{it \Phi_\alpha ( Z, \frac \gamma m )} (^t{} {\mathcal L}_\alpha^1 )^N \psi_{\alpha}(\gamma) \, d \gamma \, , \end{aligned} \end{equation} where
$ \psi_{\alpha}(\gamma)$ has been defined in~(\ref{def:psialpha}). Then by Lemma~\ref{transposeofL} we find that for each integer~$N$ there is a constant~$C_N$ such that \begin{equation}\label{usformula}
|I_{\alpha}(Z)|\leq C_N \,m^{N } \int _{ {\mathcal C}_\alpha(Z) } \theta_N(\gamma) \big(1+t \big| \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))\big|^2\big)^{-N}\, d \gamma \, . \end{equation} Then the end of the proof relies on three steps: \begin{enumerate} \item a careful analysis of the properties of the support of the integral, \item a change of variables which leads to the estimate in $t^{-(p-1)/2}$, \item a control in $m$ in order to prove the convergence of the sum over~$m $. \end{enumerate}
Before entering into details for each step, let us observe that by definition, we have
$$
\Phi_\alpha ( Z, \frac \gamma m ) = \frac 1 m \big(\zeta (\alpha,\gamma) -\gamma(Z) \big)\, , $$
with $ \gamma(Z) = (A\gamma | Z) = (\gamma | ^tA Z)$ for some invertible matrix~$A$. Performing a change of variables in $\gamma$ if necessary, we can assume without loss of generality that $A={\rm Id}$. Thus we write \begin{equation}\label{formulaPhialphazeta}
\nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))= \frac 1 m \big(\nabla_\gamma \zeta (\alpha,\gamma) - Z \big)\, . \end{equation}
\subsubsection{Analysis of the support of the integral defining $I_\alpha(Z)$} Let us prove the following result. \begin{proposition}\label{gammaZnotzero} One can choose the constant~$c_0$ in~{\rm(\ref{defkt1})} small enough so that if~$\gamma $ belongs to~${\mathcal C}_\alpha(Z) $, then~$\gamma \cdot Z\neq 0$. \end{proposition} \begin{proof} We write
$$ \gamma \cdot Z = \gamma \cdot \nabla_\gamma\zeta(\alpha,\gamma) + \gamma \cdot (Z-\nabla_\gamma\zeta(\alpha,\gamma))\,,$$ and observing that thanks to homogeneity arguments $\gamma \cdot \nabla_\gamma\zeta(\alpha,\gamma)= \zeta(\alpha,\gamma)$, we deduce that for any~$\displaystyle \gamma \in {\mathcal C}_\alpha(Z) $
$$\left| \gamma \cdot Z \right| \geq \left| \zeta(\alpha,\gamma)\right|- \left| \gamma \right| \left| Z-\nabla_\gamma\zeta(\alpha,\gamma) \right| \, .$$
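\noindent Let us point out that the identity $\gamma \cdot \nabla_\gamma\zeta(\alpha,\gamma)= \zeta(\alpha,\gamma)$ used above is nothing but Euler's relation for functions which are homogeneous of degree one: differentiating the identity $\zeta(\alpha,s\gamma)=s\,\zeta(\alpha,\gamma)$ with respect to~$s$ and taking~$s=1$ gives precisely $\gamma \cdot \nabla_\gamma\zeta(\alpha,\gamma)= \zeta(\alpha,\gamma)$.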
Since, as argued above, $\gamma$ belongs to a fixed ring and $\zeta(\alpha,\lambda)=0$ if and only if $\lambda =0$ (as noticed in Section~\ref{freq}), there is a positive constant $c$ such that for any $\displaystyle \gamma \in {\mathcal C}_\alpha(Z) $
$$ \left| \zeta(\alpha,\gamma)\right| \geq m c \, ,$$
which implies in view of the definition of $\displaystyle {\mathcal C}_\alpha(Z) $ that there is a positive constant $\wt c$ depending only on the ring $ {\mathcal C}$ such that
$$\left| \gamma \cdot Z \right| \geq m c - m c_0 \, \wt c \,.$$ This ensures the desired result, by choosing the constant $c_0$ in the definition of $k_t^1$ smaller than~$ c / { \wt c}$. Proposition~\ref{gammaZnotzero} is proved. \end{proof}
\subsubsection{A change of variables: the diffeomorphism ${\mathcal H}$} We can assume without loss of generality (otherwise the integral is zero) that ${\mathcal C}_\alpha(Z) $ is not empty, and in view of Proposition~\ref{gammaZnotzero}, we can write for any~$\gamma \in {\mathcal C}_\alpha(Z) $ the following orthogonal decomposition (since~$Z \neq 0$): \begin{equation}\label{chgtvariable}
\frac 1 m \, \nabla_\gamma \zeta (\alpha, \gamma)=\wt \Gamma_1\, \hat Z_1+\wt \Gamma'\, , \quad \, \hbox{with}\, \quad
\wt \Gamma_{1}:= \Bigl(\frac 1 m \, \nabla_\gamma \zeta (\alpha, \gamma)\Big|\hat Z_1\Bigr)\, \, \hbox{and}\, \, \hat Z_1 := \frac Z {|Z|}\, \cdot \end{equation} Since~$\wt \Gamma'$ is orthogonal to the vector~$Z$, we infer that \begin{equation}\label{bound}
\big |\nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m )) \big | = \frac 1 m \, \left|Z- \nabla_\gamma \zeta (\alpha, \gamma)\right| \geq |\wt \Gamma'|\, . \end{equation} Let us consider in $ \mathop{\mathbb R\kern 0pt}\nolimits^p$ an orthonormal basis $(\hat Z_1 , \dots ,\hat Z_p )$. Thanks to Proposition~\ref{gammaZnotzero}, we have~$\gamma\cdot\hat Z_1\not=0$ on the support of the integral defining $I_\alpha(Z)$. Obviously, the vector $\wt \Gamma'$ defined by~\eqref{chgtvariable} belongs to the vector space generated by~$(\hat Z_2 , \dots ,\hat Z_p )$. To investigate the integral~$I_{\alpha}(Z)$ defined in~(\ref{defIalpha}), let us consider the map~$ {\mathcal H}:\gamma \mapsto \wt \gamma' $ defined by \begin{equation}\label{defchgtvariable}
{\mathcal C}_\alpha(Z) \ni \gamma \longmapsto {\mathcal H} (\gamma):= (\gamma \cdot\hat Z_1)\, \hat Z_1 +\sum^{p}_{k=2}(\wt \Gamma' \cdot\hat Z_k)\, \hat Z_k=: \sum^{p}_{k=1}\wt \gamma'_k\, \hat Z_k\, . \end{equation} \begin{proposition}\label{diffeo} The map~${\mathcal H}$ realizes a diffeomorphism from~$ {\mathcal C}_\alpha(Z)$ into a fixed compact set of~$ \mathop{\mathbb R\kern 0pt}\nolimits^p$.\end{proposition} \begin{proof}
It is clear that the smooth function ${\mathcal H}$ maps ${\mathcal C}_\alpha(Z) $ into a fixed compact set ${\mathcal K}$ of~$ \mathop{\mathbb R\kern 0pt}\nolimits^p$ and that
$$
\wt \gamma'_1= \gamma \cdot \hat Z_1 \, , \,\quad \mbox{and for} \, \quad 2 \leq k \leq p \, , \, \, \, \displaystyle \wt \gamma'_k= \frac 1 m \, \nabla_\gamma \zeta (\alpha, \gamma) \cdot \hat Z_k \, .
$$
\noindent Now let us prove that thanks to Assumption~{\ref{keyp}}, the map ${\mathcal H}$ constitutes a diffeomorphism. Indeed, by straightforward computations we find that~$D{\mathcal H}$, the differential of ${\mathcal H}$ satisfies: $$ \begin{aligned}\langle D{\mathcal H}(\gamma) \hat Z_1, \hat Z_1\rangle &= 1 \\ \langle D{\mathcal H}(\gamma) \hat Z_1, \hat Z_k\rangle &= \Big\langle \frac 1 m D_\gamma^2 \zeta (\alpha, \gamma) \hat Z_1, \hat Z_k \Big\rangle \quad \hbox{for} \, \, 2 \leq k \leq p \,, \\ \langle D{\mathcal H}(\gamma) \hat Z_j, \hat Z_k \rangle &= \Big\langle \frac 1 m D_\gamma^2 \zeta (\alpha, \gamma) \hat Z_j, \hat Z_k \Big\rangle \quad \hbox{for} \, \, 2 \leq j, k \leq p \quad \hbox{and}\\ \langle D{\mathcal H}(\gamma) \hat Z_j, \hat Z_1 \rangle &= 0 \quad \hbox{for} \, \, 2 \leq j \leq p\,. \end{aligned}$$ Proving that ${\mathcal H}$ is a diffeomorphism amounts to showing that for any $\displaystyle \gamma \in {\mathcal C}_\alpha(Z) $, the kernel of~$D{\mathcal H}(\gamma)$ reduces to $\{0\}$. In view of the above formulas, if~$ \displaystyle V= \sum^{p}_{j=1} V_j \hat Z_j$ belongs to the kernel of~$D{\mathcal H}(\gamma)$ then~$V_1= V \cdot \hat Z_1 =0$ and~$D_\gamma^2\zeta(\alpha,\gamma)V\cdot \hat Z_k=0$ for $2\leq k\leq p$. Thus we can write~$D^2_\gamma\zeta(\alpha,\gamma)V=\tau \hat Z_1$ for some $\tau\in\mathop{\mathbb R\kern 0pt}\nolimits$. Let us point out that since the function $\zeta(\alpha,\cdot)$ is homogeneous of degree $1$, then $D_\gamma^2 \zeta (\alpha, \gamma) \gamma = 0$. We deduce that $$0=D^2_\gamma\zeta(\alpha,\gamma)\gamma \cdot V= \gamma \cdot D^2_\gamma\zeta(\alpha,\gamma) V =\tau\, \gamma \cdot \hat Z_1\,.$$ Since for all $\displaystyle \gamma \in {\mathcal C}_\alpha(Z) $, $\gamma\cdot \hat Z_1\not=0$, we find that
$\tau=0$ and therefore $D^2_\gamma\zeta(\alpha,\gamma)V=0$. But Assumption~\ref{keyp} states that the Hessian $ \displaystyle D_\gamma^2 \zeta (\alpha, \gamma)$ is of rank $p-1$, so we conclude that $V$ is collinear with $\gamma$. Since~$V \cdot \hat Z_1 =0$ whereas~$\gamma \cdot \hat Z_1\neq 0$, this forces~$V=0$, which ends the proof of the proposition. \end{proof}
\medbreak
\noindent We can therefore perform the change of variables defined by \eqref{defchgtvariable} in the right-hand side of~\eqref{usformula}, to obtain $$ |I_{\alpha}(Z)|
\leq C_N \,m^{ N } \int_{{\mathcal K}}
\frac 1 {\left(1+t|\wt \gamma'|^2\right)^{N}}\, d\wt \gamma'\,d \wt \gamma'_1 \, . $$
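\noindent For the reader's convenience, let us make the rescaling behind the forthcoming bound explicit: since~$\wt \gamma'_1$ ranges in a bounded interval, we have for~$t>0$ (the case $t<0$ is identical, with~$|t|$) $$ \int_{{\mathcal K}} \frac {d\wt \gamma'\,d \wt \gamma'_1} {\left(1+t|\wt \gamma'|^2\right)^{N}} \lesssim \int_{\mathop{\mathbb R\kern 0pt}\nolimits^{p-1}} \frac {d\wt \gamma'} {\left(1+t|\wt \gamma'|^2\right)^{N}} = t^{-\frac{p-1}2} \int_{\mathop{\mathbb R\kern 0pt}\nolimits^{p-1}} \frac {d \gamma^\sharp} {\left(1+|\gamma^\sharp|^2\right)^{N}} \, \raise 2pt\hbox{,} $$ where~$ \gamma^\sharp = t^{\frac 1 2} \wt \gamma'$; the last integral is finite as soon as~$2N>p-1$, which holds for~$N=p-1$ when~$p \geq 2$ (the case~$p=1$ being trivial).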
\subsubsection{End of the proof: convergence of the series} Choosing $ N = p-1$, we deduce from the above rescaling that there is a constant~$C$ such that $$
|I_{\alpha}(Z)| \leq C |t|^{-\frac{p-1}2} \, m^{p-1} \, , $$ which gives rise to $$
\Big| \int_{ {\mathcal C}_\alpha ( Z ) } {\rm e}^{it \Phi_\alpha ( Z, \frac \gamma m )} \psi_\alpha(\gamma) d \gamma\Big|\leq
C |t|^{-\frac{p-1}2} m^{p-1 }. $$ We get exactly in the same way that $$
\Big | \int_{ |\nabla_\lambda \Phi_0 ( Z , \lambda )|\leq c_0 } {\rm e}^{it\Phi_0(Z, \lambda)} G_0 \big (P,Q,\eta(\lambda)\big) |{\mbox {Pf}} (\lambda) |\, F (\lambda)\, d \lambda \Big | \leq C |t|^{-\frac{p-1}2} . $$ Finally returning to the kernel~$k^1_t$ defined in~(\ref{defkt1}), we get $$ \begin{aligned}
|k^1_t(P,Q,tZ)| & \leq C |t|^{-\frac{p-1}2} + C\, |t|^{-\frac{p-1}2}\, \sum_{m \in {\mathbb N^*}} m^{d-1}m^{-d-p} \, m^{p-1} \\
& \leq C |t|^{-\frac{p-1}2} \, , \end{aligned} $$ since the series over~$m$ is convergent. The dispersive estimate is thus proved for $k^1_t$.\\
\subsection{Stationary phase argument for $k_t^2$}
\noindent We now prove~(\ref{goal}) for $k^2_t $, which is easier since the gradient of the phase is bounded from below. We claim that there is a constant $C$ such that \begin{equation}\label{estphas1}
|k^2_t (P,Q,tZ) | \leq \frac {C} {\,|t|^{\frac{p-1}2} }\, \cdot \end{equation} This can be achieved as above by means of adequate integrations by parts. Indeed, in the case when~$\alpha \neq 0$, consider the following first order operator: $$
{\mathcal L}_\alpha^2:= - i \frac{ \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m )) \cdot \nabla_\gamma}{| \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))|^2} \, \cdot$$ Note that when $\alpha=0$, the arguments are the same without performing the change of variable $\lambda={\gamma/ m}$.\\ The operator ${\mathcal L}^2_\alpha$ obviously satisfies $${\mathcal L}_\alpha^2 \, {\rm e}^{it \Phi_\alpha ( Z, \frac \gamma m )}= t \, {\rm e}^{it \, \Phi_\alpha ( Z, \frac \gamma m )}\, ,$$ hence by repeated integrations by parts, we get $$ \begin{aligned}
J_\alpha(P,Q,tZ) &:= \int_{ | \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))|\geq c_0 } {\rm e}^{it \Phi_\alpha ( Z, \frac \gamma m )} \psi_{\alpha}(\gamma)\, d \gamma \\
&= \frac{ 1}{t^N}\, \int_{ | \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))|\geq c_0 } {\rm e}^{it \Phi_\alpha ( Z, \frac \gamma m )} ({}^t{\mathcal L}_\alpha^2)^N \psi_{\alpha}(\gamma)\, d\gamma \, . \end{aligned} $$ Let us admit the following lemma for a while. \begin{lemma}\label{transposeofI} For any integer~$N$, there is a smooth function~$\theta_N$ supported in a compact set of~$\mathop{\mathbb R\kern 0pt}\nolimits^p$ such that $$
\big|(^t{} {\mathcal L}_\alpha^2 )^N\psi_{\alpha}(\gamma) \, \big| \leq \frac{\theta_N(\gamma) \, m^{N } }{| \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))|^{N } }\, \cdot $$ \end{lemma} \noindent One then observes that if $\gamma$ is in the support of the integral defining $k^2_t$, the lemma implies $$
\big|(^t{} {\mathcal L}_\alpha^2 )^N\psi_{\alpha}(\gamma) \, \big| \leq \frac{\theta_N(\gamma)}{c_0^N} \, m^{N}. $$ This estimate ensures the result as in Section~\ref{statphas} by taking $N=p-1$.
\subsection{Proofs of Lemma {\bf\ref{transposeofL}} and Lemma {\ref{transposeofI}}}\label{prooflemmas}
Lemma~\ref{transposeofL} is an obvious consequence of the following Lemma~\ref{transposeofLbis}, taking~$(a,b)\equiv (0,0)$. We omit the proof of Lemma~{\rm\ref{transposeofI}} which consists in a straightforward modification of the arguments developed below.
\begin{lemma}\label{transposeofLbis} For any integer~$N$, one can write $$ (^t{} {\mathcal L}_\alpha^1 )^N\psi_{\alpha}(\gamma) = f_{N,m} \big(\gamma,t^\frac12 \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))\big ) \, , $$
with $|\alpha|=m$, and where~$f_{N,m}$ is a smooth function supported on~${\mathcal C} \times \mathop{\mathbb R\kern 0pt}\nolimits^{p}$ with~$\mathcal C$ a fixed ring of~$ \mathop{\mathbb R\kern 0pt}\nolimits^{p}$, such that for any pair~$(a,b) \in {\mathbb N}^p \times {\mathbb N}^{p}$, there is a constant~$C$ (independent of $m$) such that $$
|\nabla_\gamma^a \nabla_\Theta^b f_{N,m} (\gamma,\Theta) | \leq C \, m^{N+|a| } (1+|\Theta|^2)^{-N - \frac{ |b|}{2}} \, . $$ \end{lemma} \begin{proof}[Proof of Lemma~{\rm\ref{transposeofLbis}}] Let us prove the result by induction over~$N$.
We start with the case when~$N$ is equal to zero. Notice that in that case the function~$ f_{0,m} (\gamma,\Theta) = \psi_{\alpha}(\gamma) $ does not depend on the quantity~$\Theta = t^\frac12 \nabla_\gamma (\Phi_\alpha ( Z, \frac \gamma m) )$, so we need to check that for any~$a \in {\mathbb N}^p$, there is a constant~$C$ such that \begin{equation}\label{formula}
|\nabla_\gamma^a \psi_{\alpha}(\gamma) | \leq C \, m^{|a| }\, , \end{equation}
when $|\alpha|=m$. The case when~$a=0$ is obvious thanks to the uniform bound on~$G_\alpha$. To deal with the case~$|a| \geq 1 $, we state the following technical result, which will be proved at the end of this subsection. \begin{lemma}\label{hermitederivatives} For any integer~$k$, there is a constant~$C $ such that the following bound holds for the function~$g_n$ defined in~{\rm(\ref{defgxi1xi2})}, $n\in{\mathbb N}$: $$
\forall (\xi_1,\xi_2) \in \mathop{\mathbb R\kern 0pt}\nolimits^2 \, , \quad \big | ( \xi_1 \partial_{\xi_1}+\xi_2 \partial_{\xi_2} )^k g_n (\xi_1, \xi_2) \big | \leq C \,(n+1)^k \, . $$ \end{lemma} \noindent Let us now compute~$\nabla_\gamma^a \psi_{\alpha}(\gamma)$. Recall that according to~(\ref{def:psialpha}), $$ \begin{aligned}
\psi_{\alpha}(\gamma) &= G_\alpha\big (P,Q,\widetilde \eta_m(\gamma)\big ) F \Big(\frac \gamma m\Big) \, |{\mbox {Pf}} (\gamma) |\\
&= F\left({\gamma\over m}\right)\, \prod_{j=1}^d \psi_{\alpha,j}(\gamma)\, , \end{aligned} $$ where $$ \psi_{\alpha,j}(\gamma) := \eta_j(\gamma)\, \theta \big ((2\alpha_j+1) \widetilde \eta_{j,m}(\gamma)\big) \, g_{\alpha_j} \big(\sqrt { \widetilde \eta_{j,m}(\gamma) }P_j,\sqrt { \widetilde \eta_{j,m}(\gamma) }Q_j\big) \, , \quad \widetilde\eta_{j,m}(\gamma) := \frac1m \eta_{ j}(\gamma) \, . $$ We compute $$ \begin{aligned}
\nabla_\gamma^a \psi_{\alpha,j}(\gamma) &= \sumetage{b \in {\mathbb N}^p}{0 \leq |b| \leq |a|} \binom{a}{b} \nabla_\gamma^b \big( \theta \big ((2\alpha_j+1) \widetilde \eta_{j,m}(\gamma)\big) \big)\nabla_\gamma^{a-b} \big( \eta_j(\gamma) g_{\alpha_j} \big(\sqrt { \widetilde \eta_{j,m}(\gamma) }P_j,\sqrt { \widetilde \eta_{j,m}(\gamma) }Q_j\big)\big) \, . \end{aligned} $$
Let us assume first that~$|a-b| = 1$. Then we write, for some~$1 \leq \ell \leq p$, $$ \begin{aligned} \partial_{\gamma_\ell} \Big( \eta_j(\gamma) g_{\alpha_j} \big(\sqrt { \widetilde \eta_{j,m}(\gamma) }P_j&,\sqrt { \widetilde \eta_{j,m}(\gamma) }Q_j\big)\Big)= \partial_{\gamma_ \ell} \eta_j(\gamma) g_{\alpha_j} \big(\sqrt { \widetilde \eta_{j,m}(\gamma) }P_j,\sqrt { \widetilde \eta_{j,m}(\gamma) }Q_j\big) \\ & \quad + \eta_j(\gamma) \frac{ \partial_{\gamma_ \ell} \widetilde \eta_{j,m}(\gamma) }{ 2 { \widetilde \eta_{j,m}(\gamma)} } \times \big( (\xi_1 \partial_{\xi_1} + \xi_2\partial_{\xi_2}) g_{\alpha_j} \big)\Big(\sqrt { \widetilde \eta_{j,m}(\gamma) }P_j,\sqrt { \widetilde \eta_{j,m}(\gamma) }Q_j\Big) \, . \end{aligned} $$ Next we use the fact that there is a constant~$C$ such that on the support of~$ \theta \big ((2\alpha_j+1) \widetilde \eta_{j,m}(\gamma)\big)$, $$
\widetilde \eta_{j,m}(\gamma) \geq \frac1{Cm}\quad \mbox{and} \quad \left|
\partial_{\gamma_ \ell} \widetilde \eta_{j,m}(\gamma) \right| \leq \frac C m\,,$$ so applying Lemma~\ref{hermitederivatives} gives $$
|\nabla_\gamma \psi_{\alpha,j}(\gamma) | \lesssim { \alpha_j+1 }\,. $$
Recalling that~$\alpha_j+1 \leq 2m$ and that, for all $j \in \{1,\cdots,d\}$, $\psi_{\alpha,j}$ is uniformly bounded, this easily achieves the proof of Estimate \eqref{formula} in the case~$|a|=1$ by taking the product over~$j$. Once we have noticed that
$$(\alpha_1+1)^{a_1} \cdots (\alpha_d+1)^{a_d} \lesssim (\alpha_1+\cdots+\alpha_d+1)^{a_1+\cdots+a_d}\, ,$$ the general case (when~$|a|>1$) is dealt with identically; we omit the details.
\noindent Finally let us proceed with the induction: assume that for some integer~$N$ one can write $$ (^t{} {\mathcal L}_\alpha^1 )^{N-1}\psi_{\alpha}(\gamma) = f_{N-1,m} \big(\gamma,t^\frac12 \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))\big ) $$ where~$f_{N-1,m}$ is a smooth function supported on~${\mathcal C} \times \mathop{\mathbb R\kern 0pt}\nolimits^{p}$, such that for any pair~$(a,b) \in {\mathbb N}^p \times {\mathbb N}^{p}$, there is a constant~$C$ (independent of $m$) such that \begin{equation}\label{induction}
|\nabla_\gamma^a \nabla_\Theta^b f_{N-1,m} (\gamma,\Theta) | \leq C \, m^{ {N -1 + |a|}} (1+|\Theta|^2)^{- (N-1) - \frac{ |b|}{2}} \, . \end{equation} We compute for any function~$\Psi(\gamma)$, \begin{eqnarray*}
^t{} {\mathcal L}_\alpha^1 \Psi(\gamma)& = & i \frac{ \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))\, \cdot \nabla_\gamma \Psi(\gamma)}{ 1+t | \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))|^2 }+ \frac{ 1 + i \Delta ( \Phi_\alpha ( Z , \frac \gamma m ))}{ 1+t | \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))|^2 }\Psi(\gamma) \\
&-& 2it \sum_{1\leq j, k\leq p} \frac{ \partial_{\gamma_j} \partial_{\gamma_k} ( \Phi_\alpha ( Z , \frac \gamma m ))\partial_{\gamma_j} ( \Phi_\alpha ( Z , \frac \gamma m ))\partial_{\gamma_k} ( \Phi_\alpha ( Z , \frac \gamma m )) }{ (1+t |\nabla_\gamma ( \Phi_\alpha ( Z , \frac \gamma m ))|^2)^2 }\Psi(\gamma) \, . \end{eqnarray*} We apply that formula to~$\Psi := f_{N-1,m} \big(\gamma,t^\frac12 \nabla_\gamma ( \Phi_\alpha ( Z , \frac \gamma m ))\big )
$ and estimating each of the three terms separately we find (using the fact that~$m \geq 1$),
$$
\begin{aligned}
\Big | ^t{} {\mathcal L}_\alpha^1 \Big ( f_{N-1,m} \big(\gamma,t^\frac12 \nabla_\gamma ( \Phi_\alpha ( Z , \frac \gamma m ))\big )\Big)\Big | &\leq C \big( 1+t | \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))|^2 \big)^{-1} \\
& \quad \times m^{ {N -1 +1 }} (1+t | \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))|^2)^{- (N-1) } \\
& + C \big( 1+t | \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))|^2 \big)^{-1} \\
& \quad \times m^{ {N -1 }} (1+t | \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))|^2)^{- (N-1) } \\
& + C t \, | \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))|^2 \big( 1+t | \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))|^2 \big)^{-2} \\
& \quad \times m^{ {N -1 }} (1+t | \nabla_\gamma( \Phi_\alpha ( Z , \frac \gamma m ))|^2)^{- (N-1) }
\end{aligned}
$$
thanks to the induction assumption~(\ref{induction}) along with~\eqref{formula} and the fact that on $ {\mathcal C}_\alpha(Z)$, all the derivatives of the function $\nabla_\gamma\big( \Phi_\alpha \big( Z , \frac \gamma m \big)\big)$ are uniformly bounded with respect to $\alpha$ and $Z$. A similar argument makes it possible to control derivatives in~$\gamma$ and~$\Theta$, so Lemma~{\rm\ref{transposeofLbis}} is proved. \end{proof}
\begin{proof}[Proof of Lemma~{\rm\ref{hermitederivatives}}] By definition of~$g_n$ and using the change of variable $$ \xi \mapsto \xi - \frac{\xi_1}2 $$ we recover the Wigner-type formula $$ g_n(\xi_1,\xi_2) = \int_{\mathop{\mathbb R\kern 0pt}\nolimits} e^{-i \xi_2 \xi} h_n \big (\xi + \frac{\xi_1}2 \big) h_n \big (\xi - \frac{\xi_1}2 \big) \, d\xi \, . $$ Then an easy computation shows that for all~$k$ $$
|( \xi_1 \partial_{\xi_1} + \xi_2 \partial_{\xi_2} )^kg_n(\xi_1,\xi_2) |
\leq \int_{\mathop{\mathbb R\kern 0pt}\nolimits}\Big |( \xi_1 \partial_{\xi_1} + \xi \partial_{\xi} + 1 )^k \Big( h_n \big (\xi + \frac{\xi_1}2 \big) h_n \big (\xi - \frac{\xi_1}2
\big)\Big) \Big | \, d\xi \, . $$ By the Cauchy-Schwarz inequality (and a change of variables to transform~$\xi + \frac{\xi_1}2$ and~$\xi - \frac{\xi_1}2$ into~$(\xi,\xi')$), it remains therefore to check that for all~$k$ $$
\|(\xi\partial_{\xi})^k h_n\|_{L^2(\mathop{\mathbb R\kern 0pt}\nolimits)} \leq C_k (n+1)^k \, . $$ This again reduces to checking that \begin{equation}\label{finalestimatereferee}
\|\xi^{2k} h_n\|_{L^2(\mathop{\mathbb R\kern 0pt}\nolimits)} + \| h_n^{(2k)}\|_{L^2(\mathop{\mathbb R\kern 0pt}\nolimits)} \leq C_k (n+1)^k \, . \end{equation} This estimate is a consequence of the identification of the domain of~$\sqrt H$ $$ D(\sqrt H ) = \Big \{ u \in L^2(\mathop{\mathbb R\kern 0pt}\nolimits) \, / \, \xi u \, , u' \in L^2(\mathop{\mathbb R\kern 0pt}\nolimits)\Big \} $$ which classically extends to powers of~$\sqrt H$ $$ D(H^\frac p2 ) = \Big \{ u \in L^2(\mathop{\mathbb R\kern 0pt}\nolimits) \, / \, \xi ^{p-\ell}u^{(\ell)} \in L^2(\mathop{\mathbb R\kern 0pt}\nolimits) \, , 0 \leq \ell \leq p\Big \} \, . $$
Then~(\ref{finalestimatereferee}) is finally obtained by applying this to~$p=2k$, recalling that~$H^k h_n = (2n+1)^k h_n$. Lemma~\ref{hermitederivatives} is proved. \end{proof}
\section{Optimality of the dispersive estimates}\label{optimality}
\noindent In this section, we first end the proof of Theorem~\ref{dispgrad} by proving the optimality of the dispersive estimates for groups satisfying Assumption~\ref{keyp}. Then, we prove Proposition~\ref{remnodisp}.
\subsection{Optimality for groups satisfying Assumption~\ref{keyp}}
Let us now end the proof of Theorem~\ref{dispgrad} by establishing the optimality of the dispersive estimate~\eqref{eq:gradeddispS}. We use the fact that there always exists $\lambda^*\in\Lambda$ such that
\begin{equation}\label{prop:lambda0}
\nabla_\lambda \zeta(0,\lambda^*)\not=0,
\end{equation}
where the function $\zeta$ is defined in~(\ref{eq:freqxy}). Indeed, if not, the map $\lambda\mapsto \zeta(0,\lambda)$ would be locally constant, which contradicts the fact that $\zeta(0,\cdot)$ is homogeneous of degree~$1$ and vanishes only at $\lambda=0$. We prove the following proposition, which yields the optimality of the dispersive estimate of Theorem~\ref{dispgrad}.
\begin{proposition}\label{prop:optimal}
Let $\lambda^*\in\Lambda$ satisfy~{\rm(\ref{prop:lambda0})}.
There is a function $g\in{\mathcal D}(\mathop{\mathbb R\kern 0pt}\nolimits^p)$ compactly supported in a connected open neighborhood of $\lambda^*$ in $\Lambda$, such that for the initial data~$f_0$ defined by \begin{equation}\label{deff0optimality} \forall (\lambda,\nu) \in\mathfrak r(\Lambda),\;\; {\mathcal F}(f_0) (\lambda, \nu) h_{\alpha,\eta(\lambda)} = 0 \, \, \hbox{for} \, \, \alpha \neq 0 \, \, \hbox{and} \, \, {\mathcal F}(f_0) (\lambda, \nu) h_{0,\eta(\lambda)} = g(\lambda) h_{0,\eta(\lambda)}\, , \end{equation}
there exist $c_0> 0$ and $x\in G$ such that
$$| {\rm e}^{-it\Delta_G} f_0(x)| \geqslant c_0\, |t|^{-{k+p-1\over 2}} \, .$$
\end{proposition}
\begin{proof} Let~$g$ be any smooth compactly supported function over~$\mathop{\mathbb R\kern 0pt}\nolimits^p$, and define~$f_0$ by~(\ref{deff0optimality}). For any point $x= e^X\in G$ of the form $X=(P=0,Q=0,Z,R)$, the inversion formula gives
$${\rm e}^{-it\Delta_G}f_0(x)= \kappa \, \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} {\rm e}^{it|\nu|^2+it\zeta(0,\lambda)-i\lambda(Z)-i\nu(R)}
g(\lambda)|{\mbox {Pf}} (\lambda)|d\nu d\lambda \, .$$ To simplify notation, we set $\zeta_0(\lambda):=\zeta(0,\lambda)$.
Setting $Z=tZ^*$ with~$Z^*:=\nabla_\lambda\zeta(0,\lambda^*)\not=0$, we get as in~(\ref{schrodispersion})
$$
\Big |
{\rm e}^{-it\Delta_G}f_0(x)
\Big | = c_1 |t|^{-{k\over 2}}
\Big | \int_{\lambda\in\mathop{\mathbb R\kern 0pt}\nolimits^p} {\rm e}^{it (\lambda\cdot Z^*-\zeta_0(\lambda))}
g(\lambda)|{\mbox {Pf}}(\lambda)| d\lambda
\Big |
\, $$
for some constant $c_1>0$.
Without loss of generality, we can assume
$$
\lambda^*=(1,0,\dots,0)
$$ (if not, we perform a change of variables~$\lambda\mapsto \Omega\lambda$ where $\Omega$ is a fixed orthogonal matrix), and we shall now perform a stationary phase argument in the variable $\lambda'$, where we have written~$\lambda=(\lambda_1,\lambda')$. For any fixed~$\lambda_1$, the phase $$\Phi_{\lambda_1}(\lambda',Z):= Z\cdot\lambda -\zeta_0(\lambda)$$ has a stationary point at $\lambda'$ if and only if $Z'=\nabla_{\lambda'}\zeta_0(\lambda)$ (with the same notation~$Z=(Z_1,Z')$). We observe that the homogeneity of the function $\zeta_0$ and the definition of $Z^*$ imply that $$Z^*=\nabla_\lambda \zeta_0(1,0,\dots,0)=\nabla_\lambda\zeta_0(\lambda_1,0,\dots,0),\;\;\forall \lambda_1\in\mathop{\mathbb R\kern 0pt}\nolimits \, , $$ hence the phase $\lambda'\mapsto\Phi_{\lambda_1} (\lambda',Z^*)$ has a stationary point at $\lambda'=0$.
\noindent From now on we choose~$g$ supported near those stationary points~$(\lambda_1,0)$, and vanishing in the neighborhood of any other stationary point.
\noindent Let us now study the Hessian of~$\Phi_{\lambda_1}$ at $\lambda'=0$. Again because of the homogeneity of the function $\zeta_0$, we have $$\left[{\rm Hess} \,\zeta_0(\lambda)\right]\lambda=0,\;\;\forall \lambda\in\mathop{\mathbb R\kern 0pt}\nolimits^p.$$ In particular, for all $\lambda_1\not=0$, ${\rm Hess}\,\zeta_0(\lambda_1,0,\cdots,0)\left(\lambda_1,0,\cdots,0\right)=0$ and the matrix ${\rm Hess}\,\zeta_0(\lambda_1,0,\cdots,0)$ in the canonical basis is of the form $${\rm Hess}\,\zeta_0(\lambda_1,0,\cdots,0)=\begin{pmatrix} 0 & 0 \\ 0 & {\rm Hess}_{\lambda',\lambda'}\,\zeta_0(\lambda_1,0,\cdots,0)\end{pmatrix}.$$ Using that ${\rm Hess}\,\zeta_0(\lambda_1,0,\cdots,0)$ is of rank $p-1$, we deduce that $ {\rm Hess}_{\lambda',\lambda'}\,\zeta_0(\lambda_1,0,\cdots,0)$ is also of rank~$p-1$ and we conclude by the stationary phase theorem (\cite{stein}, Chap. VIII.2), choosing~$g$ so that the remaining integral in~$\lambda_1$ does not vanish.
\end{proof}
\subsection{Proof of Proposition~\ref{remnodisp}}
Assume that $G$ is a step~2 stratified Lie group whose radical index is null and for which $\zeta( 0,\lambda)$ is a linear form on each connected component of the Zariski-open subset~$\Lambda$. Let~$g$ be a smooth, nonnegative and not identically zero function supported in one of the connected components of $\Lambda$, and define~$f_0$ by $$ {\mathcal F}(f_0) (\lambda) h_{\alpha,\eta(\lambda)} = 0 \, \, \hbox{for} \, \, \alpha \neq 0 \, \, \hbox{and} \, \, {\mathcal F}(f_0) (\lambda) h_{0,\eta(\lambda)} = g(\lambda) h_{0,\eta(\lambda)}\, .$$ By the inverse Fourier formula, if $x=e^X\in G$ is such that $X=(P=0,Q=0,tZ)$, then we have
$$ {\rm e}^{-it\Delta_G} f_0(x) = \kappa \, \int\,{\rm e}^{-i t\,\lambda(Z)} {\rm e}^{it \zeta(0, \lambda)}g(\lambda) \,|{\mbox {Pf}} (\lambda) | \, d\lambda \, .$$
Since $ \zeta(0, \lambda)$ is a linear form on each connected component of $\Lambda$, there exists $Z_0$ in~$\mathfrak z$ such that
$$\forall\lambda\in \mathfrak z^*\cap {\rm supp} \,g, \;\; -\lambda(Z_0)+ \zeta(0, \lambda) = 0\,.$$ As a consequence, choosing $Z=Z_0$, we obtain
$$ {\rm e}^{-it\Delta_G} f_0(x) = \kappa \, \int g(\lambda) \,|{\mbox {Pf}} (\lambda) | \, d\lambda \, \not=0,$$ which ends the proof of the result.
\appendix
\section{ On the inversion formula in Schwartz space}
\noindent This section is dedicated to the proof of the inversion formula in the Schwartz space ${\mathcal S}(G)$ (Proposition~\ref{inversioninS} page~\pageref{inversioninS}). \label{appendixinversion}
\begin{proof}
We first observe that to establish \eqref{inversionformula}, it suffices to prove that \begin{equation} \label{inverssimp}f(0)
= \kappa \, \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} {\rm{tr}} \, \Big({\mathcal F}(f)(\lambda,\nu) \Big)\, |{\mbox {Pf}} (\lambda) |\, d\nu\,d\lambda \,. \end{equation} Indeed, introducing the auxiliary function $g$ defined by $g(x') := f(x \cdot x')$ which obviously belongs to~${\mathcal S}(G)$ and satisfies ${\mathcal F}(g)(\lambda,\nu)= u^{\lambda,\nu}_{X(\lambda,x^{-1})} \circ {\mathcal F}(f)(\lambda,\nu)$, and assuming~(\ref{inverssimp}) holds, we get
\begin{eqnarray*} f(x)= g(0) &=& \kappa \, \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} {\rm{tr}} \, \Big( {\mathcal F}(g)(\lambda,\nu) \Big)\, |{\mbox {Pf}} (\lambda) |\, d\nu\,d\lambda \\
& =& \kappa \, \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} {\rm{tr}} \, \Big(u^{\lambda,\nu}_{X(\lambda,x^{-1})} {\mathcal F}(f)(\lambda,\nu) \Big)\, |{\mbox {Pf}} (\lambda) |\, d\nu\,d\lambda
\, , \end{eqnarray*} which is the desired result.
\noindent Let us now focus on~(\ref{inverssimp}).
In order to compute the right-hand side of Identity \refeq{inverssimp}, we introduce
\begin{eqnarray*}
A&:=&
\int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} {\rm{tr}} \, \Big( {\mathcal F}(f)(\lambda,\nu) \Big)\, |{\mbox {Pf}} (\lambda) |\, d\nu\,d\lambda \\
&= & \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} \int_{x \in G} \sum_{\alpha \in {\mathbb N}^d} \big( u^{\lambda,\nu}_{X(\lambda,x)} h_{\alpha,\eta(\lambda)} | h_{\alpha,\eta(\lambda)} \big)\, |{\mbox {Pf}} (\lambda) | \, f(x) \, d\mu(x) \, d\nu\,d\lambda \, ,
\end{eqnarray*}
with the notation of Section~\ref{Fourier}.
In order to carry on with the calculations, we need to resort to a Fubini argument, which is justified by the following bound:
\begin{equation}\label{hyp:inv}
\sum_{\al \in {\mathbb N}^d} \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} \|{\mathcal F}(f)(\lambda,\nu) h_{\alpha,\eta(\lambda)}\|_{L^2( \mathfrak p_\lambda)} |{\mbox {Pf}} (\lambda) |\, d\nu\,d\lambda\, < \infty \, . \end{equation} We postpone the proof of~(\ref{hyp:inv}) to the end of this section. Thanks to~(\ref{hyp:inv}), the order of integration does not matter and we can transform the expression of $A$: we use the fact that for any $\alpha \in {\mathbb N}^d$
$$ \big( u^{\lambda,\nu}_{X(\lambda,x)} h_{\alpha,\eta(\lambda)} | h_{\alpha,\eta(\lambda)} \big)= {\rm e}^{-i\nu(R) -i\lambda( Z)}\int_{ \mathop{\mathbb R\kern 0pt}\nolimits^d} {\rm e}^{-i \sum_j\eta_j(\lambda)\, (\xi_j+{1\over 2} P_j)\, Q_j} h_{\alpha,\eta(\lambda)}(P+ \xi)\, h_{\alpha,\eta(\lambda)}(\xi)d \xi \, ,$$ where we have identified ${\mathfrak p}_\lambda$ with $\mathop{\mathbb R\kern 0pt}\nolimits^d$, and this gives rise to
$$ \displaylines{
A= \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} \int_{x \in G} \int_{\xi \in \mathop{\mathbb R\kern 0pt}\nolimits^d} \, \sum_{\alpha \in {\mathbb N}^d} \, {\rm e}^{-i\nu( R) -i\lambda (Z)} {\rm e}^{-i \sum_j\eta_j(\lambda) \,(\xi_j+{1\over 2} P_j) \, Q_j} \, h_{\alpha,\eta(\lambda)}(P+ \xi)\, \cr
{} {} \qquad\qquad \qquad\qquad \qquad\qquad \qquad\qquad \times \, h_{\alpha,\eta(\lambda)}(\xi) \, |{\mbox {Pf}} (\lambda) | \, f(x) \, d\mu(x) \, d \xi \, d\nu\,d\lambda \, ,} $$ where we recall that $$ h_{\alpha,\eta(\lambda)}(\xi) = \prod_{j=1}^d h_{\alpha_j,\eta_j(\lambda)}(\xi_j) \quad \mbox{with} \quad h_{\alpha_j,\eta_j(\lambda)}(\xi_j)= \eta_j(\lambda)^{\frac 1 4} \, h_{\alpha_j} \Big(\sqrt{\eta_j(\lambda)} \,\xi_j\Big)\, .$$
\noindent
Performing the change of variables
$$
\left\{ \begin{array}{l} \wt \xi_j = \sqrt{\eta_j(\lambda)} \,\xi_j\\ \wt P_j = \sqrt{\eta_j(\lambda)} \,P_j\\ \wt Q_j = \sqrt{\eta_j(\lambda)} \,Q_j\, \end{array} \right. $$ for $j \in \{1,\dots, d\}$, we obtain, dropping the $\,\widetilde\,$ on the variables,
$$ \displaylines{
A= \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} \int_{(P,Q,R,Z)\in\mathop{\mathbb R\kern 0pt}\nolimits^{n}} \int_{\xi \in \mathop{\mathbb R\kern 0pt}\nolimits^d} \, \sum_{\alpha \in {\mathbb N}^d} \, {\rm e}^{-i\nu( R) -i\lambda ( Z)} {\rm e}^{-i \sum_\ell (\xi_\ell+{1\over 2} P_\ell) \cdot Q_\ell} \, \prod_{j=1}^d \, h_{\alpha_j}(P_j+ \xi_j)\, h_{\alpha_j}(\xi_j) \cr
{} {} \quad \, \times \, f\big(\eta^{-{1\over 2}}(\lambda)\,P, \eta^{-{1\over 2}}(\lambda) \,Q, R, Z\big) \, dP \,dQ \,dR\,dZ\, d \xi \, d\nu\,d\lambda \, ,} $$ with $\eta^{-{1\over 2}}(\lambda) \,P:=(\eta_1^{-{1\over 2}}(\lambda) \,P_1, \dots, \eta_d^{-{1\over 2}}(\lambda) \,P_d)$ and similarly for $Q$.
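\noindent Let us point out that the factor~$|{\mbox {Pf}} (\lambda) |$ has disappeared in the above expression of~$A$: with the normalizations used here one has $|{\mbox {Pf}} (\lambda) | = \prod_{j=1}^d \eta_j(\lambda)$, so that the Jacobian $\prod_{j=1}^d \eta_j(\lambda)^{-1}$ produced by the change of variables in~$(P,Q)$ exactly cancels the Pfaffian, while the normalization factors of~$h_{\alpha,\eta(\lambda)}$ are absorbed by the change of variable in~$\xi$.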
\noindent Then using the change of variables $\xi_j'= \xi_j+P_j$, for $j \in \{1,\dots, d\}$, gives $$ \displaylines{
A= \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} \int_{(\xi',Q,R,Z)\in\mathop{\mathbb R\kern 0pt}\nolimits^n} \int_{\xi \in \mathop{\mathbb R\kern 0pt}\nolimits^d} \, \sum_{\alpha \in {\mathbb N}^d} \, {\rm e}^{-i\nu(R) -i\lambda (Z)} {\rm e}^{-{i\over 2} \sum_\ell (\xi_\ell+\xi_\ell' ) \cdot Q_\ell} \, \prod_{j=1}^d \, h_{\alpha_j}( \xi'_j)\, h_{\alpha_j}(\xi_j) \cr
{} {} \quad \, \times \, f\big(\eta^{-{1\over 2}}(\lambda)\,(\xi'-\xi), \eta^{-{1\over 2}}(\lambda) \,Q, R, Z\big) \,d\xi' \, dQ \,dR\,dZ\, d \xi \, d\nu\,d\lambda \, .} $$
\noindent Because $(h_{\alpha})_{\alpha\in{\mathbf N}^d}$ is a Hilbert basis of $L^2(\mathop{\mathbb R\kern 0pt}\nolimits^d),$ we have for all $\varphi \in L^2(\mathop{\mathbb R\kern 0pt}\nolimits^d)$ $$ \varphi(\xi) = \sum_{\alpha \in {\mathbb N}^d} \, \int_{\xi' \in \mathop{\mathbb R\kern 0pt}\nolimits^d} \varphi(\xi') \, h_{\alpha}(\xi') \,d\xi' \,h_{\alpha}(\xi) \,,$$ which leads to $$ A= \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} \int_{(Q,R,Z)\in\mathop{\mathbb R\kern 0pt}\nolimits^{d+k+p}} \int_{\xi \in \mathop{\mathbb R\kern 0pt}\nolimits^d} \, {\rm e}^{-i\nu( R) -i\lambda (Z)} {\rm e}^{-i \xi \cdot Q} \, f\big(0, \eta^{-{1\over 2}}(\lambda) \,Q, R, Z\big) \, dQ \,dR\,dZ\, d \xi \, d\nu\,d\lambda \, .$$ Applying the Fourier inversion formula successively on $\mathop{\mathbb R\kern 0pt}\nolimits^d$, $\mathop{\mathbb R\kern 0pt}\nolimits^k$ and on $\mathop{\mathbb R\kern 0pt}\nolimits^p$ (and identifying ${\mathfrak r}(\Lambda)$ with~$\mathop{\mathbb R\kern 0pt}\nolimits^p\times\mathop{\mathbb R\kern 0pt}\nolimits^k$), we conclude that there exists a constant $\kappa>0$ such that $$ A= \kappa \, f(0) \,,$$ which ends the proof of \eqref{inverssimp}.
\noindent Let us conclude the proof by showing~(\ref{hyp:inv}). We choose $M$ a nonnegative integer. According to the obvious fact that the function~$(\mbox{Id}-\Delta_{G})^M f$ also belongs to~${\mathcal S}(G)$ (hence to $L^1(G)$), we get in view of Identity \eqref{formulafourierdelta}
$$ {\mathcal F}(f)(\lambda,\nu)h_{\alpha,\eta(\lambda)}= \Big(1+ |\nu|^2+\zeta(\alpha,\lambda)\Big)^{-M} {\mathcal F} \left( (\mbox{Id}-\Delta_{G})^M f\right)(\lam,\nu) h_{\alpha,\eta(\lambda)} \, .$$ In view of the definition of the Fourier transform on the group $G$, we thus have
$$\displaylines{\quad \| {\mathcal F}(f)(\lambda,\nu)h_{\alpha,\eta(\lambda)}\|_{L^2( \mathfrak p_\lambda)}^2 = \Big(1+ |\nu|^2+\zeta(\alpha,\lambda)\Big)^{-2M}
\cr
\times\!\! \int_{\mathfrak p_\lambda} \! \biggl( \int_{G} \!\!\! \big ((\mbox{Id}-\Delta_{G }\!\big)^M\! f(x))u^{\lambda,\nu}_{X(\lambda,x) } h_{\alpha,\eta(\lambda)}(\xi) \,d\mu(x) \overline{\int_{G} \!\!\!\big((\mbox{Id}\!-\!\Delta_{ G }\!\big)^M\! f(x'))u^{\lambda,\nu} _{X(\lambda,x') } h_{\alpha,\eta(\lambda)}(\xi)\, d\mu(x')\!}\biggr)d\xi\, .} $$ Now, by Fubini's theorem, we get
$$\displaylines{\quad \| {\mathcal F}(f)(\lambda,\nu)h_{\alpha,\eta(\lambda)}\|_{L^2( \mathfrak p_\lambda)}^2 = \Big(1+|\nu|^2+\zeta(\alpha,\lambda)\Big)^{-2M}
\cr
\times \int_{G} \int_{G}\! (\mbox{Id}-\Delta_{G }\!)^M f(x)
\overline{ (\mbox{Id}-\Delta_{G }\!)^Mf(x')} (
u^{\lambda,\nu}_{X(\lambda,x) } h_{\alpha,\eta(\lambda)}\!\mid\! u^{\lambda,\nu}_{X(\lambda,x') } h_{\alpha,\eta(\lambda)})_{L^2(\mathfrak p_\lambda)}\, d\mu(x) \, d\mu(x')\, .\quad}$$
Since the operators~$u^{\lambda,\nu}_{X(\lambda,x) } $ and~$u^{\lambda,\nu}_{X(\lambda,x') } $ are unitary on~$\mathfrak p_\lambda$ and the family~$(h_{\alpha,\eta(\lambda)})_{\alpha\in{\mathbb N}^d}$ is a Hilbert basis of~$\mathfrak p_\lambda$, we deduce that
$$\| {\mathcal F}(f)(\lambda,\nu)h_{\alpha,\eta(\lambda)}\|_{L^2( \mathfrak p_\lambda)} \leq \Big(1+ |\nu|^2+\zeta(\alpha,\lambda)\Big)^{-M} \|(\mbox{Id}- \Delta_{ G })^M f\|_{L^1(G)}\, .$$ Moreover,
$$ {\rm Card}\Bigl(\Bigl\{\alpha\in{\mathbb N}^d\,/\, |\alpha|=m\Bigr\}\Bigr)=\binom{m+d-1}{m} \leq C(m+1)^{d-1}\, , $$
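\noindent Indeed, $$ \binom{m+d-1}{m} = \prod_{k=1}^{d-1} \frac{m+k}{k} \leq (m+1)^{d-1} \, , $$ since $m+k \leq k(m+1)$ for every~$k \geq 1$.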
This ensures that $$ \longformule{
\sum_{\al \in {\mathbb N}^d} \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} \|{\mathcal F}(f)(\lambda,\nu) h_{\alpha,\eta(\lambda)}\|_{L^2( \mathfrak p_\lambda)} |{\mbox {Pf}} (\lambda) |\, d\nu\,d\lambda\, \lesssim \|(\mbox{Id}- \Delta_{G })^M f\|_{L^1(G)} } {{}
\times \sum _m (m+1)^{d-1}\int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda}\Big(1+|\nu|^2+\zeta(\alpha,\lambda)\Big)^{-M} |{\mbox {Pf}} (\lambda) |\, d\nu\,d\lambda\, .} $$ Hence taking $M=M_1+M_2,$ with $\displaystyle M_2 > \frac k 2$ implies that $$ \longformule{
\sum_{\al \in {\mathbb N}^d} \int_{\lambda\in\Lambda}\int_{\nu\in\mathfrak r^*_\lambda} \|{\mathcal F}(f)(\lambda,\nu) h_{\alpha,\eta(\lambda)}\|_{L^2( \mathfrak p_\lambda)} |{\mbox {Pf}} (\lambda) |\, d\nu\,d\lambda\, \lesssim \|(\mbox{Id}- \Delta_{G })^M f\|_{L^1(G)} } { {}
\times\sum _m (m+1)^{d-1}\int_{\lambda\in\Lambda}\Big(1+\zeta(\alpha,\lambda)\Big)^{-M_1} |{\mbox {Pf}} (\lambda) |\, d\lambda\, .} $$
Noticing that~$\zeta(\alpha,\lambda)=0$ if and only if $\lambda=0$ and using the homogeneity of degree~$1$ of $\zeta$ yields that there exists $c>0$ such that $\zeta(\alpha, \lambda )\geq c \, m\, | \lambda |$. Therefore, we can end the proof of~(\ref{hyp:inv}) by choosing~$M_1$ large enough and performing the change of variable~$\mu = m \, \lam $ in each term of the above series.
\noindent Proposition~\ref{inversioninS} is proved.
\end{proof}
\end{document} | arXiv |
Fractional matching
In graph theory, a fractional matching is a generalization of a matching in which, intuitively, each vertex may be broken into fractions that are matched to different neighbor vertices.
Definition
Given a graph G = (V, E), a fractional matching in G is a function that assigns, to each edge e in E, a fraction f(e) in [0, 1], such that for every vertex v in V, the sum of fractions of edges adjacent to v is at most 1:[1]
$\forall v\in V:\sum _{e\ni v}f(e)\leq 1$
A matching in the traditional sense is a special case of a fractional matching, in which the fraction of every edge is either 0 or 1: f(e) = 1 if e is in the matching, and f(e) = 0 if it is not. For this reason, in the context of fractional matchings, usual matchings are sometimes called integral matchings.
The size of an integral matching is the number of edges in the matching, and the matching number $\nu (G)$ of a graph G is the largest size of a matching in G. Analogously, the size of a fractional matching is the sum of fractions of all edges. The fractional matching number of a graph G is the largest size of a fractional matching in G. It is often denoted by $\nu ^{*}(G)$.[2] Since a matching is a special case of a fractional matching, for every graph G one has that the integral matching number of G is less than or equal to the fractional matching number of G; in symbols:
$\nu (G)\leq \nu ^{*}(G).$
A graph in which $\nu (G)=\nu ^{*}(G)$ is called a stable graph.[3] Every bipartite graph is stable; this means that in every bipartite graph, the fractional matching number is an integer and it equals the integral matching number.
In a general graph, $\nu (G)\geq {\frac {2}{3}}\nu ^{*}(G)$, with equality for example in the 3-cycle. The fractional matching number is either an integer or a half-integer.[4]
Matrix presentation
For a bipartite graph G = (X+Y, E), a fractional matching can be presented as a matrix with |X| rows and |Y| columns. The value of the entry in row x and column y is the fraction of the edge (x,y).
Perfect fractional matching
A fractional matching is called perfect if the sum of fractions of the edges adjacent to each vertex is exactly 1. The size of a perfect fractional matching is exactly |V|/2.
In a bipartite graph G = (X+Y, E), a fractional matching is called X-perfect if the sum of fractions of the edges adjacent to each vertex of X is exactly 1. The size of an X-perfect fractional matching is exactly |X|.
For a bipartite graph G = (X+Y, E), the following are equivalent:
• G admits an X-perfect integral matching,
• G admits an X-perfect fractional matching, and
• G satisfies the condition of Hall's marriage theorem.
The first condition implies the second because an integral matching is a fractional matching. The second implies the third because, for each subset W of X, the sum of fractions of the edges adjacent to W is |W|, so these edges are necessarily adjacent to at least |W| vertices of Y. By Hall's marriage theorem, the last condition implies the first one.[5]
In a general graph, the above conditions are not equivalent - the largest fractional matching can be larger than the largest integral matching. For example, a 3-cycle admits a perfect fractional matching of size 3/2 (the fraction of every edge is 1/2), but does not admit a perfect integral matching - the largest integral matching is of size 1.
Algorithmic aspects
A largest fractional matching in a graph can be easily found by linear programming, or alternatively by a maximum flow algorithm. In a bipartite graph, it is possible to convert a maximum fractional matching to a maximum integral matching of the same size. This leads to a simple polynomial-time algorithm for finding a maximum matching in a bipartite graph.[6]
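As an illustration, the linear program can be written down in a few lines. The following sketch uses Python with SciPy; the function name max_fractional_matching and the edge-list encoding are choices made for this example, not a standard API.

import numpy as np
from scipy.optimize import linprog

def max_fractional_matching(num_vertices, edges):
    # Maximize sum_e f(e) subject to: for every vertex v, the sum of f(e)
    # over the edges e incident to v is at most 1, and 0 <= f(e) <= 1.
    # linprog minimizes, so the objective is negated.
    m = len(edges)
    c = -np.ones(m)
    A = np.zeros((num_vertices, m))  # one degree constraint per vertex
    for j, (u, v) in enumerate(edges):
        A[u, j] = 1
        A[v, j] = 1
    b = np.ones(num_vertices)
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, 1)] * m, method="highs")
    return -res.fun, dict(zip(edges, res.x))

# The 3-cycle discussed above: the matching number is 1, while the
# fractional matching number is 3/2 (fraction 1/2 on every edge).
size, fractions = max_fractional_matching(3, [(0, 1), (1, 2), (2, 0)])
print(size)  # 1.5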
If G is a bipartite graph with |X| = |Y| = n, and M is a perfect fractional matching, then the matrix representation of M is a doubly stochastic matrix - the sum of elements in each row and each column is 1. Birkhoff's algorithm can be used to decompose the matrix into a convex sum of at most $n^{2}-2n+2$ permutation matrices. This corresponds to decomposing M into a convex sum of at most $n^{2}-2n+2$ perfect matchings.
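A sketch of Birkhoff's algorithm along these lines is shown below (Python with SciPy). It assumes the input is exactly doubly stochastic, so that a perfect matching on the support always exists by Hall's theorem; the tolerance handling is a simplification, and the code assumes that maximum_bipartite_matching with perm_type="column" returns, for each row i, the column matched to it.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def birkhoff_decomposition(M, tol=1e-9):
    # Repeatedly find a permutation inside the support of M and peel off
    # the smallest entry along it; each step zeroes at least one entry.
    M = M.astype(float).copy()
    n = M.shape[0]
    terms = []
    while M.max() > tol:
        support = csr_matrix(M > tol)
        perm = maximum_bipartite_matching(support, perm_type="column")
        weight = M[np.arange(n), perm].min()
        P = np.zeros_like(M)
        P[np.arange(n), perm] = 1.0
        terms.append((weight, P))
        M -= weight * P
    return terms  # at most n^2 - 2n + 2 terms, matching the bound above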
Maximum-cardinality fractional matching
A fractional matching of maximum cardinality (i.e., maximum sum of fractions) can be found by linear programming. There is also a strongly polynomial-time algorithm,[7] using augmenting paths, that runs in time $O(|V||E|)$.
Maximum-weight fractional matching
Suppose each edge on the graph has a weight. A fractional matching of maximum weight in a graph can be found by linear programming. In a bipartite graph, it is possible to convert a maximum-weight fractional matching to a maximum-weight integral matching of the same size, in the following way:[8]
• Let f be the fractional matching.
• Let H be a subgraph of G containing only the edges e with non-integral fraction, 0<f(e)<1.
• If H is empty, then we are done.
• If H has a cycle, then it must be even-length (since the graph is bipartite), so we can construct a new fractional matching f1 by transferring a small fraction ε from even edges to odd edges, and a new fractional matching f2 by transferring ε from odd edges to even edges. Since f is the average of f1 and f2, the weight of f is the average between the weight of f1 and of f2. Since f has maximum weight, all three matchings must have the same weight. There exists a choice of ε for which at least one of f1 or f2 has fewer non-integral fractions (see the sketch after this list). Continuing in the same way leads to an integral matching of the same weight.
• Suppose H has no cycle, and let P be a longest path in H. The fraction of every edge adjacent to the first or last vertex in P must be 0 (if it is 1, the first/last edge in P violates the fractional matching condition; if it is in (0,1), then P is not the longest). Therefore, we can construct new fractional matchings f1 and f2 by transferring ε from odd edges to even edges or vice versa. Again f1 and f2 must have maximum weight, and at least one of them has fewer non-integral fractions.
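The elementary step used twice in the argument above (shifting a fraction ε along an alternating sequence of edges without changing any vertex sum) can be sketched as follows for the even-cycle case; this is a toy illustration, and the edges are assumed to be listed in cyclic order.

def shift_along_even_cycle(f, cycle_edges, eps):
    # Add eps to even-indexed edges and subtract it from odd-indexed ones.
    # Each vertex of the cycle meets exactly one edge of each parity,
    # so every vertex sum is preserved.
    g = dict(f)
    for idx, e in enumerate(cycle_edges):
        g[e] += eps if idx % 2 == 0 else -eps
    return g

def one_rounding_step(f, cycle_edges):
    # The largest admissible shift: afterwards at least one fraction
    # reaches 0 or 1, so the set of non-integral edges strictly shrinks.
    eps = min(min(1 - f[e] for e in cycle_edges[0::2]),
              min(f[e] for e in cycle_edges[1::2]))
    return shift_along_even_cycle(f, cycle_edges, eps)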
Fractional matching polytope
Main article: Matching polytope
Given a graph G = (V,E), the fractional matching polytope of G is a convex polytope that represents all possible fractional matchings of G. It is a polytope in R|E| - the |E|-dimensional Euclidean space. Each point (x1,...,x|E|) in the polytope represents a matching in which the fraction of each edge e is xe. The polytope is defined by |E| non-negativity constraints (xe ≥ 0 for all e in E) and |V| vertex constraints (the sum of xe, for all edges e that are adjacent to a vertex v, is at most 1). In a bipartite graph, the vertices of the fractional matching polytope are all integral.
References
1. Aharoni, Ron; Kessler, Ofra (1990-10-15). "On a possible extension of Hall's theorem to bipartite hypergraphs". Discrete Mathematics. 84 (3): 309–313. doi:10.1016/0012-365X(90)90136-6. ISSN 0012-365X.
2. Liu, Yan; Liu, Guizhen (2002). "The fractional matching numbers of graphs". Networks. 40 (4): 228–231. doi:10.1002/net.10047. ISSN 1097-0037. S2CID 43698695.
3. Beckenbach, Isabel; Borndörfer, Ralf (2018-10-01). "Hall's and Kőnig's theorem in graphs and hypergraphs". Discrete Mathematics. 341 (10): 2753–2761. doi:10.1016/j.disc.2018.06.013. ISSN 0012-365X. S2CID 52067804.
4. Füredi, Zoltán (1981-06-01). "Maximum degree and fractional matchings in uniform hypergraphs". Combinatorica. 1 (2): 155–162. doi:10.1007/BF02579271. ISSN 1439-6912. S2CID 10530732.
5. "co.combinatorics - Fractional Matching version of Hall's Marriage theorem". MathOverflow. Retrieved 2020-06-29.
6. Gärtner, Bernd; Matoušek, Jiří (2006). Understanding and Using Linear Programming. Berlin: Springer. ISBN 3-540-30697-8.
7. Bourjolly, Jean-Marie; Pulleyblank, William R. (1989-01-01). "König-Egerváry graphs, 2-bicritical graphs and fractional matchings". Discrete Applied Mathematics. 24 (1): 63–82. doi:10.1016/0166-218X(92)90273-D. ISSN 0166-218X.
8. Vazirani, Umesh (2012). "Maximum Weighted Matchings" (PDF). U. C. Berkeley.
See also
• Fractional coloring
| Wikipedia |
\begin{document}
\title[Canonical embedding of an unramified morphism]{The canonical embedding of an unramified morphism in an \'etale morphism} \author{David Rydh}
\address{Department of Mathematics, University of California, Berkeley, 970 Evans Hall \#3840, Berkeley, CA 94720-3840 USA} \thanks{Supported by the Swedish Research Council.} \email{[email protected]} \date{2010-05-13} \subjclass[2000]{Primary 14A20} \keywords{unramified, \'etale, \'etale envelope, stack}
\begin{abstract} We show that every unramified morphism $X\to Y$ has a canonical and universal factorization $X\inj E_{X/Y}\to Y$ where the first morphism is a closed embedding and the second is \'{e}tale{} (but not separated). \end{abstract}
\maketitle
\begin{section}{Introduction} It is well-known that any unramified morphism $\map{f}{X}{Y}$ of schemes (or Deligne--Mumford stacks) is an \'{e}tale{}-local embedding, i.e., there exists a commutative diagram
\begin{equation}\label{E:local-embedding} \vcenter{\xymatrix{X'\ar[r]^{f'}\ar[d] & Y'\ar[d]\\
X\ar[r]^f & Y\ar@{}[ul]|\circ}} \end{equation}
where $f'$ is a closed embedding and the vertical morphisms are \'{e}tale{} and surjective. To see this, take \'{e}tale{} presentations $Y'\to Y$ and $X'\to X\times_Y Y'$ such that $X'$ and $Y'$ are schemes and then apply~\cite[Cor.~18.4.7]{egaIV}. This proof utterly fails if $Y$ is a stack which is not Deligne--Mumford and the existence of a diagram~\eqref{E:local-embedding} appears to be unknown in this case. Also, if we require $Y'\to Y$ to be separated, then in general there is no \emph{canonical} choice of the diagram~\eqref{E:local-embedding}.
The purpose of this article is to show that for an arbitrary unramified morphism of algebraic stacks, there is
a \emph{canonical} \'{e}tale{} morphism $E_{X/Y}\to Y$ and a closed embedding $X\inj E_{X/Y}$ over $Y$. If $\map{f}{X}{Y}$ is an unramified morphism of \emph{schemes} (or algebraic spaces), then $E_{X/Y}$ is an algebraic space.
\begin{remark}\label{R:immersion} If $\map{f}{X}{Y}$ is an \emph{immersion}, then there is a canonical factorization $X\inj U\to Y$ where $X\inj U$ is a closed immersion and $U\to Y$ is an open immersion. Here $U$ is the largest open neighborhood of $X$ such that $X$ is closed in $U$. Explicitly, $U=Y\setminus (\overline{X}\setminus X)$. This factorization commutes with flat base change if $f$ is quasi-compact but not with arbitrary base change unless $f$ is a closed immersion.
The canonical factorization that we will construct is slightly different and commutes with arbitrary base change but is not separated. For an immersion $\map{f}{X}{Y}$, the scheme $E_{X/Y}$ is the gluing of $U$ and $Y$ along the open subsets $U\setminus X=Y\setminus \overline{X}$. \end{remark}
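In symbols (merely restating the remark, with the notation above): for an immersion $\map{f}{X}{Y}$,
$$U=Y\setminus(\overline{X}\setminus X),\qquad E_{X/Y}=U\amalg_{\,U\setminus X\,=\,Y\setminus\overline{X}}\,Y,$$
where the second expression denotes the gluing of $U$ and $Y$ along the indicated common open subset.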
\begin{theorem}\label{T:main-theorem} Let $\map{f}{X}{Y}$ be an unramified morphism of algebraic stacks. Then there exists an \'{e}tale{} morphism $\map{e=e_f}{E_{X/Y}}{Y}$ together with a closed immersion $\injmap{i=i_f}{X}{E_{X/Y}}$ and an open immersion $\map{j=j_f}{Y}{E_{X/Y}}$ such that $f=e\circ i$, $\id{Y}=e\circ j$ and the complement of $i(X)$ is $j(Y)$. We have that:
\begin{enumerate} \item The triple $(e,i,j)$ is unique up to unique $2$-isomorphism, i.e., if $\map{e'}{E'}{Y}$ is an \'{e}tale{} morphism, $\injmap{i'}{X}{E'}$ is a closed immersion and $\map{j'}{Y}{E'}$ is an open immersion over $Y$ such that the complement of $i'(X)$ is $j'(Y)$, then there is an isomorphism $\map{\varphi}{E'}{E_{X/Y}}$ such that $e'=e\circ\varphi$, $i=\varphi\circ i'$ and $j=\varphi\circ j'$, and $\varphi$ is unique up to unique $2$-isomorphism. \label{TI:uniqueness}
\item Let $\map{g}{Y'}{Y}$ be any morphism and let $\map{f'}{X'}{Y'}$ be the pull-back of $f$ along $g$. Then the pull-backs of $e_f$, $i_f$ and $j_f$ along $g$ coincide with $e_{f'}$, $i_{f'}$ and $j_{f'}$.
\label{TI:base-change}
\item $e$ is an isomorphism if and only if $X=\emptyset$. \label{TI:iso}
\item $e$ is separated if and only if $f$ is \'{e}tale{} and separated. \label{TI:sep}
\item $e$ is universally closed (resp.\ quasi-compact, resp.\ representable) if and only if $f$ is so. In particular, $e$ is universally closed, quasi-compact and representable if $f$ is finite. \label{TI:uc,qc,repr}
\item $e$ is of finite presentation (resp.\ quasi-separated) if and only if $f$ is of constructible finite type (resp.\ quasi-separated and locally of constructible finite type). For the definition of the latter notions, see Appendix~\ref{A:constructible}. \label{TI:qs,finite-pres}
\item $e$ is a local isomorphism if and only if $f$ is a local immersion. \label{TI:loc-immersion}
\item If $\map{g}{V}{X}$ is an \'{e}tale{} morphism, then there exists a unique \'{e}tale{} morphism $\map{g_*}{E_{V/Y}}{E_{X/Y}}$ such that the pull-back of $i_{f}$ (resp.\ $j_{f}$) along $g_*$ is $i_{f\circ g}$ (resp.\ $j_{f\circ g}$). If $g$ is surjective (resp.\ representable, resp.\ an open immersion), then so is $g_*$. \label{TI:etale-pushfwd}
\item If $\map{g}{V}{X}$ is a closed immersion then there is a natural surjective morphism $\map{g^*}{E_{X/Y}}{E_{V/Y}}$ such that $i_{f\circ g}=g^*\circ i_f\circ g$ and $j_{f\circ g}=g^*\circ j_f$. The morphism $g^*$ is an isomorphism if and only if $g$ is a nil-immersion (i.e., a bijective closed immersion).
If $g$ is an open and closed immersion, then $g^*g_*=\id{E_{V/Y}}$.
\label{TI:closed-pullback}
\end{enumerate} \end{theorem}
We call the \'{e}tale{} morphism $\map{e}{E_{X/Y}}{Y}$ the \emph{\'{e}tale{} envelope} of $X\to Y$. Note that the fibers of $e$ coincide with the fibers of $X\amalg Y\to Y$. In Definition~\pref{D:etale-envelope:repr} (resp.~\pref{D:etale-envelope:general}) we give a functorial description of $E_{X/Y}$ in the representable (resp.\ general) case.
For the definitions of representable and unramified morphisms of stacks, see Appendices~\ref{A:stacks} and~\ref{A:unramified-and-etale}. If the reader does not care about stacks, then rest assured that any scheme (or algebraic space) is an algebraic stack and that any morphism of schemes (or algebraic spaces) is representable. For schemes (or algebraic spaces), unique up to unique $2$-isomorphism means unique up to unique isomorphism.
\begin{remark} Even if $\map{f}{X}{Y}$ is a morphism of schemes (as is the case if $Y$ is a scheme and $f$ is representable and separated), it is often the case that $E_{X/Y}$ is not a scheme but an algebraic space, cf.\ Example~\pref{E:alg-space}. However, if $f$ is a local immersion, then $E_{X/Y}$ is a scheme by~\ref{TI:loc-immersion}. \end{remark}
\begin{remark} For any representable morphism $\map{f}{X}{Y}$ locally of finite type one can define a natural operation $\map{f_{\#}}{\Setp(X)}{\Setp(Y)}$ on \'{e}tale{} sheaves of \emph{pointed sets} such that if $f$ is unramified, then the \'{e}tale{} envelope $E_{X/Y}$ is the sheaf $f_{\#}\{0,1\}_X$. Here $\{0,1\}_X$ denotes the constant sheaf of a pointed set with two elements. If $f$ is \'{e}tale{}, then $f_{\#}$ is left adjoint to the pull-back $f^{-1}$ of pointed sets and if $f$ is a monomorphism, then $f_{\#}=f_{!}$ is extension by zero. We do not develop the general theory of $f_{\#}$ in this article. \end{remark}
\begin{remark} Note that ``quasi-compact'' is equivalent to ``finite type'' for unramified morphisms. When $Y$ is non-noetherian, the question of finite presentation (or equivalently of quasi-separatedness) of $E_{X/Y}\to Y$ is somewhat delicate, cf.\ Appendix~\ref{A:constructible}. \end{remark}
We begin with a few examples of the \'{e}tale{} envelope in Section~\ref{S:examples}. The proof of Theorem~\pref{T:main-theorem} in the representable case is given in Section~\ref{S:representable-case} and the general case is dealt with in Section~\ref{S:general-case}. Some applications of the main theorem are outlined in Section~\ref{S:applications}.
In Appendix~\ref{A:stacks} we give precise meanings to ``algebraic space'', ``algebraic stack'' and ``representable''. In Appendix~\ref{A:unramified-and-etale} we define unramified and \'{e}tale{} morphisms of stacks and establish their basic properties.
Some limit results used in the non-noetherian case are given in Appendix~\ref{A:limits}.
Finally, in Appendix~\ref{A:constructible} we define the technical condition ``of constructible finite type'' which is only used to give a characterization of the unramified morphisms having a finitely presented \'{e}tale{} envelope in the non-noetherian case.
Theorem~\pref{T:main-theorem} was inspired by a similar result recently obtained by Anca and Andrei Musta\c{t}\v{a}~\cite{mustata-mustata_local-embeddings}. They study the case when $\map{f}{X}{Y}$ is a finite unramified morphism between proper integral noetherian Deligne--Mumford stacks and construct a stack $F_{X/Y}$ such that $F_{X/Y}\to Y$ is \'{e}tale{} and universally closed and such that $F_{X/Y}\times_Y f(X)$ is a union of closed substacks $\{F_i\}$ which admit \'{e}tale{} and universally closed morphisms $F_i\to X$. The stack $F_{X/Y}$ has an explicit groupoid description but a functorial interpretation is missing. In general, $F_{X/Y}$ is different from $E_{X/Y}$ and does not commute with arbitrary base change. \end{section}
\begin{section}{Examples}\label{S:examples}
\begin{example} If $\map{f}{X}{Y}$ is \'{e}tale{}, then $E_{X/Y}=X\amalg Y$. \end{example}
\begin{example} Let $Y$ be a scheme and let $X=\coprod_{i=1}^n X_i$ be the disjoint union of closed subschemes $X_i\inj Y$. Then $E_{X/Y}$ is a scheme and can be described as the gluing of $n+1$ copies of $Y$ as follows. Let $Y_i=Y$ for $i=1,\dots,n$. Glue each $Y_i$ to $j(Y)=Y$ along $Y\setminus X_i$. The resulting scheme is $E_{X/Y}$. Note that $Y_i\cap Y_j=Y\setminus (X_i\cup X_j)$. \end{example}
\begin{example}\label{E:node} The following example is a special case of the previous example. Let $Y=\Spec(k[x,y]/xy)$ be the union of the two coordinate axes in the affine plane and let $X=\A{1}\amalg \A{1}$ be the normalization of $Y$. Then $E_{X/Y}$ can be covered by three affine open subsets isomorphic to $Y$. If we denote these three subsets by $j(Y),Y_1,Y_2$, then $j(Y)\cap Y_1$ is the open subset $y\neq 0$, $j(Y)\cap Y_2$ is the open subset $x\neq 0$ and $Y_1\cap Y_2=\emptyset$. \end{example}
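Concretely, in the notation of Example~\pref{E:node} (a routine coordinate verification, not part of the original example): every chart is a copy of $Y=\Spec\bigl(k[x,y]/(xy)\bigr)$ and the two overlaps are the localizations
$$j(Y)\cap Y_1=\Spec\bigl((k[x,y]/(xy))_y\bigr)\iso \Spec(k[y,y^{-1}]),\qquad j(Y)\cap Y_2\iso \Spec(k[x,x^{-1}]),$$
since inverting $y$ forces $x=0$ in $k[x,y]/(xy)$, and symmetrically for $x$.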
\begin{example}\label{E:nodal-cubic} Let $Y$ be a nodal cubic curve in $\P{2}$ and let $\map{f}{X}{Y}$ be the normalization. Let $0\in Y$ be the node and let $\{+1,-1\}\subseteq X$ be its preimage. The scheme $E_{X/Y}$ has two irreducible components $X$ and $\overline{j(Y)}$ and $\overline{j(Y)}$ is isomorphic to the gluing of $Y$ with $X$ along $Y\setminus \{0\}$ and $X\setminus \{+1,-1\}$. The scheme $E_{X/Y}$ is covered by two open separated subschemes $j(Y)$ and $U$. The open subset $U=X_1\cup X_2$ is the union of two copies of $X$, the first is $i(X)$ and the second is $\overline{j(Y)}\setminus \{0\}$, such that $\pm 1\in X_1$ is identified with $\mp 1\in X_2$. The intersection of $j(Y)$ and $U$ is $j(Y)\setminus 0=X_2\setminus \{+1,-1\}$. \end{example}
\begin{example}\label{E:alg-space} Let $Y$ be an irreducible scheme, let $Z\inj Y$ be an irreducible closed subscheme, $Z\neq Y$, and let $\map{g}{X}{Z}$ be a non-trivial \'{e}tale{} double cover. Then $E_{X/Y}$ is an algebraic space which is not a scheme. In fact, let $E=E_{X/Y}\setminus j(Z)$. Then $E\subseteq E_{X/Y}$ is open and
$\map{e|_E}{E}{Y}$ is universally closed, restricts to an isomorphism outside $Z$ and coincides with $g$ over $Z$. If $\xi$ is the generic point of $Z$, then $E_\xi=\{\eta\}$ where $\eta$ is the generic point of $X$. If $E$ were a scheme, then $E\times_Y \Spec(\sO_{Y,\xi})$ would be a local scheme with closed point $\eta$ and in particular separated. This would imply that $E\times_Y \Spec(\sO_{Y,\xi})\to \Spec(\sO_{Y,\xi})$ is finite and \'{e}tale{}, hence of constant rank since $\Spec(\sO_{Y,\xi})$ is irreducible and thus connected.
But $E\to Y$ has generic rank $1$ and special rank $2$, a contradiction. \end{example}
\tikzset{graph norm/.style={line width=1 pt}} \tikzset{graph border/.style={line width=0.4 pt}} \tikzset{graph over/.style={double distance=1 pt, line width=0.8 pt, draw=white,double=black}}
\noindent \begin{tikzpicture}[scale=1.9]
\draw[graph norm] plot[variable=\t,smooth,domain=-1:1] ({\t},{0.5*\t-0.1*exp(-5.0 * \t^2)^8});
\draw[graph norm,red] plot[variable=\t,domain=-1:1] ({\t},{-0.5*\t-0.1});
\draw[graph over] plot[variable=\t,domain=-0.15:0] ({\t},{0.5*\t});
\draw[graph norm] plot[variable=\t,domain=-1:1] ({\t},{0.5*\t});
\draw[graph over] plot[variable=\t,smooth,domain=-1:1] ({\t},{-0.5*\t+0.1*exp(-5.0 * \t^2)^8});
\draw[graph over] plot[variable=\t,domain=0.05:0.15] ({\t},{-0.5*\t});
\draw[graph norm] plot[variable=\t,domain=-1:1] ({\t},{-0.5*\t});
\draw[graph over,double=red] plot[variable=\t,domain=-1:-0.05] ({\t},{0.5*\t+0.1});
\draw[graph norm,red] plot[variable=\t,domain=-1:1] ({\t},{0.5*\t+0.1});
\node [red,above,left] at (0.45,0.5) {$i(X)$};
\node [right] at (0.3,0) {$\overline{j(Y)}$};
\node [below, text centered] at (0,-1) {Example~\pref{E:node}};
\begin{scope}[xshift=2.2cm]
\draw[graph norm] plot[variable=\t,smooth,domain=0:1.33] ({\t^2-1},{0.5*(\t^3-\t)-0.1*exp(-5.0 * (\t^2-1)^2)^10});
\draw[graph norm,red] plot[variable=\t,smooth,domain=-1.3:0.1] ({\t^2-0.9},{0.5*(\t^3-0.9*\t)-0.1*(-\t)});
\draw[graph over] plot[variable=\t,smooth,domain=0.92:1] ({\t^2-1},{0.5*(\t^3-\t)});
\draw[graph norm] plot[variable=\t,smooth,domain=-0.1:1.33] ({\t^2-1},{0.5*(\t^3-\t)});
\draw[graph over] plot[variable=\t,smooth,domain=-1.33:0] ({\t^2-1},{0.5*(\t^3-\t)+0.1*exp(-5.0 * (\t^2-1)^2)^10});
\draw[graph over] plot[variable=\t,smooth,domain=-1.02:-1.05] ({\t^2-1},{0.5*(\t^3-\t)});
\draw[graph norm] plot[variable=\t,smooth,domain=-1.33:0.1] ({\t^2-1},{0.5*(\t^3-\t)});
\draw[graph over] plot[variable=\t,smooth,domain=0:0.92] ({\t^2-0.9},{0.5*(\t^3-0.9*\t)+0.1*(\t)});
\draw[graph norm,red] plot[variable=\t,smooth,domain=-0.1:1.3] ({\t^2-0.9},{0.5*(\t^3-0.9*\t)+0.1*(\t)});
\node [red,above,left] at (0.45,0.5) {$i(X)$};
\node [right] at (0.3,0) {$\overline{j(Y)}$};
\node [below, text centered] at (0,-1) {Example~\pref{E:nodal-cubic}}; \end{scope}
\begin{scope}[xshift=4.4cm]
\draw[graph border] plot[variable=\t,samples=4,domain=20:380] ({1.2*cos(\t)},{sin (\t)*0.8});
\draw[graph border] plot[variable=\t,smooth,domain=0:360] ({0.5*cos(\t)},{0.5*sin(\t)*0.6});
\draw[graph over,double=red] plot[variable=\t,smooth,domain=0:360] ({0.5*cos(\t-110)},{0.5*sin(\t-110)*0.6+0.2-0.1*cos(\t/2)});
\draw[graph over,double=red] plot[variable=\t,smooth,domain=360:720] ({0.5*cos(\t-110)},{0.5*sin(\t-110)*0.6+0.2-0.1*cos(\t/2)});
\node [red,right] at (0.5,0.15) {$i(X)$};
\node [below] at (0.3,-0.35) {$j(Y)$};
\node [left, thin] at (-0.45,-0.1) {$j(Z)$};
\node [below, text centered] at (0,-1) {Example~\pref{E:alg-space}}; \end{scope} \end{tikzpicture}
\end{section}
\begin{section}{The representable case}\label{S:representable-case} In this section we prove Theorem~\pref{T:main-theorem} for representable unramified morphisms.
\begin{definition}\label{D:etale-envelope:repr} Let $\map{f}{X}{Y}$ be an unramified morphism of algebraic spaces. We define a contravariant functor $\map{E_{X/Y}}{\Sch_{/Y}}{\Set}$ as follows. For any scheme $T$ and morphism $T\to Y$, we let $E_{X/Y}(T)$ be the set of commutative diagrams
$$\xymatrix{& X\times_Y T\ar[d]^{\pi_2}\\ W\ar@{(->}[r]\ar[ur] & T}$$
such that $W\to X\times_Y T$ is an open immersion and $W\to T$ is a closed immersion. Pull-backs are defined by pulling back such diagrams. \end{definition}
The presheaf $E_{X/Y}$ is a presheaf of \emph{pointed sets}. The distinguished element of $E_{X/Y}(T)$ is given by $W=\emptyset$.
It is also naturally a presheaf in partially ordered sets and if $f$ is separated, then any two elements $W_1,W_2\in E_{X/Y}(T)$ have a greatest lower bound given by $W_1\cap W_2$.
By fpqc-descent of open subsets and of closed immersions, we have that $E_{X/Y}$ is a sheaf in the fpqc topology. Let $E_{X/Y,\text{\'et}}$ denote the restriction of $E_{X/Y}$ to the small \'{e}tale{} site on $Y$ so that $E_{X/Y,\text{\'et}}$ is an \'{e}tale{} sheaf. The first goal is to show that $E_{X/Y}$ is \emph{locally constructible}, i.e., that $E_{X/Y}$ is the extension of $E_{X/Y,\text{\'et}}$ to the big \'{e}tale{} site.
\begin{lemma}\label{L:etale-envelope-is-loc-of-fp} The functor $E_{X/Y}$ is \emph{locally of finite presentation}, i.e., for every inverse limit of affine schemes $T=\varprojlim T_\lambda$ over $Y$ we have that
$$\varinjlim_{\lambda} E_{X/Y}(T_\lambda)\to E_{X/Y}(T)$$
is bijective. \end{lemma} \begin{proof} An element of $E_{X/Y}(T)$ is an open immersion $\map{w}{W}{X\times_Y T}$ such that $\map{\pi_2\circ w}{W}{T}$ is a closed immersion. As $w$ is locally of finite presentation and $W$ is affine, there is by Proposition~\pref{P:semi-std-limits} an \'{e}tale{} morphism $\map{w_\lambda}{W_\lambda}{X\times_Y T_\lambda}$ such that $W_\lambda$ is quasi-compact and quasi-separated and the pull-back of $w_\lambda$ along $T\to T_\lambda$ is $w$. After increasing $\lambda$ we can also assume that the morphism $\map{\pi_2\circ w_\lambda}{W_\lambda}{T_\lambda}$ is a closed immersion by Proposition~\pref{P:limit-of-ft+qs}. Then $w_\lambda$ is an \'{e}tale{} monomorphism and hence an open immersion. The open immersion $w_\lambda$ determines an element of $E_{X/Y}(T_\lambda)$ which maps to $w$ so the map in the lemma is surjective.
That the map is injective follows immediately from~\cite[Thm.~8.8.2 (i)]{egaIV} since if $\map{w_\lambda}{W_\lambda}{X\times_Y T_\lambda}$ is an object of $E_{X/Y}(T_\lambda)$ then $W_\lambda$ is quasi-compact and quasi-separated and $w_\lambda$ is locally of finite presentation. \end{proof}
The following lemma is well-known for separated unramified morphisms.
\begin{lemma}\label{L:unramified-over-strictly-henselian} Let $S=\Spec(A)$ be the spectrum of a strictly henselian local ring with closed point $s$, let $X$ be an algebraic space and let $X\to S$ be an unramified morphism.
\begin{enumerate} \item Let $\map{x}{\Spec(k(s))}{X_s}$ be a point in the closed fiber. Then the henselian local scheme $X(x):=\Spec(\sO_{X,x})$ is an open subscheme of $X$ and $X(x)\to S$ is a closed immersion. In particular, $X=X_1\cup X_2$ is a union of open subspaces where $X_1$ is a scheme and $X_2\cap X_s=\emptyset$.
\item There is a one-to-one correspondence between points of $|X_s|$ and non-empty open subspaces $W\subseteq X$ such that $W\to S$ is a closed immersion. This correspondence takes $x\in |X_s|$ to $X(x)\subseteq X$ and
$W\subseteq X$ to $W\cap |X_s|$. \end{enumerate}
\end{lemma} \begin{proof} Let $V\to X$ be an \'{e}tale{} presentation with $V$ a separated scheme and choose a lifting $\map{v}{\Spec(k(s))}{V_s}$ of $x$. Then $V_1=\Spec(\sO_{V,v})\iso X(x)$ is an open and closed neighborhood of $v$ and $V_1\to S$ is finite and hence a closed immersion. It follows that $X(x)\iso V_1\to X$ is an open immersion. The second statement follows immediately from the first.
\end{proof}
\begin{lemma}\label{L:points-of-small-E} Let $\map{f}{X}{Y}$ be an unramified morphism of algebraic spaces and let $\overline{y}\to Y$ be a geometric point. The stalk
$(E_{X/Y,\text{\'et}})_{\overline{y}}$ equals $|X_{\overline{y}}|\cup \{\emptyset\}$
where $|X_{\overline{y}}|$ is the underlying set of the geometric fiber $X_{\overline{y}}=X\times_Y \Spec(k(\overline{y}))$.
\end{lemma} \begin{proof} Let $Y(\overline{y})=\Spec(\sO_{Y,\overline{y}})$ denote the strict henselization of $Y$ at $\overline{y}$.
We have that $(E_{X/Y,\text{\'et}})_{\overline{y}}=\varinjlim_{U} E_{X/Y}(U)$ where the limit is over all \'{e}tale{} neighborhoods $U\to Y$ of $\overline{y}$. The induced map $(E_{X/Y,\text{\'et}})_{\overline{y}}\to E_{X/Y}\bigl(Y(\overline{y})\bigr)$ is a bijection since the functor $E_{X/Y}$ is locally of finite presentation. The latter set equals
$|X_{\overline{y}}|\cup \{\emptyset\}$ by Lemma~\pref{L:unramified-over-strictly-henselian} (ii). \end{proof}
\begin{lemma}\label{L:E-is-loc-constructible} The sheaf $E_{X/Y}$ is \emph{locally constructible}, i.e., for any scheme $T$ and morphism $\map{\pi}{T}{Y}$, there is a natural isomorphism $\pi^{-1}{E_{X/Y,\text{\'et}}}\to E_{X\times_Y T/T,\text{\'et}}$. \end{lemma} \begin{proof} There is a natural transformation $E_{X/Y,\text{\'et}}\to \pi_{*}{E_{X\times_Y T/T,\text{\'et}}}$ and hence by adjunction a natural transformation $\map{\varphi}{\pi^{-1}{E_{X/Y,\text{\'et}}}}{E_{X\times_Y T/T,\text{\'et}}}$. It is enough to verify that $\varphi$ is an isomorphism on geometric points. This follows from Lemma~\pref{L:points-of-small-E}. \end{proof}
\begin{proposition}\label{P:etale-envelope-is-representable} The sheaf $E_{X/Y}$ is an algebraic space and the natural morphism $\map{e}{E_{X/Y}}{Y}$ is \'{e}tale{} and representable. \end{proposition} \begin{proof} Indeed, this statement is equivalent to Lemma~\pref{L:E-is-loc-constructible}, cf.~\cite[Exp.\ IX, pf.\ Prop.\ 2.7]{sga4} or~\cite[Ch.~V, Thm.~1.5]{milne_etale_coh}. The space $E_{X/Y}$ is of finite presentation over $Y$ if and only if the sheaf $E_{X/Y}$ is constructible. \end{proof}
\begin{remark} The algebraicity of $E_{X/Y}$ can also be shown as follows (and this is essentially the method used in the following section). The question is local on $Y$ so we can assume that $Y$ is affine and choose a diagram~\eqref{E:local-embedding} as in the beginning of the introduction. It can then be shown that there is an \'{e}tale{} representable and surjective morphism $E_{X'/Y'}\to E_{X/Y}$ and that $E_{X'/Y'}$ is represented by the scheme given as the gluing of two copies of $Y'$ along $Y'\setminus X'$. Lemmas~\pref{L:etale-envelope-is-loc-of-fp}--\pref{L:E-is-loc-constructible} are corollaries of this result and we do not need to use Appendix~\ref{A:limits}.
\end{remark}
The distinguished section of $E_{X/Y}(Y)$, corresponding to $W=\emptyset$, gives a section $j$ of $\map{e}{E_{X/Y}}{Y}$. As the diagonal of $\map{f}{X}{Y}$ is open, we have a morphism $\map{i}{X}{E_{X/Y}}$ corresponding to the diagonal $\{X\to X\times_Y X\}\in E_{X/Y}(X)$.
\begin{lemma}\label{L:etale-envelope-satisfies-conditions} The morphism $\map{i}{X}{E_{X/Y}}$ is a closed immersion and $E_{X/Y}\setminus i(X)=j(Y)$. \end{lemma} \begin{proof} Let $T$ be a $Y$-scheme and let $\map{g}{T}{E_{X/Y}}$ be a morphism. To show that $i$ is a closed immersion, it is enough to show that the pull-back of $i$ along $g$ is a closed immersion. Let $\map{w}{W}{X\times_Y T}$ be the open immersion corresponding to $g$ so that $\map{\pi_2\circ w}{W}{T}$ is a closed immersion. Then the squares
$$\vcenter{ \xymatrix{W\ar[r]^{\pi_1\circ w}\ar@{(->}[d]^{\pi_2\circ w} & X\ar[d]^{i}\\ T\ar[r]^g & E_{X/Y}}}\quad\text{and}\quad \vcenter{ \xymatrix{T\setminus W\ar[r]\ar[d] & Y\ar[d]^{j}\\ T\ar[r]^g & E_{X/Y}}}$$
are commutative. The verification that these squares are cartesian is straightforward. \end{proof}
\begin{lemma}\label{L:etale-envelope-is-unique} The triple $\map{e}{E_{X/Y}}{Y}$, $\map{i}{X}{E_{X/Y}}$, $\map{j}{Y}{E_{X/Y}}$, is determined up to unique isomorphism by the condition that $E_{X/Y}\setminus i(X)=j(Y)$. \end{lemma} \begin{proof} Let $\map{e'}{E'}{Y}$, $\map{i'}{X}{E'}$ and $\map{j'}{Y}{E'}$ be another triple of an \'{e}tale{} morphism, a closed immersion and an open immersion such that $E'\setminus i'(X)=j'(Y)$. There is only one possible morphism $\map{\varphi}{E'}{E_{X/Y}}$ such that $i=\varphi\circ i'$ and $j=\varphi\circ j'$, since the graph of $\varphi$ --- an open subset of $E'\times_Y E_{X/Y}$ --- would be given as the union of the images of $\map{(i',i)}{X}{E'\times_Y E_{X/Y}}$ and $\map{(j',j)}{Y}{E'\times_Y E_{X/Y}}$.
The graph of the map $i'$ determines an element of $E_{X/Y}(E')$, i.e., a morphism $\map{\varphi}{E'}{E_{X/Y}}$, such that $i=\varphi\circ i'$ and $j=\varphi\circ j'$. As $\varphi$ is a bijective \'{e}tale{} monomorphism, it is an isomorphism. \end{proof}
\begin{proof}[Proof of Theorem~\pref{T:main-theorem} (representable case)] We postpone the proof of the existence and uniqueness of $E_{X/Y}$ for non-representable morphisms $\map{f}{X}{Y}$ to the following section. Similarly, for now, we only prove the functorial properties \ref{TI:etale-pushfwd} and \ref{TI:closed-pullback} in the representable case.
The existence of $\map{e}{E_{X/Y}}{Y}$, $i$ and $j$ with the required properties, for an unramified morphism $\map{f}{X}{Y}$ of algebraic spaces, follows from Proposition~\pref{P:etale-envelope-is-representable} and Lemma~\pref{L:etale-envelope-satisfies-conditions}. The triple $(e,i,j)$ is unique with these properties by Lemma~\pref{L:etale-envelope-is-unique}. That the triple commutes with base change follows from the uniqueness or directly from the functorial description.
If $Y$ is an algebraic stack and $\map{f}{X}{Y}$ is a representable unramified morphism, then we construct the representable and \'{e}tale{} morphism $E_{X/Y}\to Y$ locally on $Y$~\cite[Ch.~14]{laumon}. We can also treat $E_{X/Y}$ as a cartesian lisse-\'{e}tale{} sheaf of sets on $Y$.
This settles~\ref{TI:uniqueness} and~\ref{TI:base-change} in the representable case. \ref{TI:iso} is trivial.
\ref{TI:sep} If $E_{X/Y}\to Y$ is separated then $j$ is closed and $i$ is open and it follows that $f$ is \'{e}tale{} and separated. If $f$ is \'{e}tale{} then $E_{X/Y}=X\amalg Y$ and $E_{X/Y}\to Y$ is separated if and only if $f$ is separated.
\ref{TI:uc,qc,repr} That $E_{X/Y}\to Y$ is universally closed (resp.\ quasi-compact, resp.\ representable) if and only if $f$ is so, follows from the fact that $i$ is a closed immersion and that $i\amalg j$ is a surjective monomorphism (hence stabilizer preserving).
\ref{TI:qs,finite-pres} If $\map{e}{E_{X/Y}}{Y}$ is quasi-separated then $j$ is quasi-compact so that $i$ is of constructible finite type by Proposition~\pref{P:closed-imm-constructible}. It follows that $f=e\circ i$ is quasi-separated and locally of constructible finite type. Conversely, if $f$ is quasi-separated and locally of constructible finite type, then so is $i$ by Proposition~\pref{P:loc-constructible-type}. Hence $j$ is quasi-compact and, a fortiori, so is $\map{i\amalg j}{X\amalg Y}{E_{X/Y}}$. As $f\amalg \id{Y}=e\circ (i\amalg j)$ is quasi-separated it follows that $\map{e}{E_{X/Y}}{Y}$ is quasi-separated. Finally, note that $e$ is finitely presented if and only if $e$ is quasi-compact and quasi-separated and that $f$ is of constructible finite type if and only if $f$ is quasi-compact, quasi-separated and locally of constructible finite type.
\ref{TI:etale-pushfwd} and \ref{TI:closed-pullback} (representable case) Let $\map{g}{V}{X}$ be \'{e}tale{} (resp.\ a closed immersion). We will construct a morphism $\map{g_*}{E_{V/Y}}{E_{X/Y}}$ (resp.\ $\map{g^*}{E_{X/Y}}{E_{V/Y}}$) using the functorial description.
In the \'{e}tale{} case, an element of $E_{V/Y}(T)$ corresponding to an open subspace $W\subseteq V\times_Y T$ is mapped to the element corresponding to the composition $W\to V\times_Y T\to X\times_Y T$. This composition, a priori only \'{e}tale{}, is an open immersion since $W\to T$ is a closed immersion. That the pull-back of $i_X$ (resp.\ $j_X$) along $g_*$ is $i_V$ (resp.\ $j_V$) is easily verified. If $g$ is an open immersion, then $g_*$ is a monomorphism and hence an open immersion.
In the case of a closed immersion, an element of $E_{X/Y}(T)$ corresponding to an open subspace $W\subseteq X\times_Y T$ is mapped to the pull-back $g_T^{-1}W\subseteq V\times_Y T$. If $\map{y}{\Spec(k)}{Y}$ is a point, then the morphism $\map{g^*_y}{E_{X_y/y}=X_y\cup \{y\}}{E_{V_y/y}=V_y\cup \{y\}}$ is an isomorphism over the open and closed subscheme $V_y\cup \{y\}$ and maps $X_y\setminus V_y$ onto the distinguished point $y$. It follows that $i_V=g^*\circ i_X\circ g$, that $j_V=g^*\circ j_X$, that $g^*$ is surjective and that $g^*$ is a monomorphism if and only if $g$ is bijective.
\ref{TI:loc-immersion} If $\map{e}{E_{X/Y}}{Y}$ is a local isomorphism, then $f=e\circ i$ is a local immersion. Conversely, assume that $f$ is a local immersion. The question whether $e$ is a local isomorphism is Zariski-local on $E_{X/Y}$ and $Y$. We can thus, using~\ref{TI:etale-pushfwd}, assume that $f$ is a closed immersion. Then $E_{X/Y}=Y\cup_{Y\setminus X} Y\to Y$ is a local isomorphism.
\end{proof}
\end{section}
\begin{section}{The general case}\label{S:general-case} In this section we prove Theorem~\pref{T:main-theorem} for general unramified morphisms of stacks.
\begin{definition}\label{D:etale-envelope:general} If $\map{f}{X}{Y}$ is any (not necessarily representable) unramified morphism, then we define a stack $E_{X/Y}$ over $\Sch_{/Y}$ (with the \'{e}tale{} topology) as follows. The objects of the category $E_{X/Y}$ are $2$-commutative diagrams
$$\xymatrix{W\ar[r]^p\ar@{(->}[d]^q\drtwocell<\omit>{\varphi} & X\ar[d]\\ T\ar[r] & Y}$$
such that $T$ is a scheme, $\map{(p,\varphi,q)}{W}{X\times_Y T}$ is \'{e}tale{} and $q$ is a closed immersion. Morphisms $(p',\varphi',q')\to (p,\varphi,q)$ are $2$-commutative diagrams
$$\xymatrix{W'\ar[r]\rruppertwocell<8>^{p'}{<-2.5>}\ar@{(->}[d]^{q'}\drtwocell<\omit>
& W\ar[r]_p\ar@{(->}[d]^q\drtwocell<\omit>{\varphi} & X\ar[d]\\
T'\ar[r]\rrlowertwocell<-8>{<2.2>} & T\ar[r] & Y}$$
such that the left square is $2$-cartesian and the pasting of the diagram is $\varphi'$. The functor $E_{X/Y}\to \Sch_{/Y}$ is the functor mapping the diagrams above onto their bottom rows. By \'{e}tale{} descent, the category $E_{X/Y}$, which is fibered in groupoids, is a stack in the \'{e}tale{} topology. \end{definition}
\begin{lemma} Let $\injmap{q}{W}{T}$ be a closed immersion and let $Z\to W$ be an \'{e}tale{} morphism of stacks. Then $q_*Z\to T$ is \'{e}tale{}. If $Z\to W$ is representable (resp.\ surjective, resp.\ an open immersion) then so is $q_*Z\to T$. Here $q_*Z$ denotes the stack over $\Sch_{/T}$ which associates to a scheme $T'\in \Sch_{/T}$ the groupoid $\mathbf{Hom}_W(W\times_T T',Z)$. \end{lemma} \begin{proof} The question is fppf-local on $T$ and we can thus assume that $T$ is a scheme. Then $Z$ is Deligne--Mumford and we can pick an \'{e}tale{} presentation $U\to Z$. It is enough to show that $q_*U\to q_*Z$ and $q_*U\to T$ are \'{e}tale{} and representable and that the first map is surjective. We can thus assume that $Z\to W$ is representable. Then $Z$ is a locally constructible sheaf and it follows that $q_*Z$ is locally constructible by the proper base change theorem, i.e., $q_*Z\to T$ is \'{e}tale{} and representable.
If $Z\to W$ is surjective, then so is $q_*Z\to T$. Indeed, this can be checked on stalks. Let $t\in T$ be a point. If $t\in W$, then $(q_*Z)_{\overline{t}}=Z_{\overline{t}}\neq \emptyset$. If $t\notin W$, then $(q_*Z)_{\overline{t}}=Z(\emptyset)$ is the final object --- the one-point set.
If $Z\to W$ is an open immersion, then $q_*Z=T\setminus (W\setminus Z)$ as can be checked by passing to fibers. \end{proof}
\begin{lemma}\label{L:etale-pushfwd} Let $\map{g}{V}{X}$ be an \'{e}tale{} morphism. Then there is a natural \'{e}tale{} morphism $\map{g_*}{E_{V/Y}}{E_{X/Y}}$. If $g$ is representable (resp.\ surjective, resp.\ an open immersion) then so is $g_*$. \end{lemma} \begin{proof} This is similar to the proof of Theorem~\pref{T:main-theorem} \ref{TI:etale-pushfwd} in the representable case. Let $\xi\in E_{V/Y}$ be an object corresponding to morphisms $\map{p}{W}{V}$, $\injmap{q}{W}{T}$. We let $g_*(\xi)\in E_{X/Y}$ be the object corresponding to $g\circ p$ and $q$. On morphisms $g_*$ is defined in the obvious way.
Let $T\to E_{X/Y}$ be a morphism corresponding to morphisms $\map{p}{W}{X}$ and $\injmap{q}{W}{T}$. If $T'$ is a $T$-scheme, then the $T'$-points of the pull-back $E_{V/Y}\times_{E_{X/Y}} T\to T$ form the groupoid of liftings of $\map{p'}{W\times_T T'}{X}$ over $\map{g}{V}{X}$, or equivalently, the groupoid of sections of $V\times_X W\times_T T'\to W\times_T T'$. This description is compatible with pull-backs so that $E_{V/Y}\times_{E_{X/Y}} T$ is the stack $q_*(V\times_X W)$ which is algebraic and \'{e}tale{} over $T$ by the previous lemma. Moreover, if $V\to X$ is representable (resp.\ surjective, resp.\ an open immersion) then so are $q_*(V\times_X W)\to T$ and $E_{V/Y}\to E_{X/Y}$. \end{proof}
\begin{lemma}\label{L:etale-envelope-is-algebraic} The stack $E_{X/Y}$ is algebraic. \end{lemma} \begin{proof} Let $Y'\to Y$ be a smooth presentation. Then $E_{X\times_Y Y'/Y'}\to E_{X/Y}$ is representable, smooth and surjective. Replacing $X$ and $Y$ with $X\times_Y Y'$ and $Y'$ respectively, we can thus assume that $Y$ is a scheme.
Since $X\to Y$ is unramified, we have that $X$ is a Deligne--Mumford stack. Let $V\to X$ be an \'{e}tale{} presentation. By Lemma~\pref{L:etale-pushfwd}, there is an \'{e}tale{} representable surjection $E_{V/Y}\to E_{X/Y}$ and by Proposition~\pref{P:etale-envelope-is-representable}, $E_{V/Y}$ is an algebraic space. This shows that $E_{X/Y}$ is algebraic. \end{proof}
\begin{proof}[Proof of Theorem~\pref{T:main-theorem} (general case)] We have already proved that $E_{X/Y}$ is algebraic in Lemma~\pref{L:etale-envelope-is-algebraic} and as in the representable case, we can define morphisms $\map{i}{X}{E_{X/Y}}$ and $\map{j}{Y}{E_{X/Y}}$. That $i$ is a closed immersion and $j$ is an open immersion such that $j(Y)$ is the complement of $i(X)$ follows exactly as in the proof of Lemma~\pref{L:etale-envelope-satisfies-conditions}.
The uniqueness (which is up to unique $2$-isomorphism) of $E_{X/Y}$, $i$ and $j$ satisfying $E_{X/Y}\setminus i(X)=j(Y)$ follows as in the proof of Lemma~\pref{L:etale-envelope-is-unique} (because any morphism $E\to E_{X/Y}$ commuting with $i$ and $j$ is representable).
\ref{TI:etale-pushfwd} is Lemma~\pref{L:etale-pushfwd} and \ref{TI:closed-pullback} follows exactly as in the representable case. \end{proof}
\end{section}
\begin{section}{Applications}\label{S:applications} There are two important consequences of Theorem~\pref{T:main-theorem}. The first is that the classical description of unramified morphisms as \'{e}tale{}-local embeddings remains valid when the target is not necessarily Deligne--Mumford. The second is that we obtain a \emph{canonical} factorization of an unramified morphism into a closed immersion and an \'{e}tale{} morphism. The following example illustrates the first consequence.
\begin{example} It can be shown that if $X\to Y$ is an \'{e}tale{}, finitely presented and representable morphism or a closed immersion of stacks and $\widetilde{X}\to X$ is a blow-up, then there exists a blow-up $\widetilde{Y}\to Y$ and an $X$-morphism $\widetilde{Y}\times_Y X\to \widetilde{X}$. The analogous result for a representable unramified morphism $X\to Y$ of constructible finite type (e.g., of finite presentation) then follows from the existence of the \'{e}tale{} envelope. \end{example}
In the remainder of the section we outline an application where the canonicity of the \'{e}tale{} envelope is crucial. It is shown in~\cite{rydh_submersion_and_descent} that quasi-compact universally subtrusive morphisms (e.g., universally submersive morphisms between noetherian spaces) are morphisms of effective descent for the fibered category of finitely presented \'{e}tale{} morphisms. Using Theorem~\pref{T:main-theorem} we obtain a similar effective descent statement for unramified morphisms.
\begin{notation} Let $\map{g}{S'}{S}$ be a morphism of algebraic spaces. Let $S''=S'\times_S S'$ be the fiber product and let $\map{\pi_1,\pi_2}{S''}{S'}$ be the two projections. \end{notation}
\begin{proposition}[Descent] Let $\map{g}{S'}{S}$ be universally submersive. Let $X\to S$ and $Y\to S$ be unramified morphisms of algebraic spaces. Then the sequence
$$\xymatrix{ \Hom_S(X_\red,Y_\red)\ar[r]^{g^*}
& \Hom_{S'}(X'_\red,Y'_\red)\ar@<.5ex>[r]^{\pi_1^*}\ar@<-.5ex>[r]_{\pi_2^*}
& \Hom_{S''}(X''_\red,Y''_\red)}$$
is exact. Here $X'$ and $Y'$ are the pull-backs of $X$ and $Y$ along $S'\to S$, and $X''$ and $Y''$ are the pull-backs of $X$ and $Y$ along $S''\to S$. \end{proposition} \begin{proof} A morphism $\map{f}{X_\red}{Y_\red}$ corresponds to an open subspace
$\Gamma\subseteq X_\red\times_S Y_\red$ such that the projection $\Gamma\to X_\red$ is an isomorphism. Equivalently, since $Y\to S$ is unramified, an open subset $\Gamma\subseteq |X\times_S Y|$ corresponds to a morphism $X_\red\to Y_\red$ if and only if $\Gamma_\red\to X_\red$ is universally injective, surjective and proper. As $g$ is surjective, it follows that $\Hom_S(X_\red,Y_\red)\to\Hom_{S'}(X'_\red,Y'_\red)$ is injective.
Now if $\Gamma'\subseteq |X'\times_{S'} Y'|$ is an open subset such that
$\pi_1^{-1}\Gamma'=\pi_2^{-1}\Gamma'$ as subsets of $|X''\times_{S''} Y''|$, then $\Gamma'$ is the pull-back of an open subset $\Gamma\subseteq |X\times_S Y|$ since $g$ is universally submersive. If in addition $\Gamma'$ corresponds to a morphism $X'_\red\to Y'_\red$, then $\Gamma'_\red\to X'_\red$ is universally injective, surjective and proper. As $g$ is universally submersive, it follows that $\Gamma_\red\to X_\red$ also is universally injective, surjective and proper. Thus $\Gamma$ corresponds to a morphism $X_\red\to Y_\red$ lifting $X'_\red\to Y'_\red$. \end{proof}
\begin{theorem}[Effective descent] Let $\map{g}{S'}{S}$ be a quasi-compact and quasi-separated universally subtrusive morphism of algebraic spaces. Let $X'\to S'$ be an unramified morphism of constructible finite type (e.g., of finite presentation) of algebraic spaces equipped with a ``reduced descent datum'' relative to $S'\to S$, i.e., an isomorphism $\map{\theta}{(\pi_1^*X')_\red}{(\pi_2^*X')_\red}$ satisfying the usual cocycle condition after passing to reductions.
Then there is a unique unramified morphism $X\to S$ of constructible finite type and a schematically dominant morphism $X'\to X$ such that $X'\to X\times_S S'$ is a nil-immersion. \end{theorem} \begin{proof} Let $X''_i=\pi_i^{*}X'$ for $i=1,2$ so that $X'':=(X''_1)_\red\iso (X''_2)_\red$. Consider the \'{e}tale{} envelopes $E_{X'/S'}$, $E_{X''/S''}$ and $E_{X''_i/S''}$. The nil-immersions $X''\inj X''_i$ induce natural isomorphisms $E_{X''_i/S''}\to E_{X''/S''}$. As the \'{e}tale{} envelope commutes with pull-back, there is a canonical isomorphism $E_{X''/S''}\iso \pi_1^{*}E_{X'/S'}\iso \pi_2^{*}E_{X'/S'}$ which equips $E_{X'/S'}$ with a descent datum.
The morphism $E_{X'/S'}\to S'$ is \'{e}tale{} and of finite presentation. Thus, it descends to a morphism $E\to S$ which is \'{e}tale{} and of finite presentation~\cite[Thm.~5.17]{rydh_submersion_and_descent}. The induced morphism $\map{h}{E_{X'/S'}}{E}$ is a pull-back of $g$ and thus universally subtrusive. As $h$ is surjective and $\pi_1^{-1}(i'(X'))=\pi_2^{-1}(i'(X'))$ as sets, there is a unique subset $X\subseteq E$ such that $h^{-1}(X)=i'(X')$. Since $h$ is subtrusive and $i'(X')\subseteq E_{X'/S'}$ is closed and constructible, it follows that $X$ is closed and constructible. We consider the set $X$ as a closed subspace of $E$ by taking the ``schematic image'' of $X'\inj E_{X'/S'}\to E$. Then $X\to S$ satisfies the conditions of the theorem. \end{proof}
\begin{corollary} Let $\mathbf{Unr}_{\mathrm{cons}}(S)$ be the category of unramified morphisms $X\to S$ of constructible finite type with $X$ reduced and let $\mathbf{Unr}_{\mathrm{cons}}(S'\to S)$ be the category of unramified morphisms $X'\to S'$, of constructible finite type, equipped with a reduced descent datum and with $X'$ reduced. There is a natural functor $\mathbf{Unr}_{\mathrm{cons}}(S)\to \mathbf{Unr}_{\mathrm{cons}}(S'\to S)$ taking $X\to S$ to $(X\times_S S')_\red\to S'$ and the induced descent datum. This functor is an equivalence of categories. \end{corollary}
\end{section}
\appendix
\begin{section}{Algebraic spaces and stacks}\label{A:stacks} A sheaf of sets $F$ on the category of schemes $\Sch$ with the \'{e}tale{} topology is an \emph{algebraic space} if there exists a scheme $X$ and a morphism $X\to F$ which is represented by surjective \'{e}tale{} morphisms of schemes~\cite[D\'ef.~5.7.1]{raynaud-gruson}, i.e., for any scheme $T$ and morphism $T\to F$, the fiber product $X\times_F T$ is a scheme and $X\times_F T\to T$ is surjective and \'{e}tale{}.
A \emph{stack} is a category fibered in groupoids over $\Sch$ with the \'{e}tale{} topology satisfying the usual sheaf condition~\cite{laumon}.
A morphism $\map{f}{X}{Y}$ of stacks is \emph{representable} if for any scheme $T$ and morphism $T\to Y$, the $2$-fiber product $X\times_Y T$ is an algebraic space. A stack $X$ is \emph{algebraic} if there exists a smooth presentation, i.e., a smooth, surjective and representable morphism $U\to X$ where $U$ is a scheme. A stack $X$ is \emph{Deligne--Mumford} if there exists an \'{e}tale{} presentation. A stack $X$ is Deligne--Mumford if and only if $X$ is algebraic and the diagonal $\Delta_X$ is unramified. A morphism $\map{f}{X}{Y}$ of stacks is \emph{quasi-separated} if the diagonal $\Delta_{X/Y}$ is quasi-compact and quasi-separated, i.e., if both $\Delta_{X/Y}$ and its diagonal are quasi-compact.
\begin{remark}[Quasi-separatedness] We do not require that algebraic spaces and stacks are quasi-separated nor that the diagonal of an algebraic stack is separated. The queasy reader may assume that the diagonals of all stacks and algebraic spaces are separated and quasi-compact (as in~\cite{knutson_alg_spaces,laumon}) but this is not necessary in this paper. The reader should however note that unless we work with noetherian stacks or finitely presented unramified morphisms, stacks and algebraic spaces with non-quasi-compact diagonals will appear.
The diagonal of a (not necessarily quasi-separated) algebraic space is representable by schemes. This follows by effective fppf-descent of monomorphisms which are locally of finite type. Indeed, more generally the class of locally quasi-finite and separated morphisms is an effective class in the fppf-topology (cf.\ \cite[App.]{murre}, \cite[Exp.~X, Lem.~5.4]{sga3} or~\cite[pf.\ of 5.7.2]{raynaud-gruson}).
The diagonal of an algebraic stack $X$ is representable. This follows by~\cite[pf.\ of Prop.~4.3.1]{laumon} as~\cite[Cor.~1.6.3]{laumon} generalizes to arbitrary algebraic spaces.
The characterization of Deligne--Mumford stacks as algebraic stacks with unramified diagonal is valid for arbitrary algebraic stacks. Indeed, the proof of~\cite[Thm.~8.1]{laumon} does not use that the diagonal is separated and quasi-compact. \end{remark} \end{section}
\begin{section}{Unramified and \'{e}tale{} morphisms of stacks}\label{A:unramified-and-etale} We use the modern terminology of unramified morphisms~\cite{raynaud_hensel_rings}: an unramified morphism of schemes is a formally unramified morphism which is \emph{locally of finite type} (and not necessarily locally of finite presentation). Equivalently, an unramified morphism is a morphism locally of finite type such that the diagonal is an open immersion~\cite[17.4.1.2]{egaIV}. Recall that an \'{e}tale{} morphism of schemes is a formally \'{e}tale{} morphism which is locally of finite presentation or equivalently, a flat and unramified morphism which is locally of finite presentation~\cite[17.6.2]{egaIV}. These definitions generalize to include \emph{non-representable} morphisms as follows:
\begin{definition} A morphism $\map{f}{X}{Y}$ of algebraic stacks is \emph{unramified} if $f$ is locally of finite type and the diagonal $\Delta_f$ is \'{e}tale{}. A morphism $\map{f}{X}{Y}$ of algebraic stacks is \emph{\'{e}tale} if $f$ is locally of finite presentation, flat and unramified. \end{definition}
For representable $f$ this definition of unramified agrees with the usual since an \'{e}tale{} monomorphism is an open immersion~\cite[Thm.~17.9.1]{egaIV}.
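As an illustration of the non-representable case (this example is not needed in the sequel): if $G$ is a nontrivial finite \'{e}tale{} group scheme over a field $k$, then the classifying stack $BG$ is \'{e}tale{} over $k$ in the above sense. Indeed, $BG\to\Spec(k)$ is flat and locally of finite presentation, and its diagonal
$$\map{\Delta}{BG}{BG\times_k BG}$$
is \'{e}tale{} since its fiber over a pair of $G$-torsors $(P,P')$ is the finite \'{e}tale{} sheaf $\mathrm{Isom}(P,P')$. As $\Delta$ is not a monomorphism, $BG\to\Spec(k)$ is not representable.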
The notions of unramified and \'{e}tale{} are fpqc-local on the target and \'{e}tale-local on the source~\cite[2.2.11~(iv), 2.7.1, 17.7.3, 17.7.7]{egaIV}.
\begin{proposition}\label{P:etale=smooth+unramified} Let $\map{f}{X}{Y}$ be a morphism of algebraic stacks. The following are equivalent:
\begin{enumerate} \item $f$ is \'{e}tale{}. \item $f$ is smooth and unramified. \end{enumerate}
\end{proposition} \begin{proof} As a smooth morphism is flat and locally of finite presentation, (ii) implies (i). To see that (i) implies (ii), take a smooth presentation $U\to X$. If $f$ is \'{e}tale{} then $U\times_X U\to U\times_Y U$ is \'{e}tale{}. Thus, the projections $U\times_Y U\to U$ are smooth at the points in the image of $U\times_X U$. Since $U\times_X U\to U$ is surjective and $U\to Y$ is flat, it follows that $U\to Y$ is smooth by flat descent and, a fortiori, that $X\to Y$ is smooth.
\end{proof}
\begin{proposition}\label{P:unramified-char} Let $\map{f}{X}{Y}$ be a morphism of algebraic stacks. The following are equivalent:
\begin{enumerate} \item $f$ is unramified. \item $f$ is locally of finite type and for every point $\Spec(k)\to Y$ we have that $X\times_Y \Spec(k)\to \Spec(k)$ is unramified. \item $f$ is locally of finite type and for every point $\Spec(k)\to Y$ we have that $X\times_Y \Spec(k)$ is geometrically reduced, Deligne--Mumford and discrete.
\end{enumerate} \end{proposition} \begin{proof}
Clearly (i)$\implies$(ii). If $f$ is representable, then it is well-known that (ii)$\implies$(iii)$\implies$(i)~\cite[17.4.1.2]{egaIV}.
For general $f$, to see that (ii)$\implies$(iii) we can assume that $Y=\Spec(k)$ so that $X$ is Deligne--Mumford. As both (ii) and (iii) are \'{e}tale{}-local on $X$ we can also assume that $X$ is a scheme so that $f$ is representable and (ii)$\implies$(iii) by the representable case.
If (iii) holds, then the fibers of the diagonal are unramified and hence $\Delta_f$ is unramified, i.e., $f$ is Deligne--Mumford. Let $Y'\to Y$ be a smooth presentation and let $X'\to X\times_Y Y'$ be an \'{e}tale{} presentation. Then the representable morphism $X'\to Y'$ also satisfies condition (iii) and hence is unramified. This shows that (iii)$\implies$(i). \end{proof}
In the remainder of this section we will show that the definitions of unramified and \'{e}tale{} given above have a more standard formal description.
\begin{definition} Let $S$ be a stack and let $X$ and $Y$ be stacks over $S$. We let $\mathbf{Hom}_S(X,Y)$ be the groupoid with objects $2$-commutative diagrams
$$\xymatrix{X\ar[r]^f\rrlowertwocell{<2>\tau} & Y\ar[r] & S}$$
and morphisms $\map{\varphi}{(f_1,\tau_1)}{(f_2,\tau_2)}$, $2$-commutative diagrams
$$\vcenter{\xymatrix{X\rtwocell<3>^{f_1}_{f_2}{\varphi}\rrlowertwocell<-12>{<3>\tau_2}
& Y\ar[r] & S}} \quad \text{such that $\tau_2\circ\varphi=\tau_1$.}$$
\end{definition}
We note that if $Y\to S$ is representable, then the groupoid $\mathbf{Hom}_S(X,Y)$ is equivalent to a set.
\begin{definition} Let $\map{f}{X}{Y}$ be a morphism of stacks. We say that $f$ is \emph{formally unramified} (resp.\ \emph{formally Deligne--Mumford}, resp.\ \emph{formally smooth}, resp.\ \emph{formally \'{e}tale{}}) if for every
$Y$-scheme $T$ and every closed subscheme $T_0\inj T$ defined by a nilpotent ideal sheaf the functor
$$\mathbf{Hom}_Y(T,X)\to \mathbf{Hom}_Y(T_0,X)$$
is fully faithful (resp.\ faithful, resp.\ essentially surjective, resp.\ an equivalence of categories). \end{definition}
\begin{remark} The functor $\mathbf{Hom}_Y(T,X)\to \mathbf{Hom}_Y(T_0,X)$ is essentially surjective if and only if for every $2$-commutative diagram
$$\xymatrix{T_0\ar[r]\ar@{(->}[d]\drtwocell<\omit>{\tau} & X\ar[d]\\ T\ar[r] & Y}$$
there exists a morphism $T\to X$ and a $2$-commutative diagram
$$\vcenter{\xymatrix{T_0\ar[r]\ar@{(->}[d]\rtwocell<\omit>{<1.5>\varphi} & X\ar[d]\\ T\ar[r]\ar[ur]\rtwocell<\omit>{<-2>\psi} & Y}} \quad \text{such that $\tau=\psi\circ\varphi$.}$$
If $\map{f}{X}{Y}$ is locally of finite presentation, then it can be shown that it is enough to consider strictly henselian $T$ and closed subschemes $T_0\inj T$ defined by a square-zero ideal, cf.~\cite[Prop.~4.15 (ii)]{laumon}.
\end{remark}
Formally unramified (resp.\ \dots) morphisms are stable under base change, products and composition, cf.~\cite[Prop.~17.1.3]{egaIV}.
\begin{proposition}\label{P:form-unram/etale-and-diagonal} Let $\map{f}{X}{Y}$ be a morphism of stacks. Then $f$ is formally unramified (resp.\ formally Deligne--Mumford) if and only if the diagonal $\Delta_f$ is formally \'{e}tale{} (resp.\ formally unramified). \end{proposition} \begin{proof} Let $T$ be a $Y$-scheme and let $\injmap{j}{T_0}{T}$ be a closed subscheme defined by a nilpotent ideal. Let $(f_1,\tau_1)$ and $(f_2,\tau_2)$ be two objects of $\mathbf{Hom}_{Y}(T,X)$. This determines a morphism $\map{F=(f_1,\tau_2^{-1}\circ\tau_1,f_2)}{T}{X\times_Y X}$. Conversely, a morphism $\map{F}{T}{X\times_Y X}$ gives rise to a (non-unique) pair $(f_1,\tau_1),(f_2,\tau_2)$ of objects in $\mathbf{Hom}_{Y}(T,X)$ such that $F=(f_1,\tau_2^{-1}\circ\tau_1,f_2)$.
Fix a pair of objects $(f_1,\tau_1),(f_2,\tau_2)$ and a morphism $\map{F}{T}{X\times_Y X}$ as above. As the diagonal of $f$ is representable, the groupoid $\mathbf{Hom}_{X\times_Y X}(T,X)$ is equivalent to the set $\Hom_{X\times_Y X}(T,X):=\pi_0 \mathbf{Hom}_{X\times_Y X}(T,X)$.
There is a natural bijection between the set of $2$-morphisms $\Hom(f_1,f_2)$ and the set $\Hom_{X\times_Y X}(T,X)$. Thus $\Hom(f_1,f_2)\to \Hom(f_1\circ j,f_2\circ j)$ is bijective (resp.\ injective) if and only if $\Hom_{X\times_Y X}(T,X)\to\Hom_{X\times_Y X}(T_0,X)$ is bijective (resp.\ injective). \end{proof}
\begin{corollary} Let $\map{f}{X}{Y}$ and $\map{g}{Y}{Z}$ be two morphisms.
\begin{enumerate} \item If $g\circ f$ is formally Deligne--Mumford then so is $f$. \item If $g\circ f$ is formally unramified and $g$ is formally Deligne--Mumford, then $f$ is formally unramified. \end{enumerate}
\end{corollary}
\begin{corollary} Let $\map{f}{X}{Y}$ be a morphism of stacks.
\begin{enumerate} \item $f$ is smooth if and only if $f$ is locally of finite presentation and formally smooth.\label{CI:smooth} \item $f$ is \'{e}tale{} if and only if $f$ is locally of finite presentation and formally \'{e}tale{}.\label{CI:etale} \item $f$ is unramified if and only if $f$ is locally of finite type and formally unramified.\label{CI:unramified} \item $f$ is Deligne--Mumford (i.e., $\Delta_f$ is unramified) if and only if $f$ is formally Deligne--Mumford.\label{CI:D-M} \end{enumerate}
\end{corollary} \begin{proof} If $f$ is representable, then~\ref{CI:smooth}, \ref{CI:etale} and~\ref{CI:unramified} are definitions and~\ref{CI:D-M} is trivial. For general $f$, statement~\ref{CI:unramified} [resp.\ \ref{CI:D-M}] follows from Proposition~\pref{P:form-unram/etale-and-diagonal} using statement~\ref{CI:etale} [resp.\ \ref{CI:unramified}] for the representable diagonal $\Delta_f$. Statement~\ref{CI:smooth} is~\cite[Prop.~4.15 (ii)]{laumon}. Finally~\ref{CI:etale} follows from Proposition~\pref{P:etale=smooth+unramified} and~\ref{CI:smooth} and~\ref{CI:unramified}.
\end{proof}
\end{section}
\begin{section}{Auxiliary limit results}\label{A:limits} In this appendix we give some fairly standard limit results. For simplicity we state these results for algebraic spaces although they remain valid for algebraic stacks.
\begin{proposition}\label{P:semi-std-limits} Let $S_0$ be an algebraic space and let $S=\varprojlim_{\lambda} S_\lambda$ be an inverse limit of algebraic spaces that are affine over $S_0$.
Let $X$ be a \emph{quasi-compact and quasi-separated} algebraic space and let $X\to S$ be a morphism locally of finite presentation. Then there exists an index $\lambda$, a quasi-compact and quasi-separated algebraic space $X_\lambda$, a morphism $X_\lambda\to S_\lambda$ locally of finite presentation and an $S$-isomorphism $X_\lambda\times_{S_\lambda} S\to X$. If $X\to S$ is \'{e}tale{} then it can be arranged so that $X_\lambda\to S_\lambda$ also is \'{e}tale{}.
\end{proposition} \begin{proof} Since $X$ is quasi-compact, we can assume that $S_0$ is quasi-compact after replacing $S_0$ by an open subspace. Let $V_0\to S_0$ be an \'{e}tale{} presentation with $V_0$ an affine scheme. Let $V_\lambda=V_0\times_{S_0} S_\lambda$ and $V=V_0\times_{S_0} S$. Finally choose an affine scheme $U$ and an \'{e}tale{} morphism $U\to V\times_S X$ such that $U\to X$ is surjective. Note that $U\to X$ and $U\to V$ are of finite presentation. Let $R=U\times_X U$ and note that $\map{j}{R}{U\times_S U}$ is a monomorphism of finite presentation as $X$ is quasi-separated.
Since $U\to V$ and $j$ are of finite presentation, there is for sufficiently large $\lambda$ a finitely presented scheme $U_\lambda\to V_\lambda$, a finitely presented monomorphism $\map{j_\lambda}{R_\lambda}{U_\lambda\times_{S_\lambda} U_\lambda}$ and cartesian diagrams
$$\vcenter{\xymatrix{U\ar[r]\ar[d] & U_\lambda\ar[d]\\
V\ar[r] & V_\lambda\ar@{}[ul]|\square}}\quad\text{and}\quad \vcenter{\xymatrix{R\ar[r]\ar[d]^{j} & R_\lambda\ar[d]^{j_\lambda}\\
U\times_S U\ar[r] & U_\lambda\times_{S_\lambda} U_\lambda\ar@{}[ul]|\square}}$$
such that $\map{s_\lambda,t_\lambda}{R_\lambda}{U_\lambda}$ are \'{e}tale{} with $s_\lambda=\pi_1\circ j_\lambda$ and $t_\lambda=\pi_2\circ j_\lambda$, and $R_\lambda$ is quasi-compact.
The morphism $j_\lambda=(s_\lambda,t_\lambda)$ defines an equivalence relation if and only if
\begin{enumerate} \item[(R)] the pull-back of $j_\lambda$ along
$\map{\Delta_{U_\lambda}}{U_\lambda}{U_\lambda\times_{S_\lambda}
U_\lambda}$ is an isomorphism, \item[(S)] the pull-back of $j_\lambda$ along
$\map{(t_\lambda,s_\lambda)}{R_\lambda}{U_\lambda\times_{S_\lambda}
U_\lambda}$ is an isomorphism, and \item[(T)] the pull-back of $j_\lambda$ along
$\map{(s_\lambda\circ \pi_1,t_\lambda\circ \pi_2)} {R_\lambda\times_{t_\lambda,U_\lambda,s_\lambda} R_\lambda} {U_\lambda\times_{S_\lambda} U_\lambda}$ is an isomorphism. \end{enumerate}
The pull-back of the above maps along $U\to U_\lambda$, $R\to R_\lambda$ and $R\times_U R\to R_\lambda\times_{U_\lambda} R_\lambda$ respectively are isomorphisms since $j$ is an equivalence relation. Noting that $j_\lambda$ is of finite presentation and $U_\lambda$, $R_\lambda$ and $R_\lambda\times_{U_\lambda} R_\lambda$ are quasi-compact, we conclude that $j_\lambda$ is an equivalence relation for sufficiently large $\lambda$ by~\cite[Thm.~8.10.5 (i)]{egaIV}.
The quotient $X_\lambda$ of this equivalence relation is a quasi-compact and quasi-separated algebraic space which is locally of finite presentation over $S_\lambda$.
The last assertion follows from~\cite[Prop.~17.7.8 (ii)]{egaIV}. \end{proof}
Note that Proposition~\pref{P:semi-std-limits} reduces to the standard limit result on finitely presented objects if $S_0$ is quasi-compact and quasi-separated.
\begin{proposition}\label{P:limit-of-ft+qs} Let $S_0$ be an affine scheme and let $S=\varprojlim_\lambda S_\lambda$ be an inverse limit of affine $S_0$-schemes. Let $X_0$ be an algebraic space and let $\map{f_0}{X_0}{S_0}$ be of \emph{finite type and quasi-separated}. Let $\map{f_\lambda}{X_\lambda}{S_\lambda}$ and $\map{f}{X}{S}$ denote the base changes of $f_0$. Then $f$ is a monomorphism (resp.\ closed immersion) if and only if $f_\lambda$ is a monomorphism (resp.\ closed immersion) for sufficiently large $\lambda$. \end{proposition} \begin{proof} The condition is clearly sufficient. To see that the condition is necessary for the property ``monomorphism'', recall that a morphism $f$ is a monomorphism if and only if its diagonal $\Delta_f$ is an isomorphism. As the diagonal is strongly representable and finitely presented the necessity in this case follows from~\cite[Thm.~8.10.5 (i)]{egaIV}. If $f$ is a closed immersion then by the previous case $f_\lambda$ is a monomorphism for sufficiently large $\lambda$. In particular $f_\lambda$ is quasi-finite and separated so that $f_\lambda$ is strongly representable~\cite[Thm.~A.2]{laumon} and Zariski's main theorem~\cite[Cor.~18.12.13]{egaIV} gives rise to a factorization $X_\lambda\to Y_\lambda\to S_\lambda$ of $f_\lambda$ where the first morphism is a quasi-compact open immersion and the second morphism is finite. As $X\to Y_\lambda\times_{S_\lambda} S$ is an open and closed immersion so is $X_\lambda\to Y_\lambda$ for sufficiently large $\lambda$. In particular $X_\lambda\to S_\lambda$ is a proper monomorphism and hence a closed immersion. \end{proof}
More generally Proposition~\pref{P:limit-of-ft+qs} holds for properties such as: proper, finite, affine, quasi-affine, separated; but not for other properties such as being an isomorphism. \end{section}
\begin{section}{Morphisms of constructible finite type} \label{A:constructible} In this section we define morphisms (locally) of \emph{constructible finite type}. A morphism (locally) of finite presentation is (locally) of constructible finite type and a morphism (locally) of constructible finite type is (locally) of finite type. For morphisms of noetherian stacks, all these notions coincide.
Let $X$ be a scheme. Recall that a subset $W\subseteq X$ is ind-constructible (resp.\ pro-constructible) if locally $W$ is a union (resp.\ an intersection) of constructible subsets~\cite[D\'ef.~7.2.2]{egaI_NE}. If $\map{p}{U}{X}$ is locally of finite presentation and surjective, then $W$ is ind-constructible (resp.\ pro-constructible, resp.\ constructible) if and only if $p^{-1}(W)$ is so~\cite[Cor.~7.2.10]{egaI_NE}. Now let $X$ be an algebraic stack. We define a subset $W\subseteq X$ to be ind-constructible (resp.\ pro-constructible, resp.\ constructible) if $p^{-1}(W)$ is so for some presentation $\map{p}{U}{X}$ with $U$ a scheme. This definition does not depend on the choice of presentation.
\begin{definition} Let $\map{f}{X}{Y}$ be a morphism of algebraic stacks. The morphism $f$ is \emph{ind-constructible} if the image under $f$ of any ind-constructible subset is ind-constructible. If this holds after arbitrary base change $Y'\to Y$, then we say that $f$ is \emph{universally ind-constructible}. \end{definition}
The primary example of an ind-constructible morphism is a morphism which is locally of finite presentation~\cite[Prop.~7.2.3]{egaI_NE}.
\begin{definition} A morphism $\map{f}{X}{Y}$ of stacks is \emph{locally of construct\-ible finite type} if $f$ is locally of finite type and universally ind-constructible. A morphism $f$ is of \emph{constructible finite type} if $f$ is quasi-compact, quasi-separated and locally of constructible finite type. \end{definition}
Morphisms (locally) of finite presentation are (locally) of constructible finite type.
The image of a pro-constructible set under a quasi-compact morphism is pro-constructible~\cite[Prop.~7.2.3]{egaI_NE}. It follows that a morphism of constructible finite type takes constructible subsets to constructible subsets~\cite[Prop.~7.2.9]{egaI_NE}.
\begin{proposition}\label{P:loc-constructible-type} Let $\map{f}{X}{Y}$ and $\map{g}{Y}{Z}$ be morphisms of algebraic stacks.
\begin{enumerate} \item If $f$ and $g$ are locally of constructible finite type, then so is $g\circ f$. \label{PI:ic:1} \item If $g\circ f$ is locally of constructible finite type and if $g$ is locally of finite type, then $f$ is locally of constructible finite type. \label{PI:ic:2} \end{enumerate}
\end{proposition} \begin{proof} \ref{PI:ic:1} is obvious. \ref{PI:ic:2} As the diagonal of $g$ is locally of finite presentation, we have that $f$ is the composition of a morphism locally of constructible finite type and a morphism locally of finite presentation, hence locally of constructible finite type. \end{proof}
\begin{proposition}\label{P:closed-imm-constructible} Let $\injmap{f}{Z}{X}$ be a closed immersion of algebraic stacks. The following are equivalent:
\begin{enumerate} \item $f$ is of constructible finite type.
\item The subset $|Z|\subseteq |X|$ is constructible. \item The open immersion $X\setminus Z\to X$ is quasi-compact. \end{enumerate}
\end{proposition} \begin{proof} Immediate from the fact that an open immersion is pro-constructible if and only if it is quasi-compact~\cite[Prop.~7.2.3]{egaI_NE}. \end{proof}
Not every quasi-separated morphism of finite type is of constructible finite type. For example, there are closed immersions which are not constructible: the inclusion of the origin in the infinite-dimensional affine space $\operatorname{Spec} k[x_1,x_2,\dots]$ is a closed immersion whose open complement is not quasi-compact. A morphism locally of finite presentation, e.g., an \'{e}tale{} morphism, is of constructible finite type if and only if it is of finite presentation.
Let $\map{f}{X}{Y}$ be an unramified morphism with a factorization $X\inj X_1\to Y$ where $X\inj X_1$ is a nil-immersion and $X_1\to Y$ is unramified and of finite presentation. Then $f$ is of constructible finite type. Conversely, if $Y$ is quasi-compact and quasi-separated it is likely that every unramified morphism $f$ of constructible finite type has such a factorization.
\end{section}
\end{document} | arXiv |
Louise Hay Award
The Louise Hay Award is a mathematics award, established in 1990 and first presented in 1991 by the Association for Women in Mathematics, that recognizes contributions to mathematics education. The award was created in honor of Louise Hay.[1]
Recipients
The following women have been honored with the Hay Award:[2]
Year Recipient
2023 Nicole M. Joseph
2022 Vilma Mesa
2021 Lynda Wiest
2020 Erika Camacho[3]
2019 Jacqueline Dewar[4]
2018 Kristin Umland[5]
2017 Cathy Kessel[6]
2016 Judy L. Walker[7]
2015 T. Christine Stevens[8]
2014 Sybilla Beckmann[9]
2013 Amy Cohen[10]
2012 Bonnie Gold[11]
2011 Patricia Campbell[12]
2010 Phyllis Chinn[13]
2009 Deborah Loewenberg Ball[14]
2008 Harriet Pollatsek[15]
2007 Virginia Warfield[16][17]
2006 Patricia Clark Kenschaft[18]
2005 Susanna Epp[19]
2004 Bozenna Pasik-Duncan[20]
2003 Katherine Puckett Layton
2002 Annie Selden
2001 Patricia D. Shure
2000 Joan Ferrini-Mundy
1999 Martha K. Smith
1998 Deborah Hughes Hallett
1997 Marilyn Burns
1996 Glenda Lappan and Judith Roitman
1995 Etta Falconer
1994 Kaye A. de Ruiz
1993 Naomi Fisher
1992 Olga Beaver
1991 Shirley Frye
See also
• List of mathematics awards
References
1. "Louise Hay Awards". awm-math.org. Association for Women in Mathematics. Retrieved 2019-03-19.
2. "Prizes, Awards, and Honors for Women Mathematicians". www.agnesscott.edu. Retrieved 2016-08-07.
3. "Hay Award 2020". Association for Women in Mathematics. Retrieved 2020-05-12.
4. "Hay Award 2019". Association for Women in Mathematics. Retrieved 2019-01-29.
5. Umland, Kristin. "Twenty-Eight Annual Louise Hay Award". Association for Women in Mathematics. Retrieved 26 Jan 2019.
6. Lauter, Kristin (September–October 2016). "President's Report". Association for Women in Mathematics Newsletter. 46 (5): 1. Retrieved 7 September 2016.
7. Lauter, Kristin (November–December 2015). "President's Report". Association for Women in Mathematics Newsletter. 45 (6): 5. Retrieved 7 September 2016.
8. "AWM Awards Given in San Antonio" (PDF). Notices of the AMS. 62 (5). May 2015. Retrieved 2016-08-07.
9. "Sybilla Beckmann - AWM Association for Women in Mathematics". sites.google.com. Retrieved 2019-01-26.
10. "Professor Amy Cohen Honored with Hay Award | Rutgers Women in Science". sciencewomen.rutgers.edu. Retrieved 2016-08-07.
11. Kehoe, Elaine (May 2012). "AWM Awards Given in Boston" (PDF). Notices of the AMS. 59 (5). Retrieved 7 August 2016.
12. "UM: COE: Research: Faculty Spotlight". www.education.umd.edu. Archived from the original on 2016-09-20. Retrieved 2016-08-07.
13. "Professor Wins National Award For Excellence In Math Education - Humboldt State Now". now.humboldt.edu. Retrieved 2016-08-07.
14. University of Michigan School of Education. "Deborah Ball receives award for contributions in mathematics education from the Association for Women in Mathematics | University of Michigan School of Education". www.soe.umich.edu. Archived from the original on 2017-07-02. Retrieved 2016-08-07.
15. "18th Louise Hay Award: Harriet S. Pollatsek". awm-math.org. Retrieved 2019-01-26.
16. "Awards and Recognition | UW ADVANCE". advance.washington.edu. Retrieved 2016-08-07.
17. "Science and Technology Newsletter: February 2007". www.brynmawr.edu. Archived from the original on 2013-10-20. Retrieved 2016-08-07.
18. "AWM Awards Presented in San Antonio" (PDF). Notices of the AMS. May 2006. Retrieved 2016-08-07.
19. "AWM Prizes Presented in Atlanta" (PDF). Notices of the AMS. 52 (5). May 2005. Retrieved 2016-08-07.
20. "Bozenna Pasik-Duncan - Biography". awm-math.org. Retrieved 2019-01-26.
| Wikipedia |
2.5-Dimensional Parylene C micropore array with a large area and a high porosity for high-throughput particle and cell separation
Yaoping Liu1, Han Xu1, Wangzhi Dai1, Haichao Li2 & Wei Wang1,3,4
Microsystems & Nanoengineering volume 4, Article number: 13 (2018)
Large-area micropore arrays with a high porosity are in high demand because of their promising potential in liquid biopsy with a large volume of clinical sample. However, a micropore array with a large area and a high porosity faces a serious mechanical strength challenge. The filtration membrane may undergo large deformation at a high filtration throughput, which will decrease its size separation accuracy. In this work, a keyhole-free Parylene molding process has been developed to prepare a large (>20 mm × 20 mm) filtration membrane containing a 2.5-dimensional (2.5D) micropore array with an ultra-high porosity (up to 91.37% with designed pore diameter/space of 100 μm/4 μm). The notation 2.5D indicates that the large area and the relatively small thickness (approximately 10 μm) of the fabricated membranes represent 2D properties, while the large thickness-to-width ratio (10 μm/ < 4 μm) of the spaces between the adjacent pores corresponds to a local 3D feature. The large area and high porosity of the micropore array achieved filtration with a throughput up to 180 mL/min (PBS solution) simply driven by gravity. Meanwhile, the high mechanical strength, benefiting from the 2.5D structure of the micropore array, ensured a negligible pore size variation during the high-throughput filtration, thereby enabling high size resolution separation, which was proven by single-layer and multi-layer filtrations for particle separation. Furthermore, as a preliminary demonstration, the prepared 2.5-dimensional Parylene C micropore array was implemented as an efficient filter for rare cancer cell separation from a large volume, approximately 10 cells in 10 mL PBS and undiluted urine, with high recovery rates of 87 ± 13% and 56 ± 13%, respectively.
The successful separation of rare cancer cells from a clinical sample is extremely important for liquid biopsies in precision medicine1,2,3,4,5,6,7,8,9,10,11. Among the developed techniques, filtration has been considered the most promising approach due to its capability of achieving a high throughput, usually approximately 1 mL/min6,7,8,9,10,11. Various filtration techniques have been developed in the past decade, but for the liquid biopsy of large-volume clinical samples, such as bronchoalveolar lavage fluid (~80 mL), urine (~100 mL), and pleural effusion (~500 mL), the present filtration throughput still needs further improvement.
The current state of the art of filtration-based liquid biopsy is summarized in Table 1. Polymer filtration membranes fabricated via a track etching method can be traced back to the 1960s12,13 and have been widely utilized in biological studies and clinical practice for cell enrichment6,14. A large area is achievable for these track-etched membranes. However, the porosity of these membranes is very low (less than 1%), and therefore they cannot be used for high-throughput liquid biopsy. The other strategy to prepare filtration membranes is the lithography-assisted microfabrication technique15,16,17,18,19,20,21,22,23,24,25. Wit et al.15 utilized deep reactive iron etching (DRIE) and potassium hydroxide etching approaches to produce Si micropore arrays with an area of 0.64 cm2 and a porosity of 10%. Adams et al.16,17,18 fabricated an SU-8 filtration membrane with micropore arrays distributed within a 9 mm circular area with a porosity of <12.5% and achieved a throughput of >1 mL/min. Yoon-Tae Kang et al.19 developed a tapered-slit SU-8 membrane with a porosity up to 11% to increase the high throughput of viable circulating tumor cell (CTC) isolation19. Hosokawa et al.20,21 fabricated microcavity-arrayed poly(ethylene terephthalate) (PET) and Ni membranes via laser drilling20 and photolithography-based electroforming21, respectively. Although the area of these microcavity array could be easily extended to >1 cm2, the porosity of their filtration membrane was less than 2.25%, so the throughput from the aforementioned microcavity arrays was still less than 0.2 mL/min.
Table 1 The previously reported filtering devices for CTC (peripheral blood)-based liquid biopsy
Tang et al.22 fabricated a mechanically strong polyethylene (glycol) diacrylate (PEGDA) microfilter containing conical holes via ultraviolet-assisted molding, but its porosity remained very low (<5.88%), and the filtration throughput was consequently limited to only 0.2–2.0 mL/min. Harouaka et al.23 fabricated Parylene C microspring structures via simple photolithography, achieving a throughput of 0.75 mL/min. Xu et al.24 developed a rectangular-pore array Parylene C membrane and achieved a porosity of 18%, along with a throughput >0.2 mL/min. Zheng et al.25 fabricated a 1 cm2 Parylene C micropore-array membrane via photolithography-based patterning and direct etching of Parylene C. The edge-to-edge spaces between adjacent pores were large (>10 μm), and the porosity was small (<5.6% with the large supporting structures considered), resulting in a filtration throughput of approximately 0.1 mL/min25.
In short, owing to the fabrication difficulties and mechanical limitations, the previously reported micropore-array filtration structures15,16,17,18,19,20,21,22,24,25,26,27,28 usually had a relatively large supporting space (edge-to-edge distance) between the adjacent micropores and thereby also had a low porosity (the maximum reported was 18%24), which led to a limited, although high enough for CTC-based liquid biopsy, filtration throughput (<5 mL/min). Moreover, the low porosity caused a large filtration resistance and thus a large pressure during filtration, which seriously damaged the cells trapped in the micropores. Zheng et al.26 and Zhou et al.27 further developed a 3-dimensional (3D) double-layered Parylene C membrane filter to decrease the stress experienced by the trapped cells. In this filter, the space between the adjacent pores on the bottom membrane supported the cells trapped on the top membrane and thus relieved the stress on the cells trapped in the top membrane. Yusa et al.28 also developed a 3D palladium filter containing micropocket arrays with two different pore sizes in the upper and lower layers by using sequential lithography and electroforming techniques. Although the produced device exhibited a large area of 1 cm2, it still suffered from a relatively low throughput (2.0–2.5 mL/min) subject to the low porosity of <5.5%. Moreover, the non-transparency of palladium degraded its applicability in cellular research studies.
Parylene C is a popular polymer material in the field of biomedical microelectromechanical systems due to its high biocompatibility, optical transparency, and compatibility with the microfabrication processes. Due to the limited anisotropic etching capability of Parylene C29, it is difficult to obtain a micropore array with a small spaces (high porosity) and steep sidewall profiles (precision and uniformity of the pore size) by direct patterning and dry etching. A Parylene C molding procedure was developed by Suzuki et al.30 and Kuo et al.31 to fabricate suspended microsprings and microbeams with high aspect ratios and relatively large feature sizes (>10 μm). By using the Parylene C molding technique, our previous work32 reported a large, close-packed hexagonal micropore array with an ultra-high porosity and good size controllability. Cell separation by the prepared Parylene C micropore array with high throughput (~2 mL/min) and recovery rate (up to 96.5%) was preliminarily demonstrated33. However, in the previous work, the mechanical properties of the filtration membrane and the separation precision were not studied, which are key parameters for robust and reliable liquid biopsy. In this work, a modified molding process was developed to prepare a keyhole-free 2.5-dimensional (2.5D) Parylene C micropore array. The notation 2.5D indicates that the large area and the relatively small thickness (approximately 10 μm) of the fabricated membranes represent 2D properties, while the large thickness-to-width ratio (10 μm/ < 4 μm) of the spaces (edge-to-edge distances) between the adjacent pores corresponds to a local 3D feature. A chromatic confocal imaging (CCI) platform34,35 was applied to characterize the mechanical properties of the prepared micropore array filtration membranes. The separation precision of the filtration membrane was scrutinized through single-layer and multi-layer filtrations of rigid particles. Samples of 10 mL phosphate-buffered saline (PBS) and undiluted healthy urine containing ~10 spiked bladder epithelial cancer cells were filtered to demonstrate the high recovery rate of the prepared 2.5D Parylene C micropore array.
Design of the micropore-array filtration membrane
To maximize the porosity of the filtration membrane, a close-packed, hexagonal micropore array was designed. Different pore diameters/spaces were investigated to optimize the filtration performance, cases 1-4, as listed in Table 2. In addition to the edge-to-edge space, defined as s, there are two parameters that describe the feature size of the hexagonal pore, d and d', which correspond to the diagonal and edge-to-edge lengths, respectively. In this work, the diagonal lengths were designed to be 8, 10, 12, 15, and 100 μm, and two edge-to-edge spaces (2 and 4 μm) were tested. For each filtration membrane, the overall size was 20 mm × 20 mm with an effective filtration area of >13 mm × 13 mm. The prepared micropore array featured a 2.5D property: the large area (20 mm × 20 mm) and small thickness (~10 μm) of the filtration membrane represents a 2D geometry, while the high thickness-to-width ratio (10 μm/ < 4 μm) of the spaces between the adjacent micropores corresponds to a local 3D feature.
Table 2 The parameters of 2.5D Parylene C micropore-arrayed membrane for filtration investigation and optimization
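For a close-packed honeycomb of hexagonal pores, the porosity follows directly from the unit-cell geometry: a pore with diagonal length d has an edge-to-edge width d' = (√3/2)·d, and each pore sits in a hexagonal unit cell of edge-to-edge width d' + s, so the porosity is (d'/(d' + s))². The short Python sketch below evaluates this relation; it is a geometric reconstruction based on the design description above rather than code from this study, and it reproduces the porosities quoted in the text.

```python
import math

def hex_porosity(d_diag_um, space_um):
    """Porosity of a close-packed hexagonal micropore array.

    d_diag_um : designed corner-to-corner pore diameter d (um)
    space_um  : designed edge-to-edge space s between adjacent pores (um)

    The edge-to-edge pore width is d' = (sqrt(3)/2)*d, and each pore
    occupies a hexagonal unit cell of edge-to-edge width d' + s, so
    porosity = (d' / (d' + s))**2.
    """
    d_prime = math.sqrt(3) / 2 * d_diag_um
    return (d_prime / (d_prime + space_um)) ** 2

print(f"{hex_porosity(15, 4):.2%}")   # 58.46%, the porosity quoted for case 4
print(f"{hex_porosity(100, 4):.2%}")  # 91.37%, the highest-porosity design (100 um/4 um)
```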
Fabrication process
The fabrication procedure is schematically described in Fig. 1. First, a template containing the high-density, close-packed hexagonal micropillar array with a height of 10 μm on a 4-inch Si wafer was prepared via conventional photolithography patterning and DRIE (Fig. 1a). Second, a Parylene C layer with a specified thickness was deposited onto the Si template using a commercial Parylene deposition instrument (SCS, PDS2010; Fig. 1b). After that, reactive ion etching (RIE) of the deposited Parylene C layer was performed until the top of the silicon pillars was exposed (Fig. 1c1). An additional annealing treatment at a temperature of 320 °C in a nitrogen atmosphere for 2 h, performed either after or before the RIE step, was also compared (see Fig. 1c2, c3, respectively). Finally, the prepared micropore-array Parylene C membrane was released from the Si substrate by soaking it in an HNA (HF:HNO3:HAc = 5:7:11, v/v) bath (Fig. 1d).
Fig. 1: Schematic illustrations of the conventional and modified molding processes.
Conventional molding process utilized for the fabrication of 2.5D micropore-array Parylene C filtration membranes without annealing (a, b, c1, d1) and a modified procedure that includes annealing treatment at 320 °C in nitrogen atmosphere for 2 h after (a, b, c2, d2) and before (a, b, c3, d3) the RIE stage
The mechanical properties of the fabricated 2.5D micropore-array Parylene C membranes were examined using the CCI-based platform described in detail elsewhere34,35. Briefly, a concentrated load was applied to the membrane surface, and the vertical displacement corresponding to the applied stress was measured and recorded. Subsequently, the theory of a circular plate with a large deflection by Timoshenko et al.36 was used to analyze the experimental data, and an equivalent Young's modulus was calculated according to Eq. (1)35 to characterize the mechanical strength of the membrane.
Supposing the membrane was clamped at the edges and the concentrated load q_c was applied at the center of the membrane, the relationship between the deflection d_v at the center and the load can be expressed by Eq. (1):
$$\frac{{353 - 191\mu }}{{648(1 - \mu )}}Ehd_v^3 + \frac{{4Eh^3d_v}}{{3(1 - \mu )}} = \frac{{q_ca^2}}{\pi },$$
where μ represents the Poisson ratio of Parylene C, which is 0.5; h is the thickness of the membrane; a is the radius of the membrane area; and E is the equivalent Young's modulus of the 2.5D micropore-array Parylene C membrane. By fitting the experimental data into Eq. (1), the equivalent Young's modulus of each membrane was extracted. Furthermore, the obtained equivalent Young's modulus was used to predict the deformation of the 2.5D micropore-array membranes during the actual filtration process. A COMSOL Multiphysics model was established to calculate the hydraulic pressure and shear stress applied on the membrane. A laminar flow model with a constant flow rate boundary condition was built and solved. The average pressure and shear stress on the membranes were calculated at a certain filtration throughput (100–200 mL/min), which exactly covered the flow rates in our particle separation experiments (150.78 ± 5.41 mL/min for single-layer filtration and 118.44 ± 4.81–179.95 ± 10.12 mL/min for multi-layer filtration with a PBS solution). After that, the vertical displacement of the entire membrane was estimated from the calculated pressure and equivalent Young's modulus using Eq. (2)35.
$$\frac{{23 - 9\mu }}{{252(1 - \mu )}}Ehd_v^3 + \frac{{2Eh^3d_v}}{{9(1 - \mu )}} = \frac{{q_ua^2}}{{24}},$$
where q_u represents the uniformly distributed load on the membrane, and d_v is the vertical displacement at the center of the membrane. Finally, the lateral size variation of a single micropore, δR, was estimated based on the calculated displacement, d_v, of the membrane using Eq. (3):
$$\delta R = \sqrt{R^2 + d_v^2},$$
where R represents the original radius of the membrane, d_v is the vertical displacement at its center, and δ represents the stretch ratio of the radius and, equivalently, of the micropore size (the relative elongation is δ − 1).
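The data-reduction pipeline defined by Eqs. (1)–(3) can be summarized in a short numerical sketch. The code below is an illustration rather than the study's actual implementation: Eq. (1) is linear in E once pairs of concentrated load and centre deflection are measured, so E follows from a one-parameter least-squares fit, and Eq. (2) is a cubic in d_v that is solved for its positive real root. The membrane radius and the example load are placeholder values, and q_u is interpreted here as the total load on the membrane so that Eq. (2) is dimensionally consistent as written.

```python
import numpy as np

mu = 0.5      # Poisson ratio of Parylene C used in this work
h = 10e-6     # membrane thickness (m)
a = 6.5e-3    # radius of the effective filtration area (m); placeholder value

def fit_modulus(q_c, d_v):
    """Least-squares fit of the equivalent Young's modulus E from Eq. (1),
    given arrays of concentrated loads q_c (N) and centre deflections d_v (m)."""
    q_c, d_v = np.asarray(q_c, float), np.asarray(d_v, float)
    x = ((353 - 191 * mu) / (648 * (1 - mu)) * h * d_v**3
         + 4 * h**3 * d_v / (3 * (1 - mu)))
    y = q_c * a**2 / np.pi
    return float(np.sum(x * y) / np.sum(x * x))  # E minimizing |E*x - y|^2

def deflection(E, q_u):
    """Centre deflection d_v (m) under the load q_u from Eq. (2)."""
    c3 = (23 - 9 * mu) / (252 * (1 - mu)) * E * h
    c1 = 2 * E * h**3 / (9 * (1 - mu))
    roots = np.roots([c3, 0.0, c1, -q_u * a**2 / 24])
    real = roots[np.abs(roots.imag) < 1e-12].real   # keep the real root
    return float(real[real > 0][0])

def elongation(d_v):
    """Relative elongation (delta - 1) of the membrane radius from Eq. (3)."""
    return np.sqrt(a**2 + d_v**2) / a - 1.0

# Example: a 3 GPa membrane under ~300 Pa of filtration pressure (placeholders)
q_total = 300.0 * np.pi * a**2          # pressure times membrane area
d = deflection(3.0e9, q_total)
print(d, elongation(d))                 # sub-millimetre deflection, elongation well below 1%
```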
Particle separations via single-layer/multi-layer filtrations
To test the filtration performance of the present micropore-array filtration membranes, particles with varied diameters were chosen as the separation targets and went through single-layer or multi-layer filtrations.
In the single-layer filtration test, a Parylene C membrane (case 3 in Table 2) was packaged with a homemade poly(methyl methacrylate) (PMMA) holder, as shown in Fig. 2a. The utilized sample loading procedure is schematically shown in Fig. 2a. The sample for separation was prepared via the addition of two types of monodispersed polystyrene (PS) particles (4000 Series Monosized Particles, Thermo Scientific, USA) with nominal diameters of 9 μm and 12 μm (8.1 × 10⁴/mL and 2.9 × 10⁴/mL, respectively) into 3.2 mL deionized (DI) water and mixed well on a vortex mixer. After taking 200 μL out for a diameter measurement, the other 3 mL solution was loaded into the packaged single-layer filtration system (Fig. 2a). The throughput here was obtained based on the volume of the solution (3 mL DI water with 3.3 × 10⁵ particles), and the filtration time length was extracted from the recorded video of the whole filtration process. Then, the solution after filtration was collected and concentrated to 200 μL via centrifugation. The solutions containing particles before and after filtration were loaded (200 μL) into a high-throughput imaging flow cytometry system (FCM, Amnis®, ImageStream X, Merck, Germany) to record particle images. Subsequently, the pictures were manually checked to select the well-focused ones and measure the particle diameters. Meanwhile, the Parylene C membrane used in the filtration was disassembled from the PMMA holder and coated with a 3 nm thick Au layer to conduct scanning electron microscopy (SEM, JSM-7500F, JEOL, Japan) observations, which enabled a diameter check of the particles trapped on the membrane.
Fig. 2: 2.5D micropore-array Parylene C membrane and the filtration system.
a A single membrane packaged for single-layer filtration. b An assembly with a homemade PMMA holder containing four membranes with different pore sizes/diameters for multi-layer filtration applications. c Easy handling of the fabricated filtration membrane with tweezers. d Photographs of the Parylene C membranes prepared with and without the annealing treatment
In the multi-layer filtration, 4 pieces of 2.5D micropore-array Parylene C filtration membranes with different pore diameters/spaces were sequentially assembled using the homemade PMMA holder (Fig. 2b) in order to perform multiple-sized particle separation. The diameters/spaces of the filtration membranes arranged from the top to the bottom of the assembly were cases 4 to 1 in Table 2. A total of 2 mL DI water containing 4.5 × 10⁶ multi-dispersed PS particles with diameters ranging from 6 to 22 μm (BaseLine Chromtech Research Centre, Tianjin, China) was filtered through the assembled multi-layer filtration system. After filtration, the Parylene C membranes were disassembled and coated with a 3 nm thick Au layer to conduct SEM (JSM-7500F, JEOL, Japan) studies and obtain the sizes of the particles trapped on each membrane.
Rare cell separation with single-layer filtration
Approximately 10 bladder epithelial cancer cells (T24) were spiked into 10 mL PBS and 10 mL undiluted midstream urine from a healthy volunteer. The addition of 10 T24 cells was realized via precise operation with a micropipette connected to a microinjection pump (Fusion 200 series, Chemyx Inc., USA) under a stereomicroscope. The T24 cells (National Infrastructure of Cell Line Resource (Beijing)) were cultured with McCoy's 5a medium containing 10% fetal bovine serum (FBS, Gibco, Thermo Fisher, USA), 2 mM glutamine (Gibco, Thermo Fisher, USA), 1 mM sodium pyruvate (Gibco, Thermo Fisher, USA), 25 mM HEPES (Gibco, Thermo Fisher, USA), and 100 U/mL penicillin–streptomycin (Gibco, Thermo Fisher, USA). Before micropipette aspiration and transfer into the 10 mL PBS and urine, the adhered T24 cells (passages 10–20) were rinsed with 0.25% trypsin with 0.03% EDTA solution (Gibco, Thermo Fisher, USA), and then an additional 1 to 2 mL trypsin-EDTA solution was added into a T25 flask. The flask was allowed to sit at 37 °C until the T24 cells detached. The detached T24 cells were collected into a tube and centrifuged at 1000 rpm for 5 min. Then, the supernatant was disposed of, and the T24 cells were suspended into PBS and incubated with 5 μM Cell Tracker Red/Green (Invitrogen, Thermo Fisher, USA) and 1 μg/mL Hoechst 33342 (Invitrogen, Thermo Fisher, USA) at room temperature for 30 min for pre-labeling. After incubation, the cells were washed with PBS 3 times and re-suspended into the aforementioned McCoy's 5a medium for the micropipette operation. The 10 mL PBS and undiluted urine with spiked T24 cells were filtered following the same procedure as the particle separation through the micropore-array membrane with diameters/spaces of case 2 in Table 2. After filtration, the membranes were disassembled from the PMMA holder, adhered to a slide and sealed with the ProLong antifade solution (ProLong® Diamond Antifade Mountant, Invitrogen, Thermo Fisher, USA) and a cover slip. The recovered cells on the membranes were observed and counted under a fluorescence microscope (CKX53, Olympus, Japan). For accuracy, every slide was checked 3 times. The cell separation experiments were repeated 5 times.
2.5D micropore-array Parylene C membranes
Figure 3a1, a2 shows the SEM images of the cross-sectional and top views of the Parylene C microstructures in the trenches between the adjacent silicon pillars, respectively, which were obtained after RIE and before the release stage of the molding process. Keyhole formation can be clearly observed on the top surface of the released membrane displayed in Fig. 3a3. Keyhole formation was unavoidable in the conventional molding process and has been well studied in the chemical vapor deposition field30,31,37. Although some attempts to reduce the size of the produced keyholes using etching/deposition cycling processes have been made, they were time consuming and complicated30,31,37. Herein, we proposed a simple annealing procedure, which was conducted at a temperature of 320 °C in a nitrogen atmosphere for 2 h, for keyhole removal. Figure 3a4–a6, a7–a9 contains the SEM images of the membrane subjected to annealing at a temperature of 320 °C in a nitrogen atmosphere for 2 h obtained after and before RIE, respectively. They show that all keyholes have been effectively removed by the annealing treatment because of the reflow of Parylene C.
Fig. 3: SEM images of the 2.5D micropore array Parylene C membranes.
SEM images of the a1 cross-sectional view and a2 surface of Parylene C microstructures on the silicon template after RIE and a3 released Parylene C membrane prepared via conventional molding. SEM images of the a4 cross-sectional view and the a5 surface of Parylene C microstructures on the silicon template and a6 released Parylene C membrane obtained after the RIE stage of the modified molding process. SEM images of the a7 cross-sectional view and a8 surface of Parylene C microstructures on the silicon template and a9 released Parylene C membrane obtained before the RIE stage of the modified molding process. b Membranes with diameters/spaces of cases 5 (b1) and 6 (b2) in Table 2. b3 Image showing the steep sidewall profiles and excellent flexibility properties of the fabricated 2.5D micropore-array Parylene C membrane with high porosity. SEM images of the b4 surface and b5 inclined cross-sectional view of the Parylene C membrane with an aspect ratio of approximately 7.5. b6 Image showing the steep sidewall profiles and excellent flexibility properties of the fabricated 2.5D micropore-array Parylene C membrane with a high aspect ratio. b7 A combination of the hexagonal and square micropores. SEM images of the Parylene C membranes with a rectangular pattern (b8) and a triangle pattern (b9)
The SEM image of the cross-sectional view of the Parylene C microstructure depicted in Fig. 3a4 exhibits a concave surface after keyhole removal during the annealing treatment of the modified molding process performed after RIE. In contrast, the cross-section of the Parylene C microstructure obtained with the annealing treatment conducted before RIE (Fig. 3a7) shows a flat surface. Figure 3a5, a8, a6, a9 displays the Parylene C microstructures on the silicon template after RIE and release of the 2.5D micropore-array Parylene C membranes. The surface topographies depicted in Fig. 3a8, a9 were smoother than those displayed in Fig. 3a5, a6. The surface profiles described in Fig. 3a7–a9 were better than those depicted in Fig. 3a4–a6, indicating that the annealing treatment conducted before RIE was more efficient than that performed after RIE during the preparation of the Parylene C membrane. The simple but efficient annealing treatment removed keyholes successfully and produced more stable and reproducible morphologies, which ensured higher reliability for practical applications.
The produced membranes were also annealed at 290 °C in a nitrogen atmosphere for 2 h, which corresponded to the highest reported working temperature for Parylene C in this work. However, the keyholes were nearly unchanged because 290 °C annealing did not cause the reflow of Parylene C. After annealing at 320 °C in a nitrogen atmosphere for 2 h, Parylene C lost approximately 5% of its original weight (according to the linear extrapolation from the data reported by Cao et al.38), while the positions of the X-ray diffraction peaks remained almost the same, indicating no significant changes in the chemical structure during the treatment. In addition, Metzen et al.39 investigated the chemical structure of annealed Parylene C via Fourier transform infrared spectroscopy and showed the absence of any changes observed after annealing for 3 h at 300 °C in a nitrogen atmosphere (two reduced bands were detected only until after 3 h of treatment at 350 °C). Thus, the annealing temperature of 320 °C is an appropriate option for keyhole removal while ensuring the absence of obvious weight losses or chemical structure changes. Therefore, all annealing treatments were performed at 320 °C in a nitrogen atmosphere for 2 h in this study, unless specified otherwise.
As shown in Fig. 2c, the fabricated 2.5D micropore-array Parylene C membrane can be easily handled with tweezers. Figure 2d displays two large (20 mm × 20 mm) filtration membranes prepared with and without annealing, which indicate unnoticeable differences in their physical appearance (including transparency).
Figure 3b1 shows the existence of two parameters describing the diameters of hexagonal micropores, d and d', which correspond to the diagonal and edge-to-edge lengths, respectively. The pore diameter mentioned in this work represents the diagonal length d (unless specified otherwise). In particular, Fig. 3b1 shows the SEM image of a typical fabricated 2.5D micropore-array membrane with pore diameters/spaces of case 5 in Table 2 and a thickness-to-width ratio of the spaces between adjacent pores of 4.64. Figure 3b2 displays the membrane with pore diameters/spaces of case 6 in Table 2 and a porosity of up to 91.37%, which was the highest value obtained for the membranes fabricated in this study. Additionally, the so-prepared membrane exhibited steep sidewall profiles and excellent flexibility, as shown in Fig. 3b3. Figure 3b4 shows the Parylene C membrane fabricated with micropore diameters of 1.39 ± 0.07 μm and a depth of 10.45 ± 0.21 μm, as depicted in Fig. 3b5, which reveals an aspect ratio up to 7.5. The image depicted in Fig. 3b6 reveals that the membrane with a high ratio aspect presents steep sidewall profiles and excellent flexibility similar to the membrane with high porosities. Square micropores with a diameter of 3.42 ± 0.09 μm and hexagonal micropores with a diameter of 11.21 ± 0.11 μm, featuring the same space of 16.50 ± 0.11 μm, can be prepared on the same membrane, as shown in Fig. 3b7. In addition to the close-packed hexagonal array, other micropore patterns were also produced to show the capability of the present Parylene C molding process, and their corresponding SEM images are shown in Fig. 3b8, b9.
Throughput test
Owing to the high porosity of the 2.5D micropore-array Parylene C membrane, a high throughput of up to 180 mL/min (obtained at a porosity of 58.46% with a PBS solution) filtration for an aqueous (PBS was used in this study) solution simply driven by gravity was achieved, as shown in Fig. 4a. The filtration throughput (>3 experiments for each) obtained by the present 2.5D Parylene C micropore array is the highest reported data to the best of the authors' knowledge, which mainly benefited from the high porosity and the large filtration area.
Fig. 4: Calculation and simulation for the mechanical characterization of the 2.5D micropore array Parylene C membranes.
a The measured filtration throughput of PBS driven by gravity with the prepared 2.5D micropore-array Parylene C membranes of different porosities, cases 1–4 in Table 2. b Equivalent Young's modulus values obtained for the flat non-porous Parylene C film and 2.5D micropore-array Parylene C membranes with pore diameters/spaces of case 2 in Table 2, which were prepared with and without annealing treatment at a temperature of 320 °C in a nitrogen atmosphere for 2 h. c Numerical simulation of hydraulic pressure and shear stress. d Calculated pressures applied to the 2.5D micropore-array membranes (left y-axis, red) with different porosities and the corresponding micropore size variations (right y-axis, blue) observed during the actual filtration process
The mechanical behavior of circular plates with large deflections was theoretically described by Timoshenko et al.36 The obtained results were used in this study to estimate the equivalent Young's modulus values from the experimentally measured vertical displacements under concentrated loads, as described in Eq. (1). The Young's modulus of the flat Parylene C film (which served as the system control and was used for calibration purposes) was measured as 3.01 ± 0.36 GPa (calculated from the data of 5 tests), which is consistent with the results of previous studies (2.76 GPa, data from the datasheet of SCS40) and confirms the reliability of the present measurement setup and data extraction procedure. The equivalent Young's modulus values of the 2.5D micropore-array Parylene C membranes with pore diameters/spaces of case 2 in Table 2 prepared with and without annealing treatment are compared in Fig. 4b (>3 samples for each).
The Young's modulus of the flat Parylene C film decreased after annealing38,39, while the equivalent Young's modulus of the fabricated 2.5D micropore-array Parylene C membranes remained almost unchanged after the annealing treatment. The retained high mechanical strength of the micropore-array membranes most likely resulted from their 2.5D structures.
With the established COMSOL Multiphysics model, the hydraulic pressure and shear stresses applied on the membranes during filtration were numerically simulated (shown in Fig. 4c). Thereby, the equivalent pressures applied to the 2.5D micropore-array Parylene C membranes with different porosities and the corresponding micropore size variations during the actual filtration process at a throughput higher than 110 mL/min (Fig. 4a) are shown in Fig. 4d. The negligible micropore size variations (<40 nm) indicated that the present 2.5D micropore-array membranes exhibited a high mechanical strength during the high-throughput filtration, which ensured their good separation performance, i.e., a high size resolution. Moreover, the elongation of the 2.5D micropore-array Parylene C membranes observed at the filtration throughput of 180 mL/min (with PBS solution) was below 0.28%, which was much smaller than the previously reported elongation yield of Parylene C (2.9% for the as-deposited Parylene C, from the datasheet of SCS40). Therefore, the 2.5D micropore-array membranes underwent an elastic displacement during the filtration operation.
Microbead separation via single-layer and multi-layer filtrations
To test the size-based separation performance of the present 2.5D micropore-array membranes, single-layer and multi-layer filtrations of microbeads with various diameters were carried out. In the single-layer filtration, monodispersed microbeads with different diameters were mixed as the loading sample. As shown in Fig. 5a, the size distribution of the microbeads before filtration showed two peaks centered at 9 μm and 12 μm. After filtration, only the peak centered at 9 μm remained, which revealed that only the microbeads with a diameter smaller than the edge-to-edge length d' (9.71 ± 0.20 μm) could go through the filtration membrane, while the microbeads of diameter larger than d' should have been trapped on the membrane, which was verified by the SEM image shown in Fig. 5a. In the multi-layer filtration, monodispersed microbeads with a diameter ranging from 6 μm to 22 μm were contained in the loading sample. The SEM images in Fig. 5b display the microbeads trapped on the Parylene C membranes with the edge-to-edge lengths of micropores, d', of cases 4-1 in Table 2. As indicated in Fig. 5b, the size distribution of the microbeads trapped on a membrane was exactly between the d' of this membrane and that of the above membrane, which proved that the present filtration membrane had a good size resolution. The PS microbeads were rigid and could not deform; the good size resolution is hence attributed to the negligible size variation of the micropore during the filtration, even at a high throughput. This further confirmed the strong mechanical strength of the prepared 2.5D micropore-array Parylene C membrane. In addition, the non-specific adhesion of the microbeads to the supporting structure (space) between adjacent micropores was absent, which corresponded to a very high separation purity of filtration (close to 100%). The high mechanical strength of the present 2.5D micropore-array Parylene C membrane guaranteed the excellent filtration performance of rigid microbead separation. Nevertheless, in real liquid biopsy, the separation of live cells would require further optimization of the pore diameters, as cells could deform to go through micropores even smaller than their nominal diameter, which requires further investigation.
Fig. 5: Particle separation results with the single-layer and multi-layer filtrations.
a Particle separation of the single-layer filtration. Size distribution of the particles before and after filtration (the filtration membrane of case 3 in Table 2), and a typical SEM image of the particles trapped on the filtration membrane. b Particle separation of the multi-layer filtration. Typical SEM images of the particles trapped on the filtration membranes (cases 1 to 4), and size distribution of the particles trapped on the filtration membranes with different micropore diameters during the multi-layer filtration
The size of the T24 cells used in the present work was measured as 13.5 ± 0.84 μm (ranging from 8 μm to 20 μm) under the microscope in PBS suspension. Therefore, the Parylene C membrane with a pore edge-to-edge length of 7.90 ± 0.08 μm (case 2 in Table 2) was used for cell separation. The throughputs of the cell separation from PBS and undiluted urine by this micropore array were measured. For the PBS solution, the throughput was 133 mL/min, the same as that shown in Fig. 4a, while for the undiluted urine, the throughput dropped to 100 mL/min because of the large number of background cells but was still high enough to run a 10 mL sample in 6 s. As shown in Fig. 6a, 87 ± 13% and 56 ± 13% of spiked T24 cells (approximately 10 cells in each test) were successfully captured from the PBS and undiluted urine, respectively. Figure 6b–d shows the captured T24 cells on the micropore array labeled with Hoechst 33342 (nucleus) and Cell Tracker Red (cytoplasm) and the merged image.
Fig. 6: Rare cell separation results with the single-layer filtration.
a Recovery rates for the T24 cells in 10 mL PBS and 10 mL undiluted urine by using a micropore array with a pore diameter of 9.13 ± 0.09 μm and a space of 4.69 ± 0.24 μm (case 2 in Table 2). b–d Captured T24 cells on the micropore array labeled with Hoechst 33342 (nucleus) and Cell Tracker Red (cytoplasm), and the merged image
In this study, a modified Parylene molding process was developed for the preparation of large-area (>20 mm × 20 mm) 2.5D micropore-array membranes with ultra-high porosities (up to 91.37% with the designed pore diameter/space of 100 μm/4 μm). The keyholes that formed in the Parylene structures, unavoidable during the conventional molding process, were successfully removed via a simple and effective annealing treatment at 320 °C in a nitrogen atmosphere for 2 h. Owing to the high porosity of the prepared 2.5D micropore-array membrane, a high throughput up to 180 mL/min (obtained at the porosity of 58.46% with PBS solution) for an aqueous solution was successfully achieved simply driven by gravity. The high mechanical strength of the fabricated 2.5D micropore-array membrane ensured a negligible micropore size variation during the high-throughput filtration and thus resulted in a high size resolution in single-layer and multi-layer particle separations. The precise filtration capability ensured high recovery rate cell separations from a large volume of PBS or undiluted urine. The preliminary experimental results showed that by using a 7.90 ± 0.08 μm-sized micropore array, 87 ± 13% and 56 ± 13% of spiked T24 cells (approximately 10 cells in each test) were successfully captured from 10 mL PBS and undiluted urine, respectively. The throughputs were 133 mL/min for PBS and 100 mL/min for undiluted urine. This result indicated that the present 2.5D micropore-array membrane has promising potential in fulfilling high-throughput, high recovery rate liquid biopsy of large-volume clinical samples.
References

Arya, S. K., Lim, B. & Rahman, A. R. Enrichment, detection and clinical significance of circulating tumor cells. Lab. Chip. 13, 1995–2027 (2013).
Shields, C. W. IV, Reyes, C. D. & López, G. P. Microfluidic cell sorting: a review of the advances in the separation of cells from debulking to rare cell isolation. Lab. Chip. 15, 1230–1249 (2011).
Low, W. S. & Abas, W. A. B. W. Benchtop technologies for circulating tumor cells separation based on biophysical properties. Biomed. Res. Int. 2015, 1 (2015).
Pesta, M. et al. May CTC technologies promote better cancer management?. EPMA J. 6, 1 (2015).
Alix-Panabières, C. & Pante, K. Technologies for detection of circulating tumor cells: facts and vision. Lab. Chip. 14, 57–62 (2014).
Dolfus, C. et al. Circulating tumor cell isolation: the assets of filtration methods with polycarbonate track-etched filters. Chin. J. Cancer Res. 27, 479–487 (2015).
Jin, C. et al. Technologies for label-free separation of circulating tumor cells: from historical foundations to recent developments. Lab. Chip. 14, 32–44 (2014).
Yu, L. et al. Advances of lab-on-a-chip in isolation, detection and post-processing of circulating tumour cells. Lab. Chip. 13, 3163–3182 (2013).
Li, P. et al. Probing circulating tumor cells in microfluidics. Lab. Chip. 13, 602–609 (2013).
Harouaka, R. A., Nisic, M. & Zheng, S. Y. Circulating tumor cell enrichment based on physical properties. J. Lab. Autom. 18, 455–468 (2013).
Chen, J., Li, J. & Sun, Y. Microfluidic approaches for cancer cell detection, characterization, and separation. Lab. Chip. 12, 1753–1767 (2012).
Fleischer, R. L. et al. Fission-track ages and track-annealing behavior of some micas. Science 143, 349–351 (1964).
Fleischer, R. L. et al. Particle track etching. Science 178, 255–263 (1972).
Desitter, I. et al. A new device for rapid isolation by size and characterization of rare circulating tumor cells. Anticancer. Res. 31, 427–442 (2011).
Wit, S. D. et al. The detection of EpCAM+ and EpCAM– circulating tumor cells. Sci. Rep. 5, 12270 (2015).
Adams, D. L. et al. The systematic study of circulating tumor cell isolation using lithographic microfilters. RSC Adv. 4, 4334–4342 (2013).
Adams, D. L. et al. Cytometric characterization of circulating tumor cells captured by microfiltration and their correlation to the CellSearch® CTC test. Cytom. Part A 87, 137–144 (2015).
Adams, D. L. et al. Precision microfilters as an all in one system for multiplex analysis of circulating tumor cells. RSC Adv. 6, 6405–6414 (2016).
Kang, Y. T. et al. Tapered-slit membrane filters for high-throughput viable circulating tumor cell isolation. Biomed. Micro. 17, 45–52 (2015).
Hosokawa, M. et al. High-density microcavity array for cell detection: single-cell analysis of hematopoietic stem cells in peripheral blood mononuclear cells. Anal. Chem. 81, 5308–5313 (2009).
Hosokawa, M. et al. Size-selective microcavity array for rapid and efficient detection of circulating tumor cells. Anal. Chem. 82, 6629–6635 (2010).
Tang, Y. et al. Microfluidic device with integrated microfilter of conical-shaped holes for high efficiency and high purity capture of circulating tumor cells. Sci. Rep. 4, 6052 (2014).
Harouaka, R. A. et al. Flexible micro spring array device for high-throughput enrichment of viable circulating tumor cells. Clin. Chem. 60, 323–333 (2014).
Xu, T. et al. A cancer detection platform which measures telomerase activity from live circulating tumor cells captured on a microfilter. Cancer Res. 70, 6420–6426 (2010).
Zheng, S. et al. Membrane microfilter device for selective capture, electrolysis and genomic analysis of human circulating tumor cells. J. Chromatogr. A 1162, 154 (2007).
Zheng, S. et al. 3D microfilter device for viable circulating tumor cell (CTC) enrichment from blood. Biomed. Micro. 13, 203–213 (2011).
Zhou, M. D. et al. Separable bilayer microfiltration device for viable label-free enrichment of circulating tumour cells. Sci. Report 4, 7392 (2014).
Yusa, A. et al. Development of a new rapid isolation device for circulating tumor cells (CTCs) Using 3D palladium filter and its application for genetic analysis. PLoS One 9, 8821 (2014).
Meng, E., Li, P. Y. & Tai, Y. C. Plasma removal of Parylene C. J. Micromech. Microeng. 18, 512–520 (2008).
Suzuki, Y. & Tai, Y. C. Micromachined high-aspect-ratio parylene spring and its application to low-frequency accelerometers. J. Micro. Syst. 15, 1364–1370 (2006).
Wen-Cheng, K. & Chen, C. W. Fabrication suspended high-aspect-ratio parylene structures for large displacement requirements. Int. J. Autom. Smart Technol. 4, 105–112 (2014).
Liu Y. et al. Filtration membrane with ultra-high porosity and pore size controllability fabricated by parylene C molding technique for targeted cell separation from bronchoalveolar lavage fluid (BALF). Proc. 18th Int. Conf. on Solid-State Sensors, Actuators and Microsystems (Transducers 2015) 1767–1769 (Anchorage, 2015).
Liu Y. et al. Highly precise and efficient cell separation with parylene C micropore arrayed filtration membrane. Proc. 19th Int. Conf. on Miniaturized Systems for Chemistry and Life Sciences (microTAS 2015) 389 (Gyeongju, 2015).
Dai W. et al. Chromatic confocal imaging based mechanical test platform for micro porous membrane. Proc. 13th IEEE Int. Conf. on Solid-State and Integrated Circuit Technology (ICSICT 2016) 16-6 (Hangzhou, 2016).
Dai W. et al. Mechanical strength of 2.5D Parylene C micropore-arrayed filtration membrane. 19th International Conference on Solid-State Sensors, Actuators and Microsystems (TRANSDUCERS 2017) 1215 (Kaohsiung, 2017).
Timoshenko, S. & Woinowsky-Krieger, S. Theory of Plates and Shells (McGraw-Hill, New York, 1959).
Lei, Y. et al. A parylene-filled-trench technique for thermal isolation in silicon-based microdevices. J. Micromech. Microeng. 19, 035013 (2009).
Cao, Q. et al. Thermal decomposition of Parylene C film. J. Mater. Sci. Eng. 26, 620 (2008) (in Chinese).
von Metzen, R. P. & Stieglitz, T. The effects of annealing on mechanical, chemical, and physical properties and structural stability of Parylene C. Biomed. Micro. 15, 727–735 (2013).
Specialty Coating Systems, Inc. SCS PARYLENE PROPERTIES. Available at https://scscoatings.com/wp-content/uploads/2017/09/02-SCS-Parylene-Properties-1016.pdf.
This work was financially supported by the Major State Basic Research Development Program (973 Program, Grant No. 2015CB352100), the National Natural Science Foundation of China (Grant Nos. 81471750 and 91323304), the Beijing Natural Science Foundation (Grant No. 4172028), and the Seeding Grant for Medicine and Information Sciences (2016-MI-04) awarded by Peking University. We would also like to thank the technicians from the National Key Laboratory of Science and Technology on micro/nano fabrication for helping with the processes and Editage (www.editage.com) for editing the English language of this manuscript.
These authors contributed equally: Yaoping Liu, Han Xu.
Institute of Microelectronics, Peking University, Beijing, 100871, China
Yaoping Liu, Han Xu, Wangzhi Dai & Wei Wang
Department of Respirology, No. 1 Hospital of Peking University, Beijing, 100034, China
Haichao Li
National Key Laboratory of Science and Technology on Micro/Nano Fabrication, Beijing, 100871, China
Innovation Center for Micro-Nano-electronics and Integrated Systems, Beijing, 100871, China
Yaoping Liu
Han Xu
Wangzhi Dai
Correspondence to Yaoping Liu or Haichao Li or Wei Wang.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Liu, Y., Xu, H., Dai, W. et al. 2.5-Dimensional Parylene C micropore array with a large area and a high porosity for high-throughput particle and cell separation. Microsyst Nanoeng 4, 13 (2018). https://doi.org/10.1038/s41378-018-0011-8
Editorial Summary
Membranes: Steep walls speed up separations
Polymer films that detect cancer cells in blood or urine samples using size-based filtrations can benefit from a strategy for improving the strength and throughput rate of micropore arrays. Yaoping Liu from Peking University in China and colleagues first fabricated a silicon wafer of close-packed micropillars, and then coated this template with the polymer film Parylene C. The team used reactive ion etching to carve out a hexagonal array of microscale openings in the polymer with unique dimensions—smaller-than normal pore separations for high flow rates, and thickness more than twice the width of the sidewall to give exceptional mechanical stability. Experiments proved the new array structure could selectively trap particles ranging from microbeads to cancer cells with throughputs hundreds of times faster than conventional filtering devices.
Elliptic function
In the mathematical field of complex analysis, elliptic functions are a special kind of meromorphic functions, that satisfy two periodicity conditions. They are named elliptic functions because they come from elliptic integrals. Originally those integrals occurred at the calculation of the arc length of an ellipse.
Important elliptic functions are Jacobi elliptic functions and the Weierstrass $\wp $-function.
Further development of this theory led to hyperelliptic functions and modular forms.
Definition
A meromorphic function is called an elliptic function if there are two $\mathbb {R} $-linearly independent complex numbers $\omega _{1},\omega _{2}\in \mathbb {C} $ such that
$f(z+\omega _{1})=f(z)$ and $f(z+\omega _{2})=f(z),\quad \forall z\in \mathbb {C} $.
So elliptic functions have two periods and are therefore doubly periodic functions.
Period lattice and fundamental domain
If $f$ is an elliptic function with periods $\omega _{1},\omega _{2}$, it also holds that
$f(z+\gamma )=f(z)$
for every linear combination $\gamma =m\omega _{1}+n\omega _{2}$ with $m,n\in \mathbb {Z} $.
The abelian group
$\Lambda :=\langle \omega _{1},\omega _{2}\rangle _{\mathbb {Z} }:=\mathbb {Z} \omega _{1}+\mathbb {Z} \omega _{2}=\{m\omega _{1}+n\omega _{2}\mid m,n\in \mathbb {Z} \}$
is called the period lattice.
The parallelogram generated by $\omega _{1}$ and $\omega _{2}$
$\{\mu \omega _{1}+\nu \omega _{2}\mid 0\leq \mu ,\nu \leq 1\}$
is a fundamental domain of $\Lambda $ acting on $\mathbb {C} $.
Geometrically the complex plane is tiled with parallelograms. Everything that happens in one fundamental domain repeats in all the others. For that reason we can view elliptic functions as functions with the quotient group $\mathbb {C} /\Lambda $ as their domain. This quotient group, called an elliptic curve, can be visualised as a parallelogram where opposite sides are identified, which topologically is a torus.[1]
Liouville's theorems
The following three theorems are known as Liouville's theorems (1847).
1st theorem
A holomorphic elliptic function is constant.[2]
This is the original form of Liouville's theorem; it can also be derived from the modern version for entire functions.[3] A holomorphic elliptic function is bounded, since it takes on all of its values on the fundamental domain, which is compact. So it is constant by Liouville's theorem.
2nd theorem
Every elliptic function has finitely many poles in $\mathbb {C} /\Lambda $ and the sum of its residues is zero.[4]
This theorem implies that there is no elliptic function, not identically zero, with exactly one pole of order one or exactly one zero of order one in the fundamental domain: a single pole of order one would have a nonzero residue, contradicting the theorem, and applying the same argument to $1/f$ rules out a single zero of order one.
3rd theorem
A non-constant elliptic function takes on every value the same number of times in $\mathbb {C} /\Lambda $ counted with multiplicity.[5]
Weierstrass ℘-function
Main article: Weierstrass elliptic function
One of the most important elliptic functions is the Weierstrass $\wp $-function. For a given period lattice $\Lambda $ it is defined by
$\wp (z)={\frac {1}{z^{2}}}+\sum _{\lambda \in \Lambda \setminus \{0\}}\left({\frac {1}{(z-\lambda )^{2}}}-{\frac {1}{\lambda ^{2}}}\right).$
It is constructed in such a way that it has a pole of order two at every lattice point. The term $-{\frac {1}{\lambda ^{2}}}$ is there to make the series convergent.
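The defining series can also be explored numerically. The following Python sketch is a crude, symmetrically truncated lattice sum (the truncation error decays only like 1/N, coming from the boundary of the summation box), so the double periodicity is reproduced only approximately:

```python
def wp(z, w1, w2, N=120):
    """Weierstrass p-function, truncated to lattice points m*w1 + n*w2
    with |m|, |n| <= N. Summing over a symmetric box lets the slowly
    decaying parts of the regularized terms cancel in +/- pairs."""
    s = 1 / z**2
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m == 0 and n == 0:
                continue
            lam = m * w1 + n * w2
            s += 1 / (z - lam)**2 - 1 / lam**2
    return s

w1, w2 = 1.0, 1j          # a square period lattice
z = 0.31 + 0.27j
# The difference shrinks as N grows; exact periodicity holds only in the limit.
print(abs(wp(z, w1, w2) - wp(z + w1, w1, w2)))
```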
$\wp $ is an even elliptic function; that is, $\wp (-z)=\wp (z)$.[6]
Its derivative
$\wp '(z)=-2\sum _{\lambda \in \Lambda }{\frac {1}{(z-\lambda )^{3}}}$
is an odd function, i.e. $\wp '(-z)=-\wp '(z).$[6]
One of the main results of the theory of elliptic functions is the following: Every elliptic function with respect to a given period lattice $\Lambda $ can be expressed as a rational function in terms of $\wp $ and $\wp '$.[7]
The $\wp $-function satisfies the differential equation
$\wp '^{2}(z)=4\wp (z)^{3}-g_{2}\wp (z)-g_{3},$
where $g_{2}$ and $g_{3}$ are constants that depend on $\Lambda $. More precisely, $g_{2}(\omega _{1},\omega _{2})=60G_{4}(\omega _{1},\omega _{2})$ and $g_{3}(\omega _{1},\omega _{2})=140G_{6}(\omega _{1},\omega _{2})$, where $G_{4}$ and $G_{6}$ are so-called Eisenstein series.[8]
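Here, for an integer $k\geq 3$, the Eisenstein series of weight $k$ attached to the lattice $\Lambda $ is defined by

$G_{k}(\omega _{1},\omega _{2})=\sum _{\lambda \in \Lambda \setminus \{0\}}{\frac {1}{\lambda ^{k}}}.$

The series converges absolutely for $k\geq 3$, and it vanishes for odd $k$ because the terms for $\lambda $ and $-\lambda $ cancel.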
In algebraic language, the field of elliptic functions is isomorphic to the field
$\mathbb {C} (X)[Y]/(Y^{2}-4X^{3}+g_{2}X+g_{3})$,
where the isomorphism maps $\wp $ to $X$ and $\wp '$ to $Y$.
• Weierstrass $\wp $-function with period lattice $\Lambda =\mathbb {Z} +e^{2\pi i/6}\mathbb {Z} $
• Derivative of the $\wp $-function
Relation to elliptic integrals
The relation to elliptic integrals has mainly a historical background. Elliptic integrals had been studied by Legendre, whose work was taken on by Niels Henrik Abel and Carl Gustav Jacobi.
Abel discovered elliptic functions by taking the inverse function $\varphi $ of the elliptic integral function
$\alpha (x)=\int _{0}^{x}{\frac {dt}{\sqrt {(1-c^{2}t^{2})(1+e^{2}t^{2})}}}$
with $x=\varphi (\alpha )$.[9]
Additionally he defined the functions[10]
$f(\alpha )={\sqrt {1-c^{2}\varphi ^{2}(\alpha )}}$
and
$F(\alpha )={\sqrt {1+e^{2}\varphi ^{2}(\alpha )}}$.
After continuation to the complex plane they turned out to be doubly periodic and are known as Abel elliptic functions.
Jacobi elliptic functions are similarly obtained as inverse functions of elliptic integrals.
Jacobi considered the integral function
$\xi (x)=\int _{0}^{x}{\frac {dt}{\sqrt {(1-t^{2})(1-k^{2}t^{2})}}}$
and inverted it: $x=\operatorname {sn} (\xi )$. $\operatorname {sn} $ stands for sinus amplitudinis and is the name of the new function.[11] He then introduced the functions cosinus amplitudinis and delta amplitudinis, which are defined as follows:
$\operatorname {cn} (\xi ):={\sqrt {1-x^{2}}}$
$\operatorname {dn} (\xi ):={\sqrt {1-k^{2}x^{2}}}$.
Only by taking this step could Jacobi prove his general transformation formula for elliptic integrals in 1827.[12]
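The identities implicit in these definitions, $\operatorname {cn}^{2} = 1 - \operatorname {sn}^{2}$ and $\operatorname {dn}^{2} = 1 - k^{2}\operatorname {sn}^{2}$, are easy to verify numerically. The sketch below uses the mpmath library's ellipfun; note that mpmath parametrizes the functions by $m = k^{2}$, and the values of u and m here are arbitrary test values.

    from mpmath import ellipfun

    u, m = 0.7, 0.3   # arbitrary test point; m = k**2
    sn = ellipfun('sn', u, m)
    cn = ellipfun('cn', u, m)
    dn = ellipfun('dn', u, m)

    print(sn**2 + cn**2)        # 1.0, i.e. cn = sqrt(1 - x**2) with x = sn
    print(dn**2 + m * sn**2)    # 1.0, i.e. dn = sqrt(1 - k**2 * x**2)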
History
Shortly after the development of infinitesimal calculus the theory of elliptic functions was started by the Italian mathematician Giulio di Fagnano and the Swiss mathematician Leonhard Euler. When they tried to calculate the arc length of a lemniscate they encountered problems involving integrals that contained the square root of polynomials of degree 3 and 4.[13] It was clear that those so-called elliptic integrals could not be solved using elementary functions. Fagnano observed an algebraic relation between elliptic integrals, which he published in 1750.[13] Euler immediately generalized Fagnano's results and posed his algebraic addition theorem for elliptic integrals.[13]
Except for a comment by Landen,[14] his ideas were not pursued until 1786, when Legendre published his paper Mémoires sur les intégrations par arcs d'ellipse.[15] Legendre subsequently studied elliptic integrals and called them elliptic functions. He introduced a classification into three kinds, which was a crucial simplification of the rather complicated theory at that time. Other important works of Legendre are: Mémoire sur les transcendantes elliptiques (1792),[16] Exercices de calcul intégral (1811–1817),[17] and Traité des fonctions elliptiques (1825–1832).[18] Legendre's work was mostly left untouched by mathematicians until 1826.
Subsequently, Niels Henrik Abel and Carl Gustav Jacobi resumed the investigations and quickly discovered new results. At first they inverted the elliptic integral function. Following a suggestion of Jacobi in 1829 these inverse functions are now called elliptic functions. One of Jacobi's most important works is Fundamenta nova theoriae functionum ellipticarum, published in 1829.[19] The addition theorem Euler found was posed and proved in its general form by Abel in 1829. Note that in those days the theory of elliptic functions and the theory of doubly periodic functions were considered to be different theories. They were brought together by Briot and Bouquet in 1856.[20] Gauss had discovered many of the properties of elliptic functions 30 years earlier but never published anything on the subject.[21]
See also
• Elliptic integral
• Elliptic curve
• Modular group
• Theta function
References
1. Rolf Busam (2006), Funktionentheorie 1 (4th, corrected and expanded ed.), Berlin: Springer, p. 259, ISBN 978-3-540-32058-6
2. Rolf Busam (2006), Funktionentheorie 1 (4th, corrected and expanded ed.), Berlin: Springer, p. 258, ISBN 978-3-540-32058-6
3. Jeremy Gray (2015), The Real and the Complex: A History of Analysis in the 19th Century, Cham: Springer, pp. 118f, ISBN 978-3-319-23715-2
4. Rolf Busam (2006), Funktionentheorie 1 (4th, corrected and expanded ed.), Berlin: Springer, p. 260, ISBN 978-3-540-32058-6
5. Rolf Busam (2006), Funktionentheorie 1 (4th, corrected and expanded ed.), Berlin: Springer, p. 262, ISBN 978-3-540-32058-6
6. K. Chandrasekharan (1985), Elliptic Functions, Berlin: Springer-Verlag, p. 28, ISBN 0-387-15295-4
7. Rolf Busam (2006), Funktionentheorie 1 (4th, corrected and expanded ed.), Berlin: Springer, p. 275, ISBN 978-3-540-32058-6
8. Rolf Busam (2006), Funktionentheorie 1 (4th, corrected and expanded ed.), Berlin: Springer, p. 276, ISBN 978-3-540-32058-6
9. Jeremy Gray (2015), The Real and the Complex: A History of Analysis in the 19th Century, Cham: Springer, p. 74, ISBN 978-3-319-23715-2
10. Jeremy Gray (2015), The Real and the Complex: A History of Analysis in the 19th Century, Cham: Springer, p. 75, ISBN 978-3-319-23715-2
11. Jeremy Gray (2015), The Real and the Complex: A History of Analysis in the 19th Century, Cham: Springer, p. 82, ISBN 978-3-319-23715-2
12. Jeremy Gray (2015), The Real and the Complex: A History of Analysis in the 19th Century, Cham: Springer, p. 81, ISBN 978-3-319-23715-2
13. Jeremy Gray (2015), The Real and the Complex: A History of Analysis in the 19th Century, Cham: Springer, pp. 23f, ISBN 978-3-319-23715-2
14. John Landen: An Investigation of a general Theorem for finding the Length of any Arc of any Conic Hyperbola, by Means of Two Elliptic Arcs, with some other new and useful Theorems deduced therefrom. In: The Philosophical Transactions of the Royal Society of London 65 (1775), No. XXVI, pp. 283–289, JSTOR 106197.
15. Adrien-Marie Legendre: Mémoire sur les intégrations par arcs d'ellipse. In: Histoire de l'Académie royale des sciences Paris (1788), pp. 616–643. – Idem: Second mémoire sur les intégrations par arcs d'ellipse, et sur la comparaison de ces arcs. In: Histoire de l'Académie royale des sciences Paris (1788), pp. 644–683.
16. Adrien-Marie Legendre: Mémoire sur les transcendantes elliptiques, où l'on donne des méthodes faciles pour comparer et évaluer ces trancendantes, qui comprennent les arcs d'ellipse, et qui se rencontrent frèquemment dans les applications du calcul intégral. Du Pont & Firmin-Didot, Paris 1792. English translation: A Memoire on Elliptic Transcendentals. In: Thomas Leybourn: New Series of the Mathematical Repository. Vol. 2. Glendinning, London 1809, Part 3, pp. 1–34.
17. Adrien-Marie Legendre: Exercices de calcul intégral sur divers ordres de transcendantes et sur les quadratures. 3 volumes. Paris 1811–1817.
18. Adrien-Marie Legendre: Traité des fonctions elliptiques et des intégrales eulériennes, avec des tables pour en faciliter le calcul numérique. 3 volumes. Huzard-Courcier, Paris 1825–1832.
19. Carl Gustav Jacob Jacobi: Fundamenta nova theoriae functionum ellipticarum. Königsberg 1829.
20. Jeremy Gray (2015), The Real and the Complex: A History of Analysis in the 19th Century, Cham: Springer, p. 122, ISBN 978-3-319-23715-2
21. Jeremy Gray (2015), The Real and the Complex: A History of Analysis in the 19th Century, Cham: Springer, p. 96, ISBN 978-3-319-23715-2
Literature
• Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 16". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. pp. 567, 627. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253. See also chapter 18. (only considers the case of real invariants).
• N. I. Akhiezer, Elements of the Theory of Elliptic Functions, (1970) Moscow, translated into English as AMS Translations of Mathematical Monographs Volume 79 (1990) AMS, Rhode Island ISBN 0-8218-4532-2
• Tom M. Apostol, Modular Functions and Dirichlet Series in Number Theory, Springer-Verlag, New York, 1976. ISBN 0-387-97127-0 (See Chapter 1.)
• E. T. Whittaker and G. N. Watson. A course of modern analysis, Cambridge University Press, 1952
External links
• "Elliptic function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• MAA, Translation of Abel's paper on elliptic functions.
• Elliptic Functions and Elliptic Integrals on YouTube, lecture by William A. Schwalm (4 hours)
• Johansson, Fredrik (2018). "Numerical Evaluation of Elliptic Functions, Elliptic Integrals and Modular Forms". arXiv:1806.06725 [cs.NA].
| Wikipedia |
N dimensional
The dimensionality of an array specifies the number of indices that are required to uniquely specify one of its entries.
Consider a small gradebook dataset for three students, each with scores on two exams. This dataset contains 6 grade-values. It is almost immediately clear that storing these in a 1-dimensional array is not ideal:
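A minimal sketch of the flat layout. The scores here are hypothetical stand-ins, since the original table is not reproduced in this text.

    import numpy as np

    # Hypothetical scores for three students (Ashley, Brad, Cassie) on two
    # exams, flattened into a single shape-(6,) array.
    grades_1d = np.array([93.5, 97.0, 87.0, 92.0, 77.5, 81.0])
    grades_1d[2]   # 87.0 - Brad's Exam 1 score, but only if we remember the layout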
Suppose you have an \(N\)-dimensional array, and only provide \(j\) indices for the array; NumPy will automatically insert \(N-j\) trailing slices for you. In the case that \(N=5\) and \(j=3\), d5_array[0, 0, 0] is treated as d5_array[0, 0, 0, :, :]
NumPy provides tools for handling n-dimensional arrays (especially vectors and matrices); note that a matrix is actually a 2-dimensional array.
Two-dimensional arrays: an array keeps track of multiple pieces of information in linear order, a one-dimensional list. However, the data associated with certain systems (a digital image, a board game, etc.) lives in two dimensions.
A matrix is a two-dimensional data structure where numbers are arranged into rows and columns. NumPy is a package for scientific computing which has support for a powerful N-dimensional array object.

While no data has been lost, accessing this data using a single index is less than convenient; we want to be able to specify both the student and the exam when accessing a grade - it is natural to ascribe two dimensions to this data. Let's construct a 2D array containing these grades:
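Continuing the sketch above (the scores remain hypothetical):

    # Same scores, now given the natural two-dimensional layout:
    # axis-0 is the student, axis-1 is the exam.
    grades = np.array([[93.5, 97.0],   # Ashley
                       [87.0, 92.0],   # Brad
                       [77.5, 81.0]])  # Cassie
    grades.shape   # (3, 2)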
Examples of n-dimensional vectors
NumPy provides an assortment of functions that allow us to manipulate the way that an array's data can be accessed. These permit us to reshape an array, change its dimensionality, and swap the positions of its axes.
N-dimensional space: in mathematics, an n-dimensional space is a topological space whose dimension is n (where n is a fixed natural number). The archetypical example is n-dimensional Euclidean space.
In word embeddings, size is the number of dimensions (N) of the N-dimensional space that gensim Word2Vec maps the words onto. Bigger size values require more training data, but can lead to better (more accurate) models.
For simplicity, he is only a two-dimensional object as he is confined to lie in the plane of your computer screen. We've fixed his position and the direction his body is pointing. Nonetheless, just to specify the angles of his arms, legs, and head requires a vector in nine-dimensional space. (We'd need even more dimensions if we also wanted to specify his position or his cholesterol level.)
For display on a two-dimensional surface such as a screen, the 3D cube and 4D tesseract require projection. A segment of memory is inherently 1-dimensional, and there are many different schemes for arranging the items of an N-dimensional array in a 1-dimensional block of memory.
Thus, if we want to access Brad's (item-1 along axis-0) score for Exam 1 (item-0 along axis-1) we simply specify:
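Using the hypothetical grades array from above:

    grades[1, 0]   # 87.0 - Brad (index 1 on axis-0), Exam 1 (index 0 on axis-1)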
One-dimensional: length only. Two-dimensional: length and width only. Three-dimensional: length, width, and depth. As an illustration, a straight line is one-dimensional, a plane is two-dimensional, and a cube is three-dimensional.
This is not equivalent to a length-1 1D-array: np.array([15.2]). According to our definition of dimensionality, zero numbers are required to index into a 0-D array as it is unnecessary to provide an identifier for a standalone number. Thus you cannot index into a 0-D array.
What happens if we only supply one index to our array? It may be surprising that grades[0] does not throw an error since we are specifying only one index to access data from a 2-dimensional array. Instead, NumPy will return all of the exam scores for student-0 (Ashley):
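A quick check with the hypothetical array:

    grades[0]      # treated as grades[0, :]
    # array([93.5, 97. ]) - all of Ashley's exam scores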
World Web Math: Vector Calculus: N-Dimensional Geometry
We can also use slices to access subsequences of our data. Suppose we want the scores of all the students for Exam 2. We can slice from 0 through 3 along axis-0 (refer to the indexing diagram in the previous section) to include all the students, and specify index 1 on axis-1 to select Exam 2:
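Again with the hypothetical array:

    grades[0:3, 1]   # Exam 2 scores for all three students
    # array([97., 92., 81.])
    grades[:, 1]     # equivalent, using an "empty" slice along axis-0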
The first thing you should know about n-dimensional space is that it is absolutely nothing to worry about. You aren't going to be asked to visualize 17-dimensional space or anything freaky like that, because nobody can visualize anything higher than 3-dimensional space (many of us aren't even very good at that). And you can throw out any ideas you might have about the fourth dimension being time or love or what have you, because all it is is an extra number hanging around. To be specific:

Definition: A space is just a set where the elements are called points.

Definition: N-dimensional space (or $\mathbb{R}^n$ for short) is just the space where the points are n-tuplets of real numbers.

You will notice that we are in a sense working backwards: for three-dimensional space, we construct Cartesian coordinates to get a 3-tuple for every point; now, we forget about the middleman and simply define the point to be the 3-tuple. The origin, in any dimension, is just the n-tuplet $(0, 0, \ldots, 0)$.

What about vectors, you ask? Before, we defined them to be a magnitude and a direction and then showed how there is a one-to-one correspondence between them and points; now we again invert the order of things and define vectors to be points. Since points are tuples and we know how to add, subtract and scalar-multiply tuples, we know how to do all those things for vectors, too. It is also easy to extend the dot product to vectors in higher dimensions, via the algebraic definition. Just let

$(x_1, x_2, \ldots, x_n) \cdot (y_1, y_2, \ldots, y_n) = x_1 y_1 + x_2 y_2 + \cdots + x_n y_n$

Having a dot product around allows us to define the length of a vector, $|v| = \sqrt{v \cdot v}$, and the angle between two vectors: $\text{angle} = \cos^{-1}\!\left(v \cdot w / |v||w|\right)$.

There is no cross product in dimensions greater than 3. For one thing, in dimensions 4 or higher, there are infinitely many unit vectors orthogonal to any given two.

Lines and planes can also be found in higher dimensions, but there isn't often much reason to use them. Before, lines in two or three dimensions could be expressed as $l(t) = OP + tv$ for $P$ a point and $v$ a vector on the line; the same formula works for higher dimensions. The familiar property of having exactly one line run through two distinct points is maintained.

Planes in three dimensions live a double life: they are both two-dimensional flat surfaces and $(n-1)$-dimensional flat things. If you want a two-dimensional flat surface in n dimensions, you are best off using the parametric formula $S(r,s) = OP + rv + sw$. If, on the other hand, you want an $(n-1)$-dimensional flat thing, you are better off using the implicit formula $A_1 x_1 + A_2 x_2 + \cdots + A_n x_n = B$. These are usually called hyperplanes and are useful for approximating the graphs of functions. For example, functions from $\mathbb{R}$ to $\mathbb{R}$ have graphs in $\mathbb{R}^2$ which we approximate using hyperplanes of $\mathbb{R}^2$ (i.e., lines).

Exercises:
1. What is the distance between the points (1,2,3,4) and (-5,2,0,12)?
2. What is the angle between the vectors (1,0,2,0) and (-3,1,4,-5)?
3. Project the point (1,2,3,4,5) onto the vector (7,7,7,1,9).
4. Give a formula for the line going through (1,2,3,4,5,6) and (0,0,0,17,0,0).
5. Find the plane containing the three points (1,2,3,4), (2,7,2,7), and (-7,-5,-1,0).
6. The unit hypercube in four dimensions is described by the equations $0 \le x_i \le 1$ for $i = 1, \ldots, 4$.
Given this indexing scheme, only one integer is needed to specify a unique entry in the array. Similarly, only one slice is needed to uniquely specify a subsequence of entries in the array. For this reason, we say that this is a 1-dimensional array.
Geometrically, the determinant represents the volume of the $n$-dimensional parallelepiped spanned by the column or row vectors of the matrix. The vector product and the scalar product are the two ways of multiplying vectors.

Before proceeding further down the path of high-dimensional arrays, it is worth pausing on that simple gradebook dataset, where the desire to access the data along multiple dimensions is manifestly desirable.
Euclidean distance in n-dimensional space: in an example where there is only 1 variable describing each cell (or case), there is only a 1-dimensional space. In general, the Euclidean distance between two points $x$ and $y$ in $n$-dimensional space is $d(x,y) = \sqrt{\sum_{i=1}^{n}(x_i - y_i)^2}$.
You can think of axis-0 denoting which of the 2x2 "sheets" to select from. Then axis-1 specifies the row along the sheets, and axis-2 the column within the row.

Let's begin our discussion by constructing a simple ND-array containing three floating-point numbers:
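A minimal sketch (the values are hypothetical):

    simple_array = np.array([2.3, 0.1, -9.6])
    simple_array.ndim    # 1
    simple_array.shape   # (3,)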
Terminology: an $n$-dimensional hypersphere (or $n$-sphere) of radius $R$ is the set of points at distance $R$ from a fixed center (I'll place the center at the origin for simplicity).
Because grades has three entries along axis-0 and two entries along axis-1, it has a "shape" of (3, 2).
If one considers non-rigid objects, then the number of dimensions required to specify the configuration of the object can be quite high. As a simple example, consider this little guy below.

In an $n$-dimensional cube of side $R$, the main diagonal of the $(n-1)$-dimensional cube and another edge of length $R$ are the perpendicular legs of a right triangle whose hypotenuse is the main diagonal of the $n$-dimensional cube; iterating this construction gives a main diagonal of length $R\sqrt{n}$.
How many dimensions does it take to specify the position of a rigid object (for example, an airplane) in space? Naively, one would think that it would take three dimensions: one each to specify the $x$-coordinate, $y$-coordinate, and the $z$-coordinate of the object. It is correct that one needs only three dimensions to specify, for example, the center of the object. However, even if the center of a rigid object is specified, the object could also rotate. In fact, it can rotate in three different directions, such as the roll, pitch, and yaw of an airplane. Consequently, we need six dimensions to specify the position of a rigid object: three to specify the location of the center of the object, and three to specify the direction in which the object is pointing.

As with Python sequences, you can specify an "empty" slice to include all possible entries along an axis, by default: grades[:, 1] is equivalent to grades[0:3, 1], in this instance. More generally, withholding either the 'start' or 'stop' value in a slice will result in the use of the smallest or largest valid index, respectively.
Accessing Data Along Multiple Dimensions in an Array
In fact, N-dimensional matrices aren't really addressed by any of the standard C++ containers; so, to get a 4-dimensional matrix, one has to make an array of pointers pointing to an array of pointers, and so on (or store the data in one flat block and compute the index by hand).
Although accessing data along varying dimensions is ultimately all a matter of judicious bookkeeping (you could access all of this data from a 1-dimensional array, after all), NumPy's ability to provide users with an interface for accessing data along dimensions is incredibly useful. It affords us the ability to impose intuitive, abstract structure on our data.
The first row of numbers gives the position of the indices 0…3 in the array; the second row gives the corresponding negative indices. The slice from $i$ to $j$ returns an array containing all numbers between the edges labeled $i$ and $j$, respectively.
One basic format for serializing an N-dimensional array is: a magic number, then the size in each dimension (dimension 0, dimension 1, ..., dimension N), then the data.
Accessing multi-dimensional arrays: multidimensional array elements are accessed using the row and column indices. Let's see an example of a two-dimensional array with dimensions [3][3]; below is the code to access its elements:
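A minimal sketch (the values are hypothetical, and the example is given in Python/NumPy rather than the C-style original):

    # A 3x3 array built from nested lists.
    m = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])
    m[0][2]   # 3
    m[0, 2]   # 3 - the idiomatic NumPy spelling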
Area-volume formulas for N-dimensional spheres and balls
N-dimensional Gaussian and multivariate normal densities: the $n$-variate normal density with mean $\mu = (\mu_1, \mu_2, \ldots, \mu_n)$ and covariance matrix $\Sigma$ is $f(x) = (2\pi)^{-n/2}\,|\Sigma|^{-1/2}\exp\!\left(-\tfrac{1}{2}(x-\mu)^{\mathsf{T}}\Sigma^{-1}(x-\mu)\right)$.

Multidimensional arrays can be described as "arrays of arrays". For example, a bidimensional array can be imagined as a two-dimensional table made of elements, all of them of a same uniform data type.
I was trying to get a better intuition for the curse of dimensionality in machine learning, and needed to know the volume of a unit n-sphere. In very high-dimensional spaces, Euclidean distances tend to become inflated (this is an instance of the so-called curse of dimensionality); running a dimensionality reduction algorithm such as PCA beforehand can mitigate this.

N-dimensional population structures: a population is a collection of organisms of the same species located within a prescribed area.
The reshape function allows you to change the dimensionality and axis-layout of a given array. This adjusts the indexing interface used to access the array's underlying data, as was discussed earlier in this module. Let's take a shape-(6,) array and reshape it to a shape-(2, 3) array:
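A minimal sketch (the array contents are arbitrary):

    x = np.arange(6)    # array([0, 1, 2, 3, 4, 5])
    x.reshape(2, 3)
    # array([[0, 1, 2],
    #        [3, 4, 5]])
    x.reshape(2, -1)    # the -1 cues NumPy to infer the remaining dimension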
We run into high-dimensional vectors even in fields like neuroscience. Let's say we stick 100 electrodes in the head of our friend Fred, the lab rat, to simultaneously record the activity of 100 of his neurons. Right away, you can see we'll need a 100-dimensional vector to describe Fred's neuronal activity at any point in time. It gets worse, though, if you think about recording Fred's neural activity over an extended period of time. Let's say we record from Fred's neurons while he's working for ten minutes (or 600 seconds). If we sample his neural activity 1000 times a second, that means we will take $1000 \times 600 = 600{,}000$ samples during those ten minutes. Multiplying that by the 100 neurons, we see we need a 60,000,000-dimensional vector to represent Fred's neural activity during those ten minutes. Is it harder to think about Fred's head than to think about a 60,000,000-dimensional vector? But it is much preferable to put electrodes in Fred's head than in the head of children suffering from severe epilepsy, as still needs to be done in some cases. Won't it be great if we can develop scientific means to avoid the latter?
Although NumPy does formally recognize the concept of dimensionality precisely in the way that it is discussed here, its documentation refers to an individual dimension of an array as an axis. Thus you will see "axes" (pronounced "aks-ēz") used in place of "dimensions"; however, they mean the same thing.

Dask arrays scale NumPy workflows, enabling multi-dimensional data analysis in earth science, satellite imagery, genomics, biomedical applications, and machine learning algorithms.

In two dimensions, the area of the disk enclosed within a circle of radius R is $\pi R^2$. The purpose of this material is to derive the corresponding formulas for the volumes of n-dimensional balls.
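For reference, the closed form is $V_n(R) = \pi^{n/2} R^n / \Gamma(n/2 + 1)$. A minimal numerical check (the function name is ours):

    from math import pi, gamma

    def ball_volume(n, R=1.0):
        # V_n(R) = pi**(n/2) * R**n / Gamma(n/2 + 1)
        return pi ** (n / 2) * R ** n / gamma(n / 2 + 1)

    ball_volume(2)   # 3.14159... = pi * R**2
    ball_volume(3)   # 4.18879... = (4/3) * pi * R**3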
Nykamp DQ, "Examples of n-dimensional vectors." From Math Insight. http://mathinsight.org/n_dimensional_vector_examples
The softmax function takes an N-dimensional vector of real numbers and transforms it into a vector of real numbers in the range (0, 1) which add up to 1: $\operatorname{softmax}(x)_i = e^{x_i} / \sum_{j=1}^{N} e^{x_j}$. As the name suggests, softmax is a soft version of the max function.
Stick figure position: it takes a vector in nine-dimensional space to specify the angles of this stick figure's arms, legs, and head. We denote the configuration vector specifying these angles by $\vec{\theta} = (\theta_1, \theta_2, \ldots, \theta_9)$. The components $\theta_1$ and $\theta_2$ specify the angles of his left arm; $\theta_3$ and $\theta_4$ specify the angles of his right arm; $\theta_5$, $\theta_6$, $\theta_7$, and $\theta_8$ specify the angles of his left and right legs; and, finally, $\theta_9$ specifies the angle of his head.

Thus far, we have discussed some rules for accessing data in arrays, all of which fall into the category that is designated "basic indexing" by the NumPy documentation. We will discuss the details of basic indexing and of "advanced indexing" in full in a later section. Note, however, that all of the indexing/slicing reviewed here produces a "view" of the original array; that is, no data is copied when you index into an array using integer indices and/or slices. Recall that slicing lists and tuples does produce copies of the data.

Ecologists make use of n-dimensional hypervolumes to define ecosystem states and assess how much they shift after environmental changes have occurred.
The normalizer scales each sample by dividing each value by the sample's magnitude in $n$-dimensional space, for $n$ the number of features. Say your features were x, y and z Cartesian coordinates; each scaled value would be the original coordinate divided by $\sqrt{x^2 + y^2 + z^2}$.

Note the value of using the negative index is that it will always provide you with the latest exam score - you need not check how many exams the students have taken.
Dimensional analysis is a method of reducing the number of variables required to describe a given physical situation by making use of the information implied by the units of the physical quantities involved.

At first, it may seem that going beyond three dimensions is an exercise in pointless mathematical abstraction. One might think that if we want to describe something in our physical world, certainly three dimensions (or possibly four dimensions if we want to think about time) will be sufficient. It turns out that this assumption is far from the truth. To describe even the simplest objects, we will typically need more than three dimensions. In fact, in many applications of mathematics, it is challenging to develop mathematical models that can realistically describe a physical system and yet keep the number of dimensions from becoming incredibly large.

This is because NumPy will automatically insert trailing slices for you if you don't provide as many indices as there are dimensions for your array: grades[0] was treated as grades[0, :]. Let's build up some intuition for arrays with a dimensionality higher than 2. The following code creates a 3-dimensional array:
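A minimal sketch (the values are hypothetical). Note how d3_array[1], which is shorthand for d3_array[1, :, :], selects both rows and both columns of sheet-1:

    # A shape-(2, 2, 2) array: two "sheets", each with 2 rows and 2 columns.
    d3_array = np.array([[[0, 1],
                          [2, 3]],
                         [[4, 5],
                          [6, 7]]])
    d3_array[1]          # same as d3_array[1, :, :]
    # array([[4, 5],
    #        [6, 7]]) - all of sheet-1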
For example, the two-dimensional triangle and the three-dimensional tetrahedron can be seen as specific instances of the general $n$-dimensional simplex. There are also other notions of dimension, such as fractal dimensions (for example, the Hausdorff dimension).
Indexing means referring to an element of the array. The preceding examples used indexing into both single-dimensional and 2-dimensional NumPy arrays.
A square has two dimensions, a cube has three, and a tesseract has four. For some calculations, time may be added as a third dimension to two-dimensional (2D) space or a fourth dimension to three-dimensional (3D) space.

You can also supply a "step" value to the slice: grades[::-1, :] returns the array of grades with the student-axis flipped (reverse-alphabetical order).
High-dimensional spaces (spaces with a dimensionality substantially greater than 3) have properties that are substantially different from normal common-sense intuitions of distance and volume.

Similar to Python's sequences, we use 0-based indices and slicing to access the content of an array. However, we must specify an index/slice for each dimension of an array.

Two-dimensional arrays can be passed as parameters to a function, and they are passed by reference. When declaring a two-dimensional array as a formal parameter, we can omit the size of the first dimension.
Note that the output of grades[:, :1] can look somewhat funny: because the axis-1 slice only includes one column of numbers, the shape of the resulting array is (3, 1), and 0 is thus the only valid (non-negative) index for axis-1, since there is only one column to specify in the array.

Keeping track of the meaning of an array's various dimensions can quickly become unwieldy when working with real datasets. xarray is a Python library that provides functionality comparable to NumPy, but allows users to provide explicit labels for an array's dimensions; that is, you can name each dimension. Using an xarray to select Brad's scores could look like grades.sel(student='Brad'), for instance. This is a valuable library to look into at your leisure.

As indicated above, negative indices are valid too and are quite useful. If we want to access the scores of the latest exam for all of the students, you can specify:
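Continuing with the hypothetical grades array:

    grades[:, -1]    # the latest exam score for every student
    # array([97., 92., 81.])
    grades[:, :1]    # note the shape-(3, 1) result
    # array([[93.5],
    #        [87. ],
    #        [77.5]])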
Homogeneous coordinates are a way of representing N-dimensional coordinates with N+1 numbers. To make 2D homogeneous coordinates, we simply add an additional variable, w, to the existing coordinates.

The dimensioned types are closely modeled on dictionaries but support n-dimensional keys; they therefore allow you to express a mapping between a multi-variable key and other viewable objects.

HDF supports n-dimensional datasets, and each element in the dataset may itself be a complex object.
This definition of dimensionality is common far beyond NumPy; one must use three numbers to uniquely specify a point in physical space, which is why it is said that space consists of three dimensions.

For n-dimensional Voronoi diagrams, Qhull performs the computations (see www.qhull.org); $d$-dimensional Delaunay triangulation computations are part of CGAL.
Dot product formula for n-dimensional space problems: the dot product of the vectors $a = (a_1, a_2, \ldots, a_n)$ and $b = (b_1, b_2, \ldots, b_n)$ can be found by $a \cdot b = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n$.

ND4J: N-Dimensional Arrays for Java. ND4J and ND4S are scientific computing libraries for the JVM. They are meant to be used in production environments, which means routines are designed to run fast with minimal RAM requirements.
In four dimensions, one can think of "stacks of sheets with rows and columns", where axis-0 selects the stack of sheets you are working with, axis-1 chooses the sheet, axis-2 chooses the row, and axis-3 chooses the column. Extrapolating to higher dimensions ("collections of stacks of sheets …") continues in the same tedious fashion.

For all straightforward applications of reshape, NumPy does not actually create a new copy of an array's data when performing a reshape operation. Instead, the original array and the reshaped array reference the same underlying data. The reshaped array simply provides a new index-interface for accessing said data, and is thus referred to as a "view" of the original array (more on these "views" in a later section).